Given $n \in \mathbb{N}$, $e_i \in \mathbb{N}_0$, $l_i \in \mathbb{N}$, $b_i \in \{0,1,2\}$ for $1 \le i \le n$.
$$ \sum_{i=1}^n{\min\left(e_i + \frac{b_i \cdot d}{4}, l_i\right)} = n \cdot d $$
How can I determine $d$ programmatically, to two decimal places?
The left-hand sum is at most $L = \sum_{i=1}^n{l_i}$, so $d \le \frac{L}{n}$. A brute-force approach would be to step $d$ from $0$ up to $\frac{L}{n}$ in increments of my desired (finite) precision, measure the error between the two sides of the equation, and keep the best result.
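In code, the brute-force scan looks roughly like this (a sketch; the function name and the fixed-grid approach are my own choices):

```python
def solve_brute(e, l, b, decimals=2):
    """Brute force: try every d in [0, L/n] on a grid of 10**-decimals,
    and keep the d minimizing the mismatch between the two sides."""
    n = len(e)
    L = sum(l)
    step = 10 ** -decimals
    best_d, best_err = 0.0, float("inf")
    for k in range(int(L / n / step) + 1):
        d = k * step
        # Left-hand side: sum of min(e_i + b_i*d/4, l_i).
        lhs = sum(min(ei + bi * d / 4, li) for ei, li, bi in zip(e, l, b))
        err = abs(lhs - n * d)
        if err < best_err:
            best_d, best_err = d, err
    return round(best_d, decimals)
```

For example, with $n=1$, $e_1=2$, $l_1=10$, $b_1=2$, the equation $\min(2 + \frac{d}{2}, 10) = d$ has the solution $d = 4$, which the scan finds on the grid.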
A better approach would be to use an optimization library, but I am not sure what kind of optimization problem this is, or which libraries and algorithms are suited to it.
But preferably: Is there an analytical solution? Or at least, a more elegant approach?
Subtract $L$ from both sides, and you get $$\sum_{i=1}^n\min\left(e_i-l_i + \frac{b_i \cdot d}{4}, 0\right) = n \cdot d - L$$ Now break the indices $1, \dots, n$ up into two sets: $B$, the indices with $b_i = 0$ (whose terms do not depend on $d$), and $A$, the rest, for which we assume for now that the minimum is attained by its first argument, i.e. $e_i - l_i + \frac{b_i \cdot d}{4} \le 0$ (an assumption that has to be checked afterwards).
Let $K = L + \sum_{i\in B}\min(e_i-l_i, 0)$. Each $i \in A$ then contributes $e_i - l_i + \frac{b_i \cdot d}{4}$ to the sum, so $$\sum_{i\in A} \frac{b_i \cdot d}{4} = n \cdot d - K - \sum_{i\in A}(e_i-l_i)$$ $$d = \dfrac{K + \sum_{i\in A}(e_i-l_i)}{n - \sum_{i\in A}\frac{b_i}{4}}$$ The denominator is always positive: since every $b_i \le 2$, we have $\sum_{i\in A}\frac{b_i}{4} \le \frac{n}{2} < n$.
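As a quick sanity check with made-up numbers: take $n=1$, $e_1=2$, $l_1=10$, $b_1=2$. Then $B$ is empty, so $K = L = 10$, and $$d = \frac{10 + (2-10)}{1 - \frac{2}{4}} = \frac{2}{\frac12} = 4,$$ which indeed satisfies the original equation: $\min\left(2 + \frac{2\cdot 4}{4},\, 10\right) = 4 = 1 \cdot 4$.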
This suggests the strategy:
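One concrete reading of that strategy, sketched in code (my own formulation, not necessarily the only one): guess which terms are still below their cap $l_i$, solve the resulting linear equation for $d$, and check that the guess is consistent. The candidate guesses come from the breakpoints $d_i = \frac{4(l_i - e_i)}{b_i}$ at which a term with $b_i \neq 0$ saturates.

```python
def solve_exact(e, l, b):
    """Solve sum(min(e_i + b_i*d/4, l_i)) = n*d for d >= 0.

    On any interval of d where the set S of "unsaturated" indices
    (those with e_i + b_i*d/4 < l_i) is fixed, the equation is linear:
        sum_{i in S}(e_i + b_i*d/4) + sum_{i not in S} l_i = n*d.
    Try every such interval, delimited by the breakpoints
    d_i = 4*(l_i - e_i)/b_i, and accept a solution when it is
    consistent with the set S it was derived from.
    """
    n = len(e)
    # Breakpoints where a term switches from linear to saturated.
    points = sorted({4 * (li - ei) / bi for ei, li, bi in zip(e, l, b)
                     if bi != 0 and li > ei})
    bounds = [0.0] + points + [float("inf")]
    for lo, hi in zip(bounds, bounds[1:]):
        probe = lo if hi == float("inf") else (lo + hi) / 2
        # S: indices still below their cap on this interval.
        S = [i for i in range(n) if e[i] + b[i] * probe / 4 < l[i]]
        num = sum(e[i] for i in S) + sum(l[i] for i in range(n) if i not in S)
        den = n - sum(b[i] for i in S) / 4   # >= n/2, since b_i <= 2
        d = num / den
        if lo <= d <= hi:   # consistent with the assumed set S
            return round(d, 2)
    return None
```

With only $O(n)$ breakpoints, this checks at most $n+1$ candidate intervals, each in linear time, so it is exact up to rounding rather than a grid scan.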