I want to solve the following optimization problem
$$\begin{array}{ll} \text{maximize} & \displaystyle\sum_{i=1}^n \log_{2}(1+a_i x_i)\\ \text{subject to} & \displaystyle\sum_{i=1}^n (c_i - x_i) a_i = b\\ & 0 \leq x_i \leq c_i\end{array}$$
where $a_1 \geq a_2 \geq \dots \geq a_n$, $b \in \mathbb R^+$ and $c_1 = c_2 = \dots = c_n$ are given.
I tried to solve this problem using Lagrange multipliers
$$\mathcal{L}(x,\lambda,\mu,\nu)=\sum_{i=1}^n\log_{2}(1+a_ix_i) + \lambda \left( \sum_{i=1}^n(c_i-x_i)a_i - b \right) - \sum_{i=1}^n\mu_i (x_i-c_i) + \sum_{i=1}^n\nu_ix_i$$
We need to have $\frac{\partial\mathcal{L}}{\partial x_i}=0$ for each $i$; therefore
$$\frac{a_i}{\ln(2)\,(1+a_ix_i)}-\lambda a_i-\mu_{i}+\nu_{i}=0$$
In particular, my problem always has
$$\sum_{i=1}^{n-1} a_i c_i < b < \sum_{i=1}^n a_i c_i$$
How can I obtain the optimal values of $x_i$?
I think your best approach is duality. First, note that $\log_2$ and $\ln$ differ only by the positive multiplicative constant $1/\ln(2)$, so optimizing with $\ln$ yields the same optimal solutions, and I will use $\ln$ instead of $\log_2$ in my derivations. Second, I will treat the problem as a minimization, and therefore write $-\ln$ instead of $\ln$.
Giving a Lagrange multiplier only to the equality constraint, you obtain the Lagrangian $$ \begin{aligned} L(x, \lambda) &= -\sum_{i=1}^n \ln(1+a_i x_i) + \lambda \left[ \sum_{i=1}^n (c_i - x_i) a_i - b \right] \\ &= \sum_{i=1}^n \left[-\ln(1+a_i x_i) - \lambda a_i x_i \right] + \lambda \Bigl[\underbrace{-b + \sum_{i=1}^n a_i c_i}_{C} \Bigr] \end{aligned} $$ Since the Lagrangian and the constraints $0 \leq x_i \leq c_i$ are both separable in $x_i$, the dual objective is: $$ q(\lambda) = \sum_{i=1}^n \min_{x_i} \biggl\{ -\ln(1+a_i x_i) - \lambda a_i x_i :~ 0 \leq x_i \leq c_i \biggr\} + \lambda C $$ Note that each of the elements in the sum above is a minimization of a convex function of the form $-\ln(1+ax) -bx$ over an interval, and it is quite easy to find a closed-form solution (if you need help, please say so). From this solution you will also obtain a formula for recovering the optimal $x_i$ from the optimal $\lambda$.
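Even without the closed form, each inner minimization is a one-dimensional convex problem over an interval, so it can be solved numerically. Here is a minimal Python sketch (the helper names `g` and `min_on_interval` are my own, not from the problem) that minimizes each separable term by ternary search:

```python
import math

def g(x, a, lam):
    """Per-coordinate term of the Lagrangian: -ln(1 + a*x) - lam*a*x."""
    return -math.log(1.0 + a * x) - lam * a * x

def min_on_interval(a, c, lam, iters=200):
    """Minimize the convex 1-D function x -> g(x, a, lam) over [0, c]
    by ternary search (valid because g is convex in x for any lam)."""
    lo, hi = 0.0, c
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1, a, lam) < g(m2, a, lam):
            hi = m2  # minimizer lies in [lo, m2]
        else:
            lo = m1  # minimizer lies in [m1, hi]
    return 0.5 * (lo + hi)
```

For example, with $a_i = 2$, $c_i = 1$, $\lambda = -1/2$, the minimizer is the interior stationary point $x_i = 1/2$; for $\lambda \geq 0$ the term is decreasing and the minimizer is the right endpoint $c_i$.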
Now, your dual problem is the one-dimensional optimization problem $\max_{\lambda} q(\lambda)$, which can be easily solved by a numerical method, such as the Golden Section search. After finding the optimal $\lambda^*$ you can recover $x^*$.
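The Golden Section search itself is a few lines; here is a generic sketch for maximizing a unimodal function over an interval (the test function below is an illustrative assumption, not part of the problem):

```python
import math

def golden_section_max(f, lo, hi, tol=1e-10):
    """Maximize a unimodal function f on [lo, hi] by Golden Section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:
            # maximum lies in [a, d]; old c becomes the new d
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:
            # maximum lies in [c, b]; old d becomes the new c
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

Each iteration shrinks the bracketing interval by the factor $1/\varphi \approx 0.618$ while reusing one of the two interior function evaluations, so only one new evaluation of $q(\lambda)$ is needed per step.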
P.S. If you want a truly reliable method for approximating the optimum in terms of the objective value, you can compute an approximate $x$ after every iteration of the Golden Section search, and stop when the primal-dual gap is below some threshold.
Update
Since there are many questions about computing $q(\lambda)$, here are the details. First, I will denote by $g_i(x_i, \lambda)$ the objective inside the minimum. That is, we solve $$ \min_{x_i} \biggl\{g_i(x_i, \lambda) = -\ln(1+a_i x_i) - \lambda a_i x_i :~ 0 \leq x_i \leq c_i \biggr\} $$ The minimum is attained either at a stationary point in the interior of the interval $I = [0, c_i]$ or at one of its endpoints. First, note that if $\lambda \geq 0$ then $g_i$ is a decreasing function, and in this case there are no stationary points. Thus, for any stationary point we must have $\lambda < 0$.
To find a stationary point, we solve: $$ \frac{d g_i}{d x_i} = -\frac{a_i}{1+a_i x_i} - \lambda a_i = 0. $$ Since $a_i > 0$ we can divide both sides by $a_i$, and obtain the equation $$ \frac{1}{1+a_i x_i} = -\lambda $$ Since $\lambda \neq 0$, we take the inverse on both sides, multiply by $a_i$, and obtain $$ 1+a_i x_i = -\frac{1}{\lambda}, $$ which is equivalent to $$ a_i x_i = -1-\frac{1}{\lambda}, $$ or $$ x_i = -\frac{1+\lambda}{a_i \lambda}. $$ Using the fact that $\lambda < 0$, we conclude that the point above is inside the interior of $I$ if and only if $$ -1 < \lambda < -\frac{1}{1 + c_i a_i}. $$ In that case the expression for $x_i$ above is the minimizer, and the minimum is $$ g_i\left(-\frac{1+\lambda}{a_i \lambda}, \lambda\right) = \ln(-\lambda) + \lambda + 1 $$ Otherwise, either the left endpoint $x_i=0$ or the right endpoint $x_i=c_i$ is the minimizer. To summarize, $$ q_i(\lambda) = \min_{x_i} \biggl\{g_i(x_i, \lambda) :~ 0 \leq x_i \leq c_i \biggr\} = \begin{cases} \ln(-\lambda) + \lambda + 1 & -1 < \lambda < -\frac{1}{1 + c_i a_i} \\ \min(0, -\ln(1+a_i c_i)-\lambda a_i c_i) & \text{otherwise} \end{cases} $$ The dual objective is $$ q(\lambda) = \sum_{i=1}^n q_i(\lambda) + \lambda C $$ It is possible to simplify the expressions above, but doing so is not required in order to write code that evaluates $q(\lambda)$ for a numerical solver.
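Translating the case analysis above into code, a Python sketch of $q(\lambda)$ and the recovery of $x$ from $\lambda$ might look like this (the helper names are my own):

```python
import math

def q_i(lam, a, c):
    """Per-coordinate dual term: min over x in [0, c] of -ln(1+a*x) - lam*a*x."""
    if -1.0 < lam < -1.0 / (1.0 + a * c):
        # interior stationary point x = -(1+lam)/(a*lam)
        return math.log(-lam) + lam + 1.0
    # otherwise the minimum is at one of the endpoints x = 0 or x = c
    return min(0.0, -math.log(1.0 + a * c) - lam * a * c)

def q(lam, a, c, b):
    """Dual objective q(lambda) = sum_i q_i(lambda) + lambda * C."""
    C = sum(ai * ci for ai, ci in zip(a, c)) - b
    return sum(q_i(lam, ai, ci) for ai, ci in zip(a, c)) + lam * C

def x_of_lambda(lam, a, c):
    """Recover the per-coordinate minimizer x_i for a given lambda."""
    xs = []
    for ai, ci in zip(a, c):
        if -1.0 < lam < -1.0 / (1.0 + ai * ci):
            xs.append(-(1.0 + lam) / (ai * lam))  # interior stationary point
        elif -math.log(1.0 + ai * ci) - lam * ai * ci >= 0.0:
            xs.append(0.0)   # left endpoint achieves g_i = 0
        else:
            xs.append(ci)    # right endpoint achieves the smaller value
    return xs
```

Feeding `lambda lam: q(lam, a, c, b)` to a one-dimensional maximizer such as the Golden Section search then gives the optimal $\lambda^*$, from which `x_of_lambda` recovers $x^*$.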