I want to solve the following problem: Maximize $\sum_{i=1}^n\log(1+\lambda_i^2)$ subject to $\lambda_i >0$ and $\sum_{i=1}^n\lambda_i = M$. I was wondering how I could cast it as a convex problem.
One thought that came to mind was to treat $\lambda_i^2$ as the variables instead of $\lambda_i$. To rewrite the sum constraint, the only bound I could find was via the Cauchy-Schwarz inequality: $\sum_{i=1}^n\lambda_i^2 \geq \frac{M^2}{n}$. (Additionally, since $\lambda_i > 0$, we always have $\sum_{i=1}^n\lambda_i^2 \leq M^2$.)
My guess (or hope) is that the solution is $\lambda_i = \frac{M}{n}$ for all $i$. Can anyone verify this?
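As a quick brute-force sanity check on that guess (plain Python; the choice of $n=2$ and the particular values of $M$ are mine), one can just evaluate the objective at the equal split and at a skewed split. For a small budget the skewed split scores higher, while for a large budget the equal split wins, so any proof would have to depend on $M$:

```python
import math

def f(lams):
    # objective: sum of log(1 + lambda_i^2)
    return sum(math.log(1 + l * l) for l in lams)

# n = 2, small budget M = 1: compare the equal split with a skewed split
equal_small = f([0.5, 0.5])       # lambda_i = M/n
skewed_small = f([0.99, 0.01])    # almost all mass on one coordinate
print(equal_small, skewed_small)  # the skewed split wins here

# n = 2, large budget M = 10: now the equal split wins
equal_large = f([5.0, 5.0])
skewed_large = f([9.9, 0.1])
print(equal_large, skewed_large)
```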

EDIT: Sorry, I was too quick to jump on this problem. I have marked the step below where the proof fails and have not been able to repair it; apologies.
~~~~~~~~
I haven't done any convex analysis, but have you considered using Lagrange Multipliers? Let $$f(\lambda_1,\ldots,\lambda_n) := \sum_{i=1}^n \log(1+\lambda_i^2)$$ and $$g(\lambda_1,\ldots, \lambda_n) := \sum_{i=1}^n \lambda_i.$$
Then we must have $\nabla f = \mu \nabla g$. Calculating $$ \frac{\partial f}{\partial \lambda_k} = \frac{2\lambda_k}{1+\lambda_k^2} $$ and $$ \frac{\partial g}{\partial \lambda_k} = 1. $$
Thus we must have $$ \frac{2\lambda_k}{1+\lambda_k^2} = \mu\cdot 1 = \mu $$ for all $k$.
(This step is unjustified: the map $t \mapsto \frac{2t}{1+t^2}$ is not injective on $(0,\infty)$, since it takes the same value at $t$ and at $1/t$, so the condition $\frac{2\lambda_k}{1+\lambda_k^2}=\mu$ for all $k$ does not force the $\lambda_k$ to be equal.) It would follow that at the maximum we have $$ \lambda_1=\cdots = \lambda_n. $$ Since they are all the same, let $\lambda:=\lambda_1$ denote the common value. The constraint then becomes $$ n\lambda = M \iff \lambda = \frac{M}{n}, $$ so you would be correct in your initial analysis. You would also have to check that this is in fact a maximum and not a minimum; I will leave that to you.
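To see the gap in the flagged step numerically (plain Python; the function name `h` is mine), note that the stationarity map $t \mapsto \frac{2t}{1+t^2}$ is symmetric under $t \mapsto 1/t$, so every $\mu \in (0,1)$ is attained at two distinct positive points:

```python
def h(t):
    # the common value each coordinate must take at a stationary point
    return 2 * t / (1 + t * t)

# h is symmetric under t -> 1/t, so the same mu is hit at two distinct points:
print(h(0.5), h(2.0))  # both 0.8: lambda_k = 0.5 and lambda_k = 2 give the same mu
```

So a stationary point may mix the values $t$ and $1/t$ across coordinates, and the conclusion $\lambda_1=\cdots=\lambda_n$ does not follow from the multiplier condition alone.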