The optimization problem is \begin{equation} \begin{aligned} \max_{\boldsymbol{\tau}}\quad F(\boldsymbol{\tau})&=\sum\limits_{m=1}^{M} \frac{\left( \tau_{m-1} e^{-\frac{\tau_{m-1}}{\lambda}} -\tau_{m} e^{-\frac{\tau_{m}}{\lambda}} \right)^2}{e^{-\frac{\tau_{m-1}}{\lambda}} -e^{-\frac{\tau_{m}}{\lambda}}}\\ &\;\text{s.t.}\quad\tau_{0}<\tau_{1}<\cdots<\tau_{M}, \end{aligned} \end{equation} where \begin{align} \lambda&\geq1,\\ \tau_0&=0,\\ \tau_{M}&=\infty,\\ M&\geq2. \end{align}
I have verified that the objective function may be non-concave: when $M=2$, the second derivative of $F(\tau)$ is not always negative.
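For what it's worth, the non-concavity for $M=2$ is easy to see numerically. With $\tau_0=0$, $\tau_2=\infty$ and $\lambda=1$, the two terms of the objective combine to $F(\tau_1)=\frac{\tau_1^2 e^{-\tau_1}}{1-e^{-\tau_1}}=\frac{\tau_1^2}{e^{\tau_1}-1}$, and a finite-difference check (a throwaway sketch; the function names are mine) shows the second derivative changes sign:

```python
import math

def F(t):
    # Objective for M = 2, lambda = 1: with tau_0 = 0 and tau_2 = infinity
    # the boundary terms vanish and the sum collapses to t^2 / (e^t - 1).
    return t * t / math.expm1(t)

def second_diff(f, t, h=1e-3):
    # Central finite-difference approximation of f''(t).
    return (f(t - h) - 2.0 * f(t) + f(t + h)) / (h * h)

print(second_diff(F, 0.5))  # negative: F is concave here
print(second_diff(F, 6.0))  # positive: F is convex here
```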
Now I want to know whether there is still a closed-form solution to this optimization problem.
Let's note first of all that the problem has some scaling invariances. The first one is obvious: $F_\lambda(\tau)=\lambda^2F_1(\lambda^{-1}\tau)$, so it is enough to consider the case $\lambda=1$.
Now denote $x_k=e^{-\tau_k}$. Then the problem becomes to maximize $$ \sum_{m=1}^{M}\frac{(x_{m-1}\log x_{m-1}-x_{m}\log x_{m})^2 }{x_{m-1}-x_m} $$ subject to $1=x_0>x_1>\dots>x_M=0$.
This reverse enumeration is somewhat inconvenient, so we will put $y_m=x_{M-m}$ and arrive at a bit more natural maximization of $$ \sum_{m=1}^{M}\frac{(y_{m}\log y_{m}-y_{m-1}\log y_{m-1})^2 }{y_m-y_{m-1}} $$ subject to $0=y_0<y_1<\dots<y_M=1$.
Now note that we can consider the same functional on $[0,a]$ instead of $[0,1]$ and, if we denote the corresponding points $ay_m$ (so the $y_m$ run over $[0,1]$ as before) and put $t=\log a$, we'll get $$ a\sum_{m=1}^{M}\frac{[(y_{m}\log y_{m}-y_{m-1}\log y_{m-1})+t(y_m-y_{m-1})]^2 }{y_m-y_{m-1}} $$ to maximize.
Note, however, that if you open the brackets in the numerator (keeping the parentheses), the terms involving $t$ telescope: the cross terms sum to $2t(y_M\log y_M-y_0\log y_0)=0$ and the quadratic terms sum to $t^2(y_M-y_0)=t^2$, so their total is independent of the point choice, and we'll be left with exactly the problem we initially had on $[0,1]$.
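A quick numerical sanity check of this scaling identity (a throwaway sketch; `functional` and `xlogx` are my names): scaling a partition of $[0,1]$ by $a$ multiplies the value by $a$ and adds $a\,t^2$, independently of where the interior points sit.

```python
import math
import random

def xlogx(v):
    # Convention 0 * log(0) = 0 (the limit value).
    return v * math.log(v) if v > 0.0 else 0.0

def functional(pts):
    # sum_m (y_m log y_m - y_{m-1} log y_{m-1})^2 / (y_m - y_{m-1})
    return sum((xlogx(b) - xlogx(a)) ** 2 / (b - a)
               for a, b in zip(pts, pts[1:]))

random.seed(0)
y = [0.0] + sorted(random.random() for _ in range(4)) + [1.0]
a = 2.7
t = math.log(a)
lhs = functional([a * v for v in y])  # same functional on [0, a]
rhs = a * (functional(y) + t * t)    # predicted by the telescoping
print(lhs, rhs)                      # equal up to rounding
```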
This suggests the strategy of recursively defining an increasing sequence of points $Y_0=0, Y_1=1, Y_2, Y_3,\dots$ such that for all $m\ge 1$ the point $Y=Y_m$ maximizes $$ \frac{(Y\log Y-Y_{m-1}\log Y_{m-1})^2}{Y-Y_{m-1}}+\frac{(Y_{m+1}\log Y_{m+1}-Y\log Y)^2}{Y_{m+1}-Y}\,. $$ If we show that $Y_{m+1}$ is uniquely determined by $Y_{m-1}$ and $Y_m$ through this condition, we will be able to argue that $(Y_0,\dots,Y_M)$ is the unique critical point for the problem on the interval $[0,Y_M]$ and, therefore, a maximizer there, after which we can transfer it back to $[0,1]$ by putting $y_m=\frac{Y_m}{Y_M}$ (and then returning to $x$ and $\tau$ in an obvious way).
Differentiating in $Y$ at $Y=Y_m$, we get the equation $$ 2\frac{(Y_{m+1}\log Y_{m+1}-Y_m\log Y_m)}{Y_{m+1}-Y_m}(1+\log Y_m)- \left[\frac{Y_{m+1}\log Y_{m+1}-Y_m\log Y_m}{Y_{m+1}-Y_m}\right]^2 \\ = 2\frac{(Y_{m-1}\log Y_{m-1}-Y_m\log Y_m)}{Y_{m-1}-Y_m}(1+\log Y_m)- \left[\frac{Y_{m-1}\log Y_{m-1}-Y_m\log Y_m}{Y_{m-1}-Y_m}\right]^2\, $$ which is quadratic with respect to the ratio $R=\frac{Y_{m+1}\log Y_{m+1}-Y_m\log Y_m}{Y_{m+1}-Y_m}$.
One obvious (and useless) root is $R=r_m=\frac{Y_{m-1}\log Y_{m-1}-Y_m\log Y_m}{Y_{m-1}-Y_m}$ (i.e., $Y_{m+1}=Y_{m-1}$, which is excluded by the monotonicity condition), while the other, larger, root is $$ R=R_m=2(1+\log Y_m)-\frac{Y_{m-1}\log Y_{m-1}-Y_m\log Y_m}{Y_{m-1}-Y_m}\,. $$ Note that the function $Y\mapsto Y\log Y$ is strictly convex and grows faster than linearly, so the equation $$ \frac{Y\log Y-Y_m\log Y_m}{Y-Y_m}=R_m $$ indeed has a unique solution on $(Y_m,\infty)$. One can find that solution by Newton's method in under 10 iterations, provided one starts with a reasonable initial guess; asymptotic analysis shows that $(1+\frac 3m)Y_{m}$ is good enough. So you can get the exact (well, up to machine precision) answer almost instantaneously.
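For concreteness, here is a short implementation of this recursion (my own sketch; the function names are mine, and I use the convention $0\log 0=0$):

```python
import math

def xlogx(y):
    # Convention 0 * log(0) = 0 (the limit value).
    return y * math.log(y) if y > 0.0 else 0.0

def chord_slope(a, b):
    # Slope of the chord of Y -> Y*log(Y) between a and b.
    return (xlogx(b) - xlogx(a)) / (b - a)

def next_point(y_prev, y_cur, m, tol=1e-14):
    # Given Y_{m-1} and Y_m, find Y_{m+1} > Y_m whose chord slope equals
    # R_m = 2*(1 + log Y_m) - chord_slope(Y_{m-1}, Y_m),
    # by Newton's method started from the guess (1 + 3/m) * Y_m.
    R = 2.0 * (1.0 + math.log(y_cur)) - chord_slope(y_prev, y_cur)
    Y = (1.0 + 3.0 / m) * y_cur
    for _ in range(60):
        h = xlogx(Y) - xlogx(y_cur) - R * (Y - y_cur)
        dh = 1.0 + math.log(Y) - R
        step = h / dh
        Y -= step
        if abs(step) < tol * Y:
            break
    return Y

def optimal_points(M):
    # Build Y_0 = 0, Y_1 = 1, ..., Y_M, then rescale to y_m = Y_m / Y_M.
    Y = [0.0, 1.0]
    for m in range(1, M):
        Y.append(next_point(Y[m - 1], Y[m], m))
    return [v / Y[M] for v in Y]

def objective(y):
    # sum_m (y_m log y_m - y_{m-1} log y_{m-1})^2 / (y_m - y_{m-1})
    return sum((xlogx(b) - xlogx(a)) ** 2 / (b - a)
               for a, b in zip(y, y[1:]))

# Recover the tau_m of the original problem (lambda = 1):
# tau_m = -log(x_m) with x_m = y_{M-m}; tau_0 = 0, tau_M = infinity.
M = 10
y = optimal_points(M)
tau = [-math.log(y[M - m]) if m < M else math.inf for m in range(M + 1)]
print("maximum of F:", objective(y))
print("tau:", tau)
```

For $M=2$ this reproduces the one-variable answer: the optimal $\tau_1\approx1.594$ solves $\tau=2-2e^{-\tau}$, the critical-point equation of $\tau^2/(e^\tau-1)$.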
If you are interested in super-large $M$, for which machine precision fails, the value of the maximum is asymptotically $1-\frac 9{4M^2}+O(M^{-3})$ and the ratio $\frac{Y_m}{m^3}$ tends to a finite limit, so you can just extrapolate.
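Both asymptotic claims can be checked numerically with the recursion from the answer, restated compactly (again my own sketch; the loose tolerances below reflect the $O(M^{-3})$ and lower-order corrections):

```python
import math

def xlogx(v):
    # Convention 0 * log(0) = 0.
    return v * math.log(v) if v > 0.0 else 0.0

def slope(a, b):
    return (xlogx(b) - xlogx(a)) / (b - a)

def points(M):
    # Y_0 = 0, Y_1 = 1, then Newton's method for the chord-slope
    # equation defining Y_{m+1}, started from (1 + 3/m) * Y_m.
    Y = [0.0, 1.0]
    for m in range(1, M):
        R = 2.0 * (1.0 + math.log(Y[m])) - slope(Y[m - 1], Y[m])
        Z = (1.0 + 3.0 / m) * Y[m]
        for _ in range(60):
            step = (xlogx(Z) - xlogx(Y[m]) - R * (Z - Y[m])) / (1.0 + math.log(Z) - R)
            Z -= step
            if abs(step) < 1e-13 * Z:
                break
        Y.append(Z)
    return Y

Y = points(100)
ys = [v / Y[-1] for v in Y]  # rescale to [0, 1]
F = sum((xlogx(b) - xlogx(a)) ** 2 / (b - a) for a, b in zip(ys, ys[1:]))
print(100**2 * (1.0 - F))  # should be near 9/4
print(Y[100] / Y[50])      # cubic growth of Y_m: near 2^3 = 8
```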
Feel free to ask questions if anything is unclear :-).