I have an optimization problem of the following form:
\begin{equation} \begin{aligned} &\max_{x_1,\dots,x_k} &&f_1(x_1)+f_2(x_2)+\dots+f_k(x_k)\\ &\text{subject to} &&x_1 + \dots + x_k = M\\ &&& x_1,\dots,x_k > 0 \end{aligned} \end{equation}
All $f_i$ are strictly concave, continuous, real-valued functions on the feasible region. I wonder whether we can view this optimization problem as a function of $M$; in other words, as a continuous mapping $H$ that sends a given budget $M$ to the optimal value.
If so, can we apply notions such as the derivative/gradient to this function $H$?
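To get a feel for how $H$ behaves, here is a minimal numerical sketch for $k = 2$, using $f_1 = f_2 = \sqrt{\cdot}$ as an assumed example (not from the question): $H(M)$ is approximated by a grid search over the interior splits $x_1 = y$, $x_2 = M - y$. For this choice the optimum is $x_1 = x_2 = M/2$, so $H(M) = \sqrt{2M}$, which makes the approximation easy to check.

```python
import numpy as np

def H(M, f1=np.sqrt, f2=np.sqrt, n=100_000):
    """Approximate the optimal value H(M) for k = 2 by a grid search
    over interior splits x1 = y, x2 = M - y with 0 < y < M."""
    y = np.linspace(M / n, M - M / n, n)
    return np.max(f1(y) + f2(M - y))

# With f1 = f2 = sqrt, the exact optimal value is sqrt(2*M),
# so e.g. H(2) should be close to 2 and H(8) close to 4.
```

Plotting `H` over a range of `M` values is one quick way to see that it looks continuous and concave in $M$, before attempting anything analytic about its derivative.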
Dynamic programming often applies to problems with this structure. Let $g(k,M)$ denote the maximum value; it satisfies the recursion $$g(k,M) = \max_{0<y<M} \{f_k(y) + g(k-1,M-y)\},$$ with base case $g(1,M) = f_1(M)$.
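The recursion above can be sketched numerically by discretizing $[0, M]$ into a uniform grid and filling in $g(k, \cdot)$ one function at a time. This is an approximation on grid points, not an exact solver; the choice $f_i = \sqrt{\cdot}$ in the comment is again an assumed example whose optimum ($x_i = M/k$) is known in closed form.

```python
import numpy as np

def dp_optimal_value(fs, M, n=300):
    """Approximate g(k, M) on a uniform grid via the recursion
    g(k, m) = max_{0 < y < m} { f_k(y) + g(k-1, m - y) }."""
    grid = np.linspace(0.0, M, n + 1)
    g = fs[0](grid)                      # base case: g(1, x) = f_1(x)
    for f in fs[1:]:
        new_g = np.full(n + 1, -np.inf)
        for m in range(2, n + 1):
            # candidate splits y = grid[j] with both parts strictly positive
            j = np.arange(1, m)
            new_g[m] = np.max(f(grid[j]) + g[m - j])
        g = new_g
    return g[n]

# Example: with fs = [np.sqrt] * 3 and M = 3, the optimum is
# x1 = x2 = x3 = 1, so the value should be close to 3.
```

Each pass costs $O(n^2)$, so the whole table is $O(k n^2)$; refining the grid trades running time for accuracy.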