My book optimises a class of functions using the Lagrange method. From calculus I remember that we had to check the boundary when using Lagrange, because it only gives a local max, but this is not mentioned in the book. From this link http://planetmath.org/localminimumofconvexfunctionisnecessarilyglobal it seems the max might be global after all, since we are using a concave function. But since the problem involves more than a single variable and we use Lagrange, I am not sure the argument carries over.
We want to maximize
$F(\mathbf{x})=\sum_{i=1}^n k_i\,u(x_i)$,
where each $k_i$ is a real number, and
$F: \mathbb{R}^n\rightarrow \mathbb{R}$, $u: \mathbb{R}\rightarrow\mathbb{R}$,
constrained to:
$g(\mathbf{x})=c$, where $g: \mathbb{R}^n\rightarrow \mathbb{R}$ is always linear, of the form $g(\mathbf{x})=\sum_{i=1}^n t_i x_i$.
It is also mentioned that $u$ is concave and strictly increasing.
Now, when we use the Lagrange method on this problem and find a local max, will it in fact be a global max? The book never checks whether the value found via Lagrange is a maximum. We have found a local max on the constraint set, but does the concavity of $u$ somehow ensure that this value is a global max?
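For reference, here is a sketch of the standard Lagrangian setup for this problem (the book's notation may differ):

$$\mathcal{L}(\mathbf{x},\lambda)=\sum_{i=1}^n k_i\,u(x_i)-\lambda\Big(\sum_{i=1}^n t_i x_i - c\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial x_i}=k_i\,u'(x_i)-\lambda t_i=0,\quad i=1,\dots,n.$$

These first-order conditions only identify stationary points; whether such a point is a global max is exactly the question above.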
PS: This problem is from mathematical finance (portfolio optimisation).
Yes:
A linear combination of concave functions with nonnegative coefficients is also concave (so this step needs $k_i \ge 0$; for arbitrary real $k_i$ the sum need not be concave).
A concave function restricted to an affine subspace (e.g. the hyperplane $g(\mathbf{x}) = c$) is concave.
A local maximum of a concave function is a global maximum.
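You can also check this numerically. Below is a minimal sketch with the concrete (assumed, not from the question) choices $u = \log$ and hypothetical values for $k_i$, $t_i$, $c$: the Lagrange stationary point is computed in closed form from $k_i\,u'(x_i) = \lambda t_i$, then compared against many random feasible points, none of which exceeds it.

```python
import math
import random

# Hypothetical data for illustration: u(x) = log(x) (concave, strictly
# increasing), positive weights k_i, constraint coefficients t_i, budget c.
k = [2.0, 1.0, 3.0]
t = [1.0, 2.0, 1.5]
c = 10.0

def F(x):
    # Objective: sum of k_i * u(x_i) with u = log
    return sum(ki * math.log(xi) for ki, xi in zip(k, x))

# Lagrange conditions k_i * u'(x_i) = lam * t_i with u'(x) = 1/x give
# x_i = k_i / (lam * t_i); plugging into sum(t_i x_i) = c forces
# lam = sum(k) / c.
lam = sum(k) / c
x_star = [ki / (lam * ti) for ki, ti in zip(k, t)]
assert abs(sum(ti * xi for ti, xi in zip(t, x_star)) - c) < 1e-9

# Sample random feasible points (x_1, x_2 free, x_3 fixed by the constraint)
# and verify none beats the stationary point: the local max is global.
best = F(x_star)
rng = random.Random(0)
for _ in range(10_000):
    x1 = rng.uniform(0.01, c / t[0])
    x2 = rng.uniform(0.01, max(0.02, (c - t[0] * x1) / t[1]))
    rem = c - t[0] * x1 - t[1] * x2
    if rem <= 0:
        continue  # infeasible sample (x_3 would be non-positive)
    x3 = rem / t[2]
    assert F([x1, x2, x3]) <= best + 1e-9

print("stationary point:", [round(v, 4) for v in x_star])
```

Note that the closed-form solution exists here only because $u = \log$ was chosen; for a general concave $u$ the stationarity system would typically be solved numerically, but the same global-optimality conclusion applies.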