Suppose that I am asked to determine the maximum and the minimum of a continuous and differentiable function $$ f(x, y, z)$$ with a constraint $$ g(x, y, z)=0.$$
I have some trouble understanding the method, especially in cases like this where it is hard to picture what the function $f$ looks like.
So far I have proven that when $f$ is restricted to those points for which $g(x,y,z)=0$, if it has a local maximum/minimum, the gradient of $f$ is parallel to the gradient of $g$ at that point, i.e. $$ \nabla f-\lambda \nabla g=0. $$ My question is: is it enough just to consider the critical points, i.e. those $(x, y, z)$ for which the gradient of Lagrange function $$ L \left(x, y, z, \lambda \right)=f(x, y, z)-\lambda g(x, y, z) $$ is zero, in order to find the maximum and minimum of $f$? Many examples of optimization exercises I have seen only do that.
Yes, this is enough, provided everything is differentiable and nice: in particular, $\nabla g \neq 0$ on the constraint set (otherwise the multiplier condition can fail at a genuine extremum), and an extremum must actually exist, which is guaranteed for example when the set $g(x,y,z)=0$ is compact.
That's because if we take the gradient of the function $L$, the first three components together are $$\nabla f(x,y,z)-\lambda\nabla g(x,y,z),$$ and the fourth component (i.e. $\frac{\partial L}{\partial\lambda}$) is $-g(x,y,z)$. So the gradient of $L$ vanishes iff both the Lagrange condition $\nabla f = \lambda \nabla g$ and the constraint $g(x,y,z)=0$ hold.
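To make this concrete, here is a small sketch using sympy on a hypothetical example of my choosing (not from the question): $f(x,y,z)=x+y+z$ constrained to the unit sphere $g(x,y,z)=x^2+y^2+z^2-1=0$. Setting the full gradient of $L$ to zero and solving gives exactly the candidate points, and evaluating $f$ there picks out the minimum and maximum.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)

# Hypothetical example: optimize f on the unit sphere g = 0
f = x + y + z
g = x**2 + y**2 + z**2 - 1

# Lagrange function and its gradient (all four partial derivatives)
L = f - lam * g
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]

# Critical points of L: these are the only candidates for extrema
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

# Evaluate f at each candidate; the smallest is the min, the largest the max
vals = sorted(f.subs(s) for s in sols)
# vals = [-sqrt(3), sqrt(3)]
```

Here the sphere is compact and $\nabla g = (2x,2y,2z)$ never vanishes on it, so the hypotheses above hold and the two critical points really are the minimum and the maximum.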