I have a question about the general procedures to find the max/min points for multivariable functions, would really help if somebody could please clarify my doubts.
For a single-variable function it's pretty straightforward. We use the FOC to find critical points satisfying f'(x)=0, and then use the SOC to decide whether each one is a max or a min by checking the sign of f''(x): if f''(x)>0, the function is concave up there, hence a local minimum, and if f''(x)<0, the function is concave down, hence a local maximum.
Now when extending this idea to multivariable functions, we'd still first use the FOC to find all critical points, which may or may not be actual max/min points; but when we subsequently apply the SOC, what exactly do we check? Do I check whether the Hessian matrix of f is positive/negative definite, or do I check the sign of every second-order partial derivative of f (Fxx, Fyy, Fxy, Fyx, etc.) at those critical points? Or are these two mechanisms effectively the same?
Thanks in advance!
Checking the signs of the individual second-order partials is definitely not the way to go. For example,
$$ \left( \begin {matrix} 2 & 2 \\ 2 & 1 \end {matrix} \right)$$
is indefinite, although its entries all have the same sign. There are some shortcuts in two dimensions (the usual second-derivative test with the determinant of the Hessian), but in general you have to determine whether the Hessian is positive definite, negative definite, or indefinite. Given a critical point $x_0$ of a $C^2$ function, the fact that definiteness of the Hessian is a sufficient condition can be seen from the Lagrange form of the Taylor remainder:
$$f(x_0 + h) - f(x_0) = \underbrace{ \langle \nabla{f}|_{x_0}, h \rangle }_{\text{This term is 0}} + \frac12 \langle H|_{x^*}h, h\rangle$$
where $H|_{x^*}$ is the Hessian evaluated at some point $x^*$ on the segment between $x_0$ and $x_0 + h$. Since $f$ is $C^2$, if the Hessian is definite at $x_0$, it remains definite (with the same sign) at $x^*$ as long as $h$ is small enough. So a positive definite Hessian forces $f(x_0 + h) - f(x_0) > 0$ for all sufficiently small $h \neq 0$, i.e. a local minimum, and a negative definite Hessian likewise gives a local maximum.
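If you want to sanity-check this numerically, here is a small sketch using the eigenvalue characterization of definiteness (a symmetric matrix is positive definite iff all its eigenvalues are positive, negative definite iff all are negative, indefinite if the signs are mixed). The helper `classify` and the cubic example function are my own illustration, not anything from the question:

```python
import numpy as np

def classify(H, tol=1e-12):
    """Classify a symmetric Hessian by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(H)  # eigenvalues of a symmetric matrix, ascending
    if np.all(eig > tol):
        return "positive definite (local min)"
    if np.all(eig < -tol):
        return "negative definite (local max)"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "indefinite (saddle point)"
    return "inconclusive (semidefinite)"

# The matrix above: all entries positive, yet eigenvalues have mixed signs.
A = np.array([[2.0, 2.0],
              [2.0, 1.0]])
print(classify(A))  # indefinite (saddle point)

# Example: f(x, y) = x^3 - 3x + y^2 has critical points (1, 0) and (-1, 0),
# with Hessian [[6x, 0], [0, 2]].
H_at_plus1 = np.array([[6.0, 0.0], [0.0, 2.0]])    # Hessian at (1, 0)
H_at_minus1 = np.array([[-6.0, 0.0], [0.0, 2.0]])  # Hessian at (-1, 0)
print(classify(H_at_plus1))   # positive definite (local min)
print(classify(H_at_minus1))  # indefinite (saddle point)
```

Note that `classify` deliberately refuses to answer in the semidefinite case: there, as in the single-variable case f''(x)=0, the second-order test is inconclusive.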