Generalization of the 2nd derivative test to multiple dimensions when the Hessian is inconclusive


We all know the 2nd derivative test in its original form: if $f'(x)=0$, then the point is a max if $f''(x)<0$ and a min if $f''(x)>0$. We also know the one-variable generalization for the case where this test is inconclusive: (I) https://en.wikipedia.org/wiki/Derivative_test#Higher-order_derivative_test

We also know the generalization for multi variable:(II) https://en.wikipedia.org/wiki/Second_partial_derivative_test#Functions_of_many_variables

The question is whether there is a common generalization of the two: in the multivariable case, if none of the conditions are met, can we keep taking higher derivatives until we find one that is not zero, and then use the conditions in (II) to decide whether the point is a max or a min?

So prove or disprove: one can take the $n$-th derivative and use the conditions in (II) to decide whether an extremum is a max or a min; or refute this with a counterexample.
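To make the question concrete, here is a small sympy check (my own illustrative example, not from the question): both $x^4+y^4$ and $x^4-y^4$ have zero gradient and a zero Hessian at the origin, so test (II) is inconclusive for both, yet the first has a strict local minimum there and the second a saddle. Any higher-order test must distinguish such cases.

```python
# Two functions with identical (vanishing) first- and second-order data at the
# origin, but different local behavior: second-order information alone cannot
# decide between them.
import sympy as sp

x, y = sp.symbols("x y")
at0 = {x: 0, y: 0}
for f in (x**4 + y**4, x**4 - y**4):
    grad = [sp.diff(f, v).subs(at0) for v in (x, y)]
    hess = sp.hessian(f, (x, y)).subs(at0)
    print(f, grad, hess.is_zero_matrix)  # gradient [0, 0], Hessian = 0 for both
```

Evaluating the functions near the origin confirms the difference: $x^4+y^4$ is positive at every nearby nonzero point, while $x^4-y^4$ takes both signs.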


There are 2 answers below.

BEST ANSWER

There cannot be such an algorithm in general (as long as P ≠ NP). Suppose there were; call it A. The number of conditions to check is constant, matrix multiplication is polynomial, and computing the $n$-th derivative is polynomial, so A runs in polynomial time. Using A, we could then decide in polynomial time whether a given point is a local minimum, placing that problem in P. But according to https://arxiv.org/abs/1012.0729 and other papers, deciding whether a point is a local minimum is NP-hard. So one of the following holds:

  1. you prove P=NP or
  2. A doesn't exist.
Another answer

Sure! The general form of Taylor's theorem can be written as

$$\begin{aligned} f(x)= &f(x_0) +∇f(x_0)⋅(x-x_0) +\tfrac{1}{2}∇^2 f(x_0)⋅(x-x_0)^{⊗2}+⋯ \\&⋯+\tfrac{1}{k!}∇^k f(x_0)⋅(x-x_0)^{⊗k} +\tfrac{1}{(k+1)!}R_{k+1}(x)⋅(x-x_0)^{⊗ (k+1)} \end{aligned}$$

In the case when $f\colon H→ℝ$ is scalar, we may even express it as

$$\begin{aligned} f(x)= &⟨f(x_0)∣(x-x_0)^{⊗0}⟩_{H^{⊗0}} \\&+⟨∇f(x_0)∣(x-x_0)⟩_{H^{⊗1}} \\&+\tfrac{1}{2}⟨∇^2 f(x_0)∣(x-x_0)^{⊗2}⟩_{H^{⊗2}} \\&+⋯ \\&+\tfrac{1}{k!}⟨∇^k f(x_0)∣(x-x_0)^{⊗k}⟩_{H^{⊗k}} \\&+\tfrac{1}{(k+1)!}⟨R_{k+1}(x)∣(x-x_0)^{⊗ (k+1)}⟩_{H^{⊗(k+1)}} \end{aligned}$$

Here we use the induced inner product on the tensor product of Hilbert spaces. For example, the Frobenius inner product for $m×n$ matrices is the induced inner product on $ℝ^m ⊗ ℝ^n$. Convince yourself that the standard Hessian term $xᵀ∇^2f(x_0)x$ is identical to $⟨∇^2f(x_0), x^{⊗2}⟩_{ℝ^n⊗ℝ^n}$.
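The claimed identity is easy to verify numerically; here is a quick check (my own example, using a random symmetric matrix in place of a Hessian) that $xᵀHx$ equals the Frobenius inner product $⟨H, x⊗x⟩$ on $ℝ^n⊗ℝ^n$:

```python
# Verify x^T H x == <H, x ⊗ x>_F for a random symmetric H and random x.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
H = H + H.T                              # symmetrize, like a Hessian
x = rng.standard_normal(4)

quadratic_form = x @ H @ x               # standard quadratic form x^T H x
frobenius = np.sum(H * np.outer(x, x))   # Frobenius inner product <H, x ⊗ x>
print(np.isclose(quadratic_form, frobenius))
```

Both expressions expand to $\sum_{i,j} H_{ij} x_i x_j$, which is why they agree exactly (up to floating-point rounding).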

We say a tensor $T∈H^{⊗2n}$ is positive definite if and only if $⟨T∣x^{⊗2n}⟩_{H^{⊗2n}}≥0$ for all $x$, with equality if and only if $x=0$.

Theorem Assume $f\colon ℝ^n →ℝ$ is $2n$ times continuously differentiable, $∇^kf(x_0)=0$ for $k=1…(2n-1)$, and $∇^{2n}f(x_0)$ is positive definite. Then $f$ has a strict local minimum at $x_0$.

Proof: It is more or less a replication of the regular proof. By Taylor's formula:

$$\begin{aligned} f(x) - f(x_0)= \tfrac{1}{(2n)!}∇^{2n} f(x_0)⋅(x-x_0)^{⊗2n} +\tfrac{1}{(2n+1)!}R_{2n+1}(x)⋅(x-x_0)^{⊗ (2n+1)} \end{aligned}$$

Since $∇^{2n} f(x_0)$ is positive definite, the first term is strictly positive for $x≠x_0$; by compactness of the unit sphere it is bounded below by $c\,\|x-x_0\|^{2n}$ for some $c>0$, while the remainder term is $o(\|x-x_0\|^{2n})$ as $x→x_0$, so the first term dominates. Hence there is an ε-ball $U_ε(x_0)$ around $x_0$ on which the RHS is positive for all $x≠x_0$, i.e. $f(x)>f(x_0)$ locally. $\square$
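As a sanity check of the theorem (my own worked example), take $f(x,y)=x^4+y^4$ with $n=2$: all derivatives of order 1–3 vanish at the origin, and contracting the fourth-derivative tensor with $h^{⊗4}$ gives $⟨∇^4f(0)∣h^{⊗4}⟩ = 24(h_1^4+h_2^4)$, which is positive for every $h≠0$, so the theorem certifies the strict local minimum at $0$ that we can also see directly.

```python
# Check the hypotheses of the theorem for f(x, y) = x^4 + y^4 at the origin:
# derivatives of order 1..3 vanish, and the 4th-derivative tensor contracted
# with h⊗h⊗h⊗h is 24*h1^4 + 24*h2^4 > 0 for h != 0.
import itertools
import sympy as sp

x, y = sp.symbols("x y")
f = x**4 + y**4
vars_ = (x, y)
at0 = {x: 0, y: 0}

# Orders 1..3: every partial derivative vanishes at the origin.
for order in (1, 2, 3):
    assert all(sp.diff(f, *idx).subs(at0) == 0
               for idx in itertools.product(vars_, repeat=order))

# Contract the 4th-derivative tensor with h^{⊗4}.
h1, h2 = sp.symbols("h1 h2")
h = {x: h1, y: h2}
contraction = sum(sp.diff(f, *idx).subs(at0) * sp.Mul(*[h[v] for v in idx])
                  for idx in itertools.product(vars_, repeat=4))
print(sp.expand(contraction))  # 24*h1**4 + 24*h2**4
```

The contraction is a sum of even powers with positive coefficients, i.e. the tensor is positive definite in the sense defined above, so the theorem applies.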