Proof of the bordered Hessian test


Let $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^k$, $1 \leq k < n$, be functions of class $C^2$, with $g = (g_1, \dots, g_k)$. For a regular value $c = (c_1, \dots, c_k)$ of $g$, let $M = g^{-1}(c)$, which is a submanifold of dimension $n-k$ of $\mathbb{R}^n$. Suppose further that $0 \in M$ and the tangent space of $M$ at $0$ is the subspace $\mathbb{R}^{n-k} \times 0$.

If $0$ is a critical point of $F := f\vert_M : M \to \mathbb{R}$, then there exist scalars $\lambda_1, \dots, \lambda_k$ such that

$$\nabla f(0) = \sum_{i=1}^k \lambda_i \nabla g_i(0).$$

This is what the Lagrange Multiplier Theorem says. Now let $L : \mathbb{R}^n \to \mathbb{R}$, the Lagrangian, be given by

$$L(x) = f(x) - \sum_{i=1}^k \lambda_i (g_i(x) - c_i).$$

Notice that

$$\nabla L (x) = \nabla f(x) - \sum_{i=1}^k \lambda_i \nabla g_i(x),$$

so $\nabla L(0) = 0$ and $0$ is a critical point of $L$.
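As a concrete sanity check (a hypothetical example, not part of the proof), take $n=2$, $k=1$, $f(x,y) = x^2 + \sin y$, $g(x,y) = y - x^2$ and $c = 0$, so that $M$ is the parabola $y = x^2$, $0 \in M$, and the tangent space at $0$ is the $x$-axis. Here $\nabla f(0) = (0,1) = \nabla g(0)$, so $\lambda_1 = 1$, and a finite-difference gradient confirms that $0$ is a critical point of $L$:

```python
import numpy as np

# Hypothetical example: f(x, y) = x^2 + sin(y), g(x, y) = y - x^2, c = 0.
# Then grad f(0) = (0, 1) = 1 * grad g(0), so lambda_1 = 1.
f = lambda x, y: x**2 + np.sin(y)
g = lambda x, y: y - x**2
lam = 1.0
L = lambda x, y: f(x, y) - lam * (g(x, y) - 0.0)

def num_grad(h, p, eps=1e-6):
    """Central-difference gradient of a function h of two variables at p."""
    x, y = p
    return np.array([
        (h(x + eps, y) - h(x - eps, y)) / (2 * eps),
        (h(x, y + eps) - h(x, y - eps)) / (2 * eps),
    ])

print(num_grad(L, (0.0, 0.0)))  # ~ (0, 0): the origin is a critical point of L
```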

For $x \in M$ close to the origin, notice that $L(x) = f(x)$, since $g_i(x) = c_i$ for each $i$. Writing a Taylor expansion for $L$ gives:

$$f(x) = L(x) = L(0) + q(x) + r(x) = f(0) + q(x) + r(x),$$

where $q : \mathbb{R}^n \to \mathbb{R}$ is the quadratic form given by

$$q(x) = \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 L}{\partial x_i \partial x_j}(0) x_i x_j$$

and

$$\lim_{x \to 0} \frac{r(x)}{\vert x \vert^2} = 0.$$
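To see the remainder estimate in a hypothetical example (not part of the proof), take $f(x,y) = x^2 + \sin y$, $g(x,y) = y - x^2$, $c = 0$, $\lambda_1 = 1$: then $L(x,y) = 2x^2 + \sin y - y$, the Hessian of $L$ at $0$ is $\operatorname{diag}(4,0)$, so $q(x,y) = 2x^2$ and $r(x,y) = \sin y - y = O(|y|^3)$. A quick numerical check that $r(x)/|x|^2 \to 0$:

```python
import numpy as np

# Hypothetical example: L(x, y) = 2x^2 + sin(y) - y, with L(0) = 0 and
# Hessian diag(4, 0) at the origin, so q(x, y) = 2x^2 and r = L - L(0) - q.
q = lambda x, y: 2 * x**2
r = lambda x, y: np.sin(y) - y  # the second-order Taylor remainder of L

for t in [1e-1, 1e-2, 1e-3]:
    print(t, r(t, t) / (t**2 + t**2))  # the ratio tends to 0 with t
```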

I want to prove that if $q$ is positive definite on $\mathbb{R}^{n-k} \times 0$, then $0$ is a local minimum of $F$. The following argument is due to C. H. Edwards (Theorem 8.9, page 154 of his book "Advanced Calculus of Several Variables").

It suffices to find $\delta > 0$ such that

$$0 < \vert x \vert < \delta \text{ and } x \in M \implies \frac{q(x) + r(x)}{\vert x \vert^2} > 0.$$

To do this, let

$$m = \inf \{ q(v) : v \in \mathbb{R}^{n-k} \times 0, \vert v \vert = 1 \}.$$

Since $q$ is positive definite on $\mathbb{R}^{n-k} \times 0$ by hypothesis and the unit sphere there is compact, the infimum is attained and $m$ is a positive number. Noting that

$$\frac{q(x) + r(x)}{\vert x \vert^2} = q \left( \frac{x}{\vert x \vert} \right) + \frac{r(x)}{\vert x \vert^2}$$

we let $\delta > 0$ be so small that $0 < \vert x \vert < \delta$ implies

$$\left\vert \frac{r(x)}{\vert x \vert^2} \right\vert < \frac{m}{2}$$

and also so small that $0 < \vert x \vert < \delta$ and $x \in M$ imply that $\frac{x}{\vert x \vert}$ is sufficiently near to the unit sphere in $\mathbb{R}^{n-k} \times 0$ that

$$q \left( \frac{x}{\vert x \vert} \right) > \frac{m}{2}.$$

Then, with these choices, $0 < \vert x \vert < \delta$ and $x \in M$ imply $f(x) - f(0) = q(x) + r(x) > 0$, so $0$ is a local minimum for $F$.
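For the hypothetical example $f(x,y) = x^2 + \sin y$, $g(x,y) = y - x^2$, $c = 0$ (so $\lambda_1 = 1$ and $q(x,y) = 2x^2$), the unit sphere in $\mathbb{R}^{1} \times 0$ is just $\{(\pm 1, 0)\}$, giving $m = 2$, and both conditions in the choice of $\delta$ can be observed numerically at points $x = (t, t^2)$ of $M$:

```python
import numpy as np

# Hypothetical example: q(x, y) = 2x^2; on M = {y = x^2}, f agrees with L.
q = lambda v: 2 * v[0]**2
f = lambda x, y: x**2 + np.sin(y)
m = min(q(np.array([s, 0.0])) for s in (1.0, -1.0))  # inf over the sphere {(+-1, 0)}

for t in [0.3, 0.1, 0.01]:
    x = np.array([t, t**2])                       # a point of M approaching 0
    u = x / np.linalg.norm(x)                     # x/|x|, close to (1, 0)
    ratio = (f(*x) - f(0.0, 0.0)) / np.dot(x, x)  # (q(x) + r(x)) / |x|^2
    print(t, q(u) > m / 2, ratio > 0)             # both True near the origin
```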

My question is: why can we choose $\delta > 0$ so that the last condition (the one on $q$) is satisfied?

Answer

We know by the definition of $m$ that $q(x/|x|)\ge m$ when $x\in\Bbb R^{n-k}\times\{0\}$, $x\ne 0$. Since $q$ is a polynomial, it is uniformly continuous on any bounded set; so given $\epsilon>0$, there is a $\delta>0$ such that whenever $y$ is within distance $\delta$ of the unit sphere $S$ in $\Bbb R^{n-k}\times\{0\}$, $q(y)$ is within $\epsilon$ of the value of $q$ at the point of $S$ closest to $y$. That means $q(y)>m-\epsilon$ for every such $y$. Now just choose $\epsilon\le m/2$. (That $x/|x|$ does approach $S$ as $x\to 0$ in $M$ is exactly the statement that the tangent space of $M$ at $0$ is $\Bbb R^{n-k}\times\{0\}$.)
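The continuity estimate can be made fully quantitative: for a quadratic form $q(y) = \tfrac12 y^\top A y$ with $A$ symmetric, $q(y) - q(p) = \tfrac12 (y-p)^\top A (y+p)$, hence $|q(y) - q(p)| \le \tfrac12 \Vert A \Vert \,(|y| + |p|)\,|y - p|$, a Lipschitz bound on bounded sets. A numerical sketch, with $A$ taken (hypothetically) to be the Hessian $\operatorname{diag}(4,0)$ of an example Lagrangian:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 0.0], [0.0, 0.0]])  # hypothetical Hessian of L at 0
q = lambda y: 0.5 * y @ A @ y
op_norm = np.linalg.norm(A, 2)          # operator norm ||A||

# Check |q(y) - q(p)| <= (1/2) ||A|| (|y| + |p|) |y - p| on random nearby pairs.
for _ in range(100):
    p = rng.normal(size=2)
    p /= np.linalg.norm(p)                  # a point on the unit sphere
    y = p + rng.normal(scale=1e-3, size=2)  # a nearby point
    bound = 0.5 * op_norm * (np.linalg.norm(y) + np.linalg.norm(p)) * np.linalg.norm(y - p)
    assert abs(q(y) - q(p)) <= bound + 1e-12
print("Lipschitz bound verified on 100 random nearby pairs")
```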