Eigenvalues of a symmetric matrix with Lagrange multipliers


Problem: Using Lagrange multipliers, prove that all symmetric matrices $A \in \mathbb{R}^{n \times n}$ have all real eigenvalues.

Proof: Consider $f: \mathbb{R}^n \rightarrow \mathbb{R}$ defined by $f(x) = \langle Ax,x \rangle$, where $\langle \cdot,\cdot \rangle$ is the usual inner product on $\mathbb{R}^n$, and let $S^{n-1} = \{ x \in \mathbb{R}^n : \| x \| = 1 \}$. I have found that $f'(x)h = 2\langle Ax,h \rangle$, but I'm stuck here.

I appreciate all your comments. Thanks!!!

Best answer:

I had not seen this before; it is quite impressive. The argument as I understand it is as follows.

Given a real symmetric matrix $A$, our first goal is to find an eigenvector $v$ with a real eigenvalue. We do so as follows: consider the map $f: S^{n-1} \to \mathbb{R}$ given by $f(x) = \langle Ax , x \rangle$. This map is continuous (it is a polynomial in the coordinates $x_i$) and, moreover, differentiable.

Using (strongly) that $A$ is symmetric, one finds that $\nabla f(x) = 2Ax$. Using Lagrange multipliers with the constraint $x \in S^{n-1}$, i.e. $g(x) = \|x\|^2 - 1 = 0$ (whose gradient $\nabla g(x) = 2x$ never vanishes on the sphere), one finds that critical points satisfy $2Ax = 2\lambda x$; that is, $x$ is an eigenvector with (obviously real) eigenvalue $\lambda$.
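Spelled out, the gradient computation is exactly where symmetry enters. Expanding $f$ at $x$ in the direction $h$:

$$f(x+h) = \langle A(x+h), x+h \rangle = \langle Ax, x \rangle + \langle Ax, h \rangle + \langle Ah, x \rangle + \langle Ah, h \rangle,$$

and since $A = A^\top$ we have $\langle Ah, x \rangle = \langle h, Ax \rangle = \langle Ax, h \rangle$, so the part linear in $h$ is $2\langle Ax, h \rangle$, i.e. $\nabla f(x) = 2Ax$. Without symmetry one would only get $\nabla f(x) = (A + A^\top)x$.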

At this point, we just have to convince ourselves that $f$ has a critical point. Since $f$ is a continuous real-valued function on $S^{n-1}$, which is closed and bounded, hence compact, the extreme value theorem (in its natural generalization: a continuous real-valued function on a compact set attains its extrema) tells us that $f$ attains its maximum and minimum. Say $f(v)$ is maximal (minimal works just as well). Then $v$ is a critical point of $f$ on $S^{n-1}$ and thus an eigenvector of $A$ (note $v \neq 0$ since $v \in S^{n-1}$).
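As a numerical sanity check (not part of the proof), one can maximize $f$ on the sphere by projected gradient ascent and verify that the maximizer is indeed an eigenvector with eigenvalue equal to the Lagrange multiplier. The matrix, seed, step size, and iteration count below are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2  # an arbitrary symmetric matrix (illustrative choice)

# Projected gradient ascent on f(x) = <Ax, x> over the unit sphere:
# step along the gradient 2*A@x, then renormalize back onto S^{n-1}.
x = rng.standard_normal(4)
x /= np.linalg.norm(x)
for _ in range(2000):
    x = x + 0.05 * (2 * A @ x)
    x /= np.linalg.norm(x)

lam = x @ A @ x  # Rayleigh quotient = the Lagrange multiplier at the maximizer
residual = np.linalg.norm(A @ x - lam * x)  # should be ~ 0: x is an eigenvector
print(lam, residual)
```

The printed residual being essentially zero reflects the Lagrange condition $Ax = \lambda x$ at the constrained maximizer, and `lam` agrees with the largest eigenvalue of $A$.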

So we have found an eigenvector $v$ of $A$ with a real eigenvalue. To finish the proof, use the standard trick of restricting to $v^{\perp}$, which is $A$-invariant. Then apply the same argument to $A$ restricted to $v^{\perp}$ to find the next eigenvector, and so on.
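The $A$-invariance of $v^{\perp}$ is a one-line computation, again using symmetry: if $\langle w, v \rangle = 0$, then

$$\langle Aw, v \rangle = \langle w, Av \rangle = \lambda \langle w, v \rangle = 0,$$

so $Aw \in v^{\perp}$, and the restriction of $A$ to $v^{\perp}$ is again a symmetric operator on a space of one dimension lower.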

I will remark that the above proof breaks down completely for $n = 1$ (why?). Of course this case is easy.

In hindsight, maybe the fact that this proof exists is not so surprising. We know $A$ is, up to an orthogonal transformation (which preserves distances), a diagonal matrix, and for a diagonal matrix $D$, if your goal is to maximize (or minimize) $x^\top Dx$ over $x \in S^{n-1}$, it is clear that you should choose the standard basis vector in the direction of the largest (smallest) diagonal entry of $D$. The above proof finds this standard basis vector, expressed in a different basis.
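Indeed, for diagonal $D$ with entries $d_1, \dots, d_n$, the constrained objective is just a convex combination of the diagonal entries:

$$x^\top D x = \sum_{i=1}^n d_i x_i^2, \qquad \sum_{i=1}^n x_i^2 = 1,$$

so its maximum over the sphere is $\max_i d_i$, attained at the corresponding standard basis vector (and its minimum is $\min_i d_i$).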