Eigenvalues of $A$ and $A^*$


Why is it that if $Ax=\lambda x$ and $A^*x=\mu x$ for the same nonzero vector $x$, then $\lambda = \overline{\mu}$?

I tried to say $(Ax,x)=(\lambda x,x)=\lambda(x,x)$ and $(Ax,x)=(x,A^*x)=(x,\mu x)=\overline{\mu}(x,x)$. This, I think, shows why the implication holds. But I don't understand why $(x,\mu x)=\overline{\mu}(x,x)$ when at the same time $(\lambda x,x) = \lambda(x,x)$. I might be completely wrong, but I don't understand this, especially because the "$(\,,)$" notation is just the inner product, and I prefer to think in more concrete notation:

$$(Ax,x)=(Ax)^Tx=(\lambda x)^Tx=\lambda x^Tx$$

But wouldn't it be $^*$ in this case, not $^T$? I also don't understand, with this notation, where $(Ax,x)=(x,A^*x)$ comes from, since

$$(Ax,x)=(Ax)^Tx=x^TA^Tx=(x,A^Tx)=x^TA^*x=x^T\mu x=\mu x^Tx$$

So here $\mu=\lambda$? But I learned that $^*$ is just $^T$ with the conjugate taken of every entry.
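
In fact, a tiny numpy sketch of my own makes me suspect $^T$ is the problem: for a complex vector, $x^Tx$ isn't even the squared norm.

```python
import numpy as np

x = np.array([1.0, 1.0j])

# Plain transpose: x^T x = 1 + i^2 = 0 even though x != 0,
# so (v, w) = v^T w is not an inner product over the complex numbers.
print(x.T @ x)         # 0j

# Conjugate transpose: x^* x = 1 + 1 = 2, the true squared norm.
print(x.conj().T @ x)  # (2+0j)
```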

I'm obviously missing something; can someone please explain how the first notation connects to the second? Whenever I read the general inner product I translate it back to plain matrix multiplication, so when I don't understand that, I'm just applying the rules like a parrot, without any understanding of what's happening.

Please help me see where things are going wrong. Any help better formulating this question is also appreciated.


There are 2 best solutions below


If you are using complex vectors and matrices, you should be using "conjugate transpose" $*$ throughout, and never the usual transpose. So the inner product is $\langle v, w \rangle := v^* w$. (You could also define it as $\langle v, w\rangle := w^* v$ instead, it does not make too much of a difference. Just pick one and stick to it.)

In terms of the inner product, note that we have, in general, $\langle cv, w \rangle = \bar{c} \langle v, w \rangle$ and $\langle v, cw \rangle = c \langle v, w \rangle$. This is called "sesquilinearity", which is part of the definition of an inner product.
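
If you want to check sesquilinearity numerically, here is a small numpy sketch (mine, with made-up values; note that `np.vdot` conjugates its first argument, matching the $\langle v, w \rangle := v^* w$ convention):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
c = 2.0 - 3.0j

# np.vdot(v, w) computes v* w, i.e. it conjugates its first argument.
print(np.allclose(np.vdot(c * v, w), np.conj(c) * np.vdot(v, w)))  # <cv, w> = conj(c) <v, w>
print(np.allclose(np.vdot(v, c * w), c * np.vdot(v, w)))           # <v, cw> = c <v, w>
```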


For your proof:

In terms of vectors and matrices, we have $$x^* A x = x^* (Ax) = x^* (\lambda x) = \lambda x^* x$$ and $$x^* A x = (A^* x)^* x = (\mu x)^* x = \bar{\mu} x^* x.$$

You can also rewrite this entirely using inner products and applying the sesquilinearity property.
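
As a numerical sanity check, here is a sketch (my own construction: I take $A$ normal, so that $A$ and $A^*$ genuinely share an eigenvector, which is the hypothesis of the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a normal matrix A = Q diag(d) Q* (my assumption), so A and A*
# share the unit eigenvector Q[:, 0].
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
d = rng.standard_normal(4) + 1j * rng.standard_normal(4)
A = Q @ np.diag(d) @ Q.conj().T

x, lam = Q[:, 0], d[0]           # A x = lam x, with x* x = 1
mu = np.vdot(x, A.conj().T @ x)  # eigenvalue of A* on x (Rayleigh quotient)

print(np.allclose(x.conj() @ A @ x, lam))          # x* A x = lam x* x
print(np.allclose(x.conj() @ A @ x, np.conj(mu)))  # x* A x = conj(mu) x* x
print(np.isclose(mu, np.conj(lam)))                # hence mu = conj(lam)
```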


For a rather simple take, you can use Schur triangularization and compute the same thing in two different ways.
With unitary $V := \bigg[\begin{array}{c|c|c|c}\mathbf x & \mathbf v_2 &\cdots & \mathbf v_{n}\end{array}\bigg]$, whose first column is the common eigenvector $\mathbf x$,

$A = VRV^{-1}= VRV^* =V\begin{bmatrix} \lambda & \mathbf b_{n-1}^*\\ \mathbf 0 & \mathbf R_{n-1} \end{bmatrix}V^*$

For simplicity assume $\big \Vert \mathbf x \big \Vert_2=1$; then
$\mu\mathbf x= A^*\mathbf x= V\begin{bmatrix} \overline{\lambda} & \mathbf 0 \\ \mathbf b_{n-1} & \mathbf R_{n-1}^* \end{bmatrix}V^*\mathbf x=V\begin{bmatrix} \overline{\lambda} & \mathbf 0 \\ \mathbf b_{n-1} & \mathbf R_{n-1}^* \end{bmatrix}\mathbf e_1= \overline{\lambda}\mathbf x +\sum_{k=2}^n b_{k-1} \mathbf v_k$
$\implies \mathbf 0 = \big(\overline{\lambda}-\mu\big)\mathbf x +\sum_{k=2}^n b_{k-1} \mathbf v_k$
$\implies$ all coefficients are zero, since the columns of $V$ are linearly independent
$\implies \mu = \overline{\lambda}$ and $\mathbf b_{n-1} = \mathbf 0$

(Note: the argument also gives $\mathbf b_{n-1} = \mathbf 0$, so $R$ is block diagonal and, for some $j$, we have $\sigma_j = \vert \lambda\vert$, i.e. the modulus of $\lambda$ is a singular value of $A$.)
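
Here is a numerical sketch of this argument (again with a normal $A$ of my own making, so that the hypothesis $A^*\mathbf x = \mu\mathbf x$ actually holds): complete $\mathbf x$ to a unitary $V$ and inspect $R = V^*AV$.

```python
import numpy as np

rng = np.random.default_rng(2)

# A normal matrix (my assumption, so that A* x = mu x really holds).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
d = rng.standard_normal(4) + 1j * rng.standard_normal(4)
A = Q @ np.diag(d) @ Q.conj().T
x, lam = Q[:, 0], d[0]

# Complete x to a unitary V = [x | v_2 | ... | v_n] via QR.
M = np.column_stack([x, rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))])
V, _ = np.linalg.qr(M)

R = V.conj().T @ A @ V
print(np.allclose(R[1:, 0], 0))  # first column of R is (lam, 0, ..., 0)^T
print(np.isclose(R[0, 0], lam))
print(np.allclose(R[0, 1:], 0))  # b_{n-1} = 0, as the argument concludes

# |lam| is a singular value of A, since A*A x = |lam|^2 x.
s = np.linalg.svd(A, compute_uv=False)
print(np.any(np.isclose(s, abs(lam))))
```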