Eigenvalues and the Characteristic Equation


Given the following matrix,

$$ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} $$

assuming eigenvectors exist for $A$, they can be found by first solving the characteristic equation for $\lambda$ (i.e. finding its roots):

$$ \text{det}(A-\lambda I) = 0 $$

I know that if the determinant of a matrix is equal to zero, the matrix is non-invertible. I also know that, for a given matrix $A$, eigenvector $x$, and eigenvalue $\lambda$, we have $Ax = \lambda x$; hence, with respect to $x$, $A$ acts like the scalar $\lambda$. However, I'm not entirely sure why solving the characteristic equation provides the eigenvalues of the matrix $A$.

Given this, my question is: could somebody provide some logic on why the above works?

Also, as an aside, assuming the correct eigenvalues have been found, solving for the system,

$$ \begin{bmatrix} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \vec{0} $$

will provide the associated eigenvector for a given $\lambda$.

Is it correct to assume that the reasoning behind this is the following?

Firstly,

$$ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \lambda \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} $$

This implies,

$$ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \left( \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \right) \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \vec{0} $$

Thus,

$$ \begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \vec{0} $$
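The derivation above can be checked numerically. Below is a minimal sketch (the example matrix is made up for illustration, not taken from the question) that expands $\det(A-\lambda I)$ for a $2\times 2$ matrix into the quadratic $\lambda^2 - (a_{11}+a_{22})\lambda + (a_{11}a_{22}-a_{12}a_{21})$, solves it with the quadratic formula, and verifies $Ax = \lambda x$ for one resulting eigenpair:

```python
import math

# Hypothetical 2x2 example matrix (chosen for illustration, not from the post).
a11, a12, a21, a22 = 2.0, 1.0, 1.0, 2.0

# det(A - lambda*I) = (a11 - l)(a22 - l) - a12*a21
#                   = l^2 - (a11 + a22) l + (a11*a22 - a12*a21)
trace = a11 + a22
det = a11 * a22 - a12 * a21

# Roots of the characteristic polynomial via the quadratic formula.
disc = math.sqrt(trace * trace - 4.0 * det)
lam1 = (trace + disc) / 2.0
lam2 = (trace - disc) / 2.0
print(lam1, lam2)  # 3.0 1.0

# For lam1 = 3, x = (1, 1) solves (A - lam*I)x = 0; check that A x = lam * x.
x = (1.0, 1.0)
Ax = (a11 * x[0] + a12 * x[1], a21 * x[0] + a22 * x[1])
print(Ax)  # (3.0, 3.0), i.e. 3 * x
```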

There are 4 answers below.

Best Answer

You're nearly there.

A square matrix $B$ is non-invertible if and only if there exists a non-zero vector $v$ such that $Bv=0$. One direction follows because a non-zero $v$ with $Bv=0$ implies that the function $f(v)=Bv$ is not injective (or "one-to-one") and hence not bijective, which is necessary for $f$ to have an inverse. It is easy to see that $f$ is not injective, since $B(2v)=2Bv=2\cdot0=0=Bv$; that is, $f$ maps both $v$ and $2v$ onto the same point, $0$.

So, if $\lambda$ is an eigenvalue of $A$, and $x$ is its corresponding eigenvector,

$$Ax=\lambda x\Leftrightarrow Ax-\lambda x=0\Leftrightarrow (A-\lambda I)x=0.$$

Hence, $\lambda$ must be such that $B=A-\lambda I$ is non-invertible. Thus $\lambda$ is an eigenvalue of $A$ if and only if it satisfies the characteristic equation $\det(A-\lambda I)=0$.
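As a quick sanity check of this equivalence, the sketch below (using a hypothetical symmetric matrix whose eigenvalues are $3$ and $1$) evaluates $\det(A-\lambda I)$ directly and confirms that it vanishes exactly at the eigenvalues:

```python
def det2(m):
    # Determinant of a 2x2 matrix given as ((a, b), (c, d)).
    (a, b), (c, d) = m
    return a * d - b * c

# Hypothetical example matrix with eigenvalues 3 and 1.
A = ((2.0, 1.0), (1.0, 2.0))

def shifted_det(lam):
    # det(A - lam * I)
    return det2(((A[0][0] - lam, A[0][1]),
                 (A[1][0], A[1][1] - lam)))

print(shifted_det(3.0))  # 0.0  -> 3 is an eigenvalue
print(shifted_det(1.0))  # 0.0  -> 1 is an eigenvalue
print(shifted_det(2.0))  # -1.0 -> 2 is not an eigenvalue
```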

Aside: If $\lambda$ is real, $x$ is simply a vector that the function $f(v)=Av$ maps onto a scalar multiple of itself: $f$ just stretches it and/or reflects it across the origin. For example, if $\lambda=2$, $f(x)$ simply "stretches" $x$ by a factor of two, and if $\lambda=-1$, $f(x)$ reflects $x$ across the origin (rotates it by $180$ degrees).
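The stretching/reflection picture can be illustrated with a hypothetical diagonal matrix, whose standard basis vectors are eigenvectors with $\lambda=2$ (a stretch) and $\lambda=-1$ (a reflection across the origin):

```python
# Illustrative diagonal matrix: eigenvector (1, 0) is stretched by 2,
# eigenvector (0, 1) is reflected through the origin (lambda = -1).
A = ((2.0, 0.0), (0.0, -1.0))

def apply(m, v):
    # Matrix-vector product for a 2x2 matrix and a 2-vector.
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

print(apply(A, (1.0, 0.0)))  # (2.0, 0.0)  = 2 * (1, 0)
print(apply(A, (0.0, 1.0)))  # (0.0, -1.0) = -1 * (0, 1)
```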


Answer

Finding an eigenpair $(\lambda, x)$ involves finding the eigenvalue $\lambda$ and its eigenvector $x$, such that \begin{gather} Ax = \lambda x. \end{gather}

This is equivalent to \begin{gather} Ax- \lambda x = 0. \end{gather}

This system can only have a nonzero solution $x$ if the matrix $(A-\lambda I)$ ($I$ being the identity) is singular, and one criterion for singularity is that the determinant is zero.

Answer

Your reasoning is correct; $\lambda$ is an eigenvalue of a square matrix $A$, by definition, iff there exists a nonzero vector $x$ such that $(A-\lambda I)x = 0$. This is equivalent to $(A-\lambda I)$ being singular (non-invertible), and to $\det(A-\lambda I) = 0$.

So if you solve for the polynomial roots of the "characteristic equation" $\det(A-\lambda I) = 0$, those values will be exactly the eigenvalues of $A$.

Answer

You want to find solutions $\lambda$ such that $Ax=\lambda x$ where $x\ne0$, or equivalently $Ax-\lambda x=(A-\lambda I)x=0$. In order for this equation to have non-zero solutions, the expression $|A-\lambda I|$ must equal zero: if $A-\lambda I$ were invertible, the only solution would be the zero vector, which is not valid by the definition of the problem. So the matrix $A-\lambda I$ must be non-invertible, and it is non-invertible if and only if $|A-\lambda I|=0$. The problem of finding eigenvalues is therefore reduced to the problem of finding the roots of a polynomial. To describe all vectors satisfying the equation, you then find, for each $\lambda$, a set of independent vectors that spans the space of all solutions $x$ (the eigenspace of $\lambda$).
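For the $2\times 2$ case, once $\lambda$ is known, an eigenvector can be written down directly: the vector $(a_{12},\, \lambda - a_{11})$ satisfies both rows of $(A-\lambda I)x=0$ whenever it is nonzero (the first row by construction, the second because $\lambda$ satisfies the characteristic equation). A minimal sketch, using a made-up example matrix:

```python
# Sketch: for a 2x2 matrix and a known eigenvalue lam, the vector
# (a12, lam - a11) solves (A - lam*I)x = 0 whenever it is nonzero.
# Example matrix and eigenvalue chosen for illustration.
a11, a12, a21, a22 = 2.0, 1.0, 1.0, 2.0
lam = 3.0  # an eigenvalue of this matrix

x = (a12, lam - a11)  # candidate eigenvector, here (1, 1)

# Check both rows of (A - lam*I) x = 0.
row1 = (a11 - lam) * x[0] + a12 * x[1]
row2 = a21 * x[0] + (a22 - lam) * x[1]
print(row1, row2)  # 0.0 0.0
```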