Proving invertibility through the characteristic polynomial


For any eigenvalue, $\lambda_i$, of a matrix $A$, $\det(\lambda_i I - A) = 0$. Therefore, I would be inclined to assume that for some non-eigenvalue, $l$, it holds that $\det(l I - A) \neq 0$, but I somehow have this feeling that I might be taking a shortcut that is not allowed.

Concretely, I have a matrix $A$ whose eigenvalues are all strictly less than one. Does this imply that $\det(I - A) \neq 0$, and hence that the matrix $I - A$ is invertible?
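A quick numerical sanity check of this reasoning (a sketch using numpy; the matrix below is an arbitrary example I made up, not one from the question):

```python
import numpy as np

# Hypothetical example: a matrix whose eigenvalues all lie strictly below 1.
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])

eigvals = np.linalg.eigvals(A)
assert np.all(eigvals.real < 1)  # eigenvalues strictly less than one

# Since 1 is not an eigenvalue, det(I - A) should be nonzero,
# which means I - A is invertible.
d = np.linalg.det(np.eye(2) - A)
print(abs(d) > 1e-12)  # True: det(I - A) = 0.33 here
```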

Maybe I am just being paranoid, but I have the feeling that I am overlooking something. Are there any conditions required for this to hold, or is it fine the way it is?

Accepted answer

I would start by considering the question: what is an eigenvalue?

In the finite-dimensional case, let $T:V\rightarrow V$ be a linear operator where $\dim V = n < \infty$.

We say that $\lambda$ is an eigenvalue of $T$ iff $Tv = \lambda v$ for some $v\neq 0$. This is the same as requiring that $(T - \lambda I_{V})v = 0$ for some $v\neq 0$, which in turn means that the linear operator $T - \lambda I_{V}$ is not injective.

By the rank-nullity theorem, this means that $T - \lambda I_{V}$ cannot be bijective:

\begin{align*} \dim V = \dim\ker(T - \lambda I) + \dim\text{Im}(T - \lambda I) \end{align*}

Indeed, if $T - \lambda I$ is injective, then $\ker(T-\lambda I) = \{0\}$ and $\dim\ker(T-\lambda I) = 0$, whence $\dim V = \dim\text{Im}(T-\lambda I)$. Since $\text{Im}(T-\lambda I)$ is a subspace of $V$ with the same dimension as $V$ whenever $T - \lambda I$ is injective, we conclude that $T - \lambda I$ is bijective: in finite dimensions, injectivity forces surjectivity.
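The "injective implies bijective" step can be checked numerically: full rank means trivial kernel, and then the determinant is nonzero. A sketch with a made-up operator $T$ and a value that is not one of its eigenvalues:

```python
import numpy as np

# T is upper triangular, so its eigenvalues sit on the diagonal: 2 and 3.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 1.0  # not an eigenvalue of T

M = T - lam * np.eye(2)
rank = np.linalg.matrix_rank(M)
print(rank == 2)               # full rank: kernel is {0}, so M is injective
print(np.linalg.det(M) != 0)   # and hence bijective (invertible)
```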

Conversely, if $T-\lambda I$ is bijective, then it is injective.

Thus, if we choose an arbitrary basis $\mathcal{B}_{V}$ for $V$, these conditions correspond to the matrix $[T]_{\mathcal{B}_{V}} - \lambda I$ not being invertible, and this happens iff $\det([T]_{\mathcal{B}_{V}} - \lambda I) = 0$. The choice of basis is irrelevant, since any pair $[T]_{\mathcal{B}}$ and $[T]_{\mathcal{B}'}$ of matrix representations of $T$ are similar. More precisely, one has \begin{align*} [T]_{\mathcal{B}'} = [I]_{\mathcal{B}}^{\mathcal{B}'}[T]_{\mathcal{B}}[I]_{\mathcal{B}'}^{\mathcal{B}} \end{align*}
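That basis-independence claim can also be verified numerically: similar matrices have the same characteristic polynomial, so $\det(A - tI) = \det(P^{-1}AP - tI)$ for every $t$. A sketch with arbitrary example matrices:

```python
import numpy as np

# If B = P^{-1} A P (a change of basis), then det(A - t I) = det(B - t I)
# for every t, i.e. the characteristic polynomial is basis-independent.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # any invertible change-of-basis matrix
B = np.linalg.inv(P) @ A @ P

for t in [0.0, 0.5, -2.0]:
    dA = np.linalg.det(A - t * np.eye(2))
    dB = np.linalg.det(B - t * np.eye(2))
    print(np.isclose(dA, dB))  # True for every t
```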

Given a square matrix $A\in M_{n\times n}(\textbf{R})$, the same notion of eigenvalue applies to matrices as well, via the one-to-one correspondence $A \mapsto L_{A}$, where $L_{A}:\textbf{R}^{n}\to\textbf{R}^{n}$ is the linear operator defined by $L_{A}v = Av$. Indeed $[L_{A}]_{\mathcal{B}} = A$, where $\mathcal{B} = \{e_{1},e_{2},\ldots,e_{n}\}$ is the standard basis of $\textbf{R}^{n}$.

Hopefully this helps.

Second answer

Your way is fine; just look at the following equivalent statements: \begin{align} (1) \quad & \det(A-\lambda I) = 0 \quad \Rightarrow \quad \textrm{$\lambda$ is an eigenvalue of $A$} \\ (2) \quad & \textrm{$\lambda$ is not an eigenvalue of $A$} \quad \Rightarrow \quad \det(A-\lambda I) \neq 0. \end{align}
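The equivalence can be illustrated numerically (a sketch with an arbitrary example matrix): the determinant vanishes exactly at the eigenvalues and nowhere else.

```python
import numpy as np

# Example matrix with eigenvalues +1 and -1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

for lam in [1.0, -1.0]:        # eigenvalues: determinant vanishes (statement 1)
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))
for lam in [0.0, 2.0]:         # non-eigenvalues: determinant is nonzero (statement 2)
    print(not np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))
```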