Given a $2\times 2$ matrix $A$, do two distinct eigenvalues guarantee that $A$ is diagonalizable?


This question relates to a seminar I've been working on, so I do not wish to disclose the whole question but simply ask how this would be handled in theory.

$A = \begin{bmatrix}a & b \\ c & d \end{bmatrix}$ $Q=(\lambda I - A) = \begin{bmatrix}\lambda - a & -b \\ -c & \lambda - d \end{bmatrix}$

We then want to find the eigenvalues of $A$, which corresponds to solving $\det(Q)=0$. My question is: given that we get two distinct eigenvalues, are we guaranteed that the matrix $A$ is diagonalizable? I have only found a theorem that says that if a $2\times 2$ matrix $A$ has two linearly independent eigenvectors, then $A$ is diagonalizable...
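For concreteness, here is a quick numerical sketch (using NumPy, with an arbitrarily chosen illustrative matrix) checking that the eigenvalues are exactly the roots of $\det(\lambda I - A) = 0$:

```python
import numpy as np

# Arbitrary illustrative 2x2 matrix playing the role of A above.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # upper triangular, so its eigenvalues are 2 and 3

eigvals = np.linalg.eigvals(A)
assert np.allclose(np.sort(eigvals.real), [2.0, 3.0])

# Each eigenvalue makes det(lambda*I - A) = det(Q) vanish.
for lam in eigvals:
    assert abs(np.linalg.det(lam * np.eye(2) - A)) < 1e-9
```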

Help would be much appreciated!

BEST ANSWER

If you have $2$ distinct eigenvalues, your characteristic polynomial factors as $(\lambda-\lambda_1)(\lambda-\lambda_2)$, where $\lambda_1$ and $\lambda_2$ are your eigenvalues.
Now, a matrix is diagonalizable if for each of its eigenvalues the algebraic multiplicity equals the geometric multiplicity.
In our case each eigenvalue has algebraic multiplicity $1$, and since $0<\text{geometric multiplicity} \leq \text{algebraic multiplicity}$, each geometric multiplicity is also $1$. The two multiplicities agree, so the matrix is diagonalizable.
More generally, an $n\times n$ matrix with $n$ distinct eigenvalues is diagonalizable.
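As a sanity check, a short NumPy sketch (with an arbitrarily chosen matrix) confirming that distinct eigenvalues yield an invertible eigenvector matrix and hence a diagonalization:

```python
import numpy as np

# Illustrative matrix with two distinct eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
assert np.allclose(np.sort(eigvals.real), [2.0, 5.0])

# Distinct eigenvalues => independent eigenvectors => P is invertible.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))  # A is diagonalized by P
```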


If $A$ is an $n \times n$ matrix the eigenvalues of which are distinct, there exist non-zero vectors $V_i$, $1 \le i \le n$, with

$AV_i = \mu_i V_i \tag 1$

the $\mu_i$ being the distinct eigenvalues of $A$. It is well-known that the eigenvectors associated with distinct eigenvalues are linearly independent; thus the matrix

$S = [V_1 \; V_2 \; \ldots \; V_n ] \tag 2$

is non-singular and hence invertible, so there exists an $n \times n$ matrix $S^{-1}$ with

$S^{-1}S = SS^{-1} = I; \tag 3$

also,

$AS = [AV_1 \; AV_2 \; \ldots \; AV_n ] = [\mu_1 V_1 \; \mu_2 V_2 \; \ldots \; \mu_n V_n]; \tag 4$

thus

$S^{-1}AS = S^{-1} [\mu_1 V_1 \; \mu_2 V_2 \; \ldots \; \mu_n V_n] = [\mu_1 S^{-1} V_1 \; \mu_2 S^{-1} V_2 \; \ldots \; \mu_n S^{-1} V_n]; \tag 5$

now in accord with (2) and (3),

$S^{-1}S = S^{-1} [V_1 \; V_2 \; \ldots \; V_n ] = [S^{-1} V_1 \; S^{-1} V_2 \; \ldots \; S^{-1} V_n ] = I, \tag 6$

which shows that each $S^{-1} V_i$ is the column vector whose $i$-th entry is equal to $1$ with all other elements $0$; incorporating this observation into (5) we obtain

$S^{-1}AS = \text{diag}(\mu_1, \; \mu_2, \; \ldots, \; \mu_n), \tag 7$

and we thus find that $A$ is diagonalized by $S$. $\blacksquare$
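The construction in (1)–(7) is easy to verify numerically; here is a minimal NumPy sketch, using an arbitrarily chosen $3\times 3$ matrix with distinct eigenvalues:

```python
import numpy as np

# Illustrative upper-triangular matrix: its eigenvalues (1, 3, 6) are
# on the diagonal and are distinct.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 6.0]])

mu, S = np.linalg.eig(A)      # S = [V_1  V_2 ... V_n] as in (2)
D = np.linalg.inv(S) @ A @ S  # S^{-1} A S as in (5)

# D equals diag(mu_1, ..., mu_n), as claimed in (7).
assert np.allclose(D, np.diag(mu))
```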