Diagonalisation with free parameter.


I have this matrix: $$A = \begin{pmatrix} 1 & -\alpha & \alpha \\ \alpha & \alpha -1 & 2 \\ \alpha & -2 & \alpha +3\end{pmatrix}$$

And I want to know for which values of $\alpha$ it is diagonalisable, but I have been having a hard time trying to solve the exercise. My initial guess was that, since the trace of $A$ is $2\alpha + 3$, the sum of the eigenvalues of $A$ must equal that, and therefore two eigenvalues are $\alpha$ (algebraic multiplicity $2$) and the other eigenvalue is $3$ (algebraic multiplicity $1$). Since the geometric and algebraic multiplicities have to coincide for $A$ to be diagonalisable, I started trying to calculate the geometric multiplicities, but I have not been able to carry the case analysis of the system through. For example, the geometric multiplicity of $\alpha$, $d_\alpha$, would be given by: $$d_\alpha = 3 - \operatorname{rank}\begin{pmatrix} 1-\alpha & -\alpha & \alpha \\ \alpha & -1 & 2 \\ \alpha & -2 & 3\end{pmatrix}$$ But I don't see anything useful there, beyond the fact that I can't think of a value of $\alpha$ that would make the rank of the matrix $1$. How could I solve this? Thanks.


There are 3 answers below.


The eigenvalues of $A$ are $1$ (it's a simple root of the characteristic polynomial of $A$) and $1+\alpha$ (it's a double root). So, your matrix is diagonalizable if and only if the eigenspace corresponding to the eigenvalue $1+\alpha$ is $2$-dimensional. But\begin{align}A.\begin{bmatrix}x\\y\\z\end{bmatrix}=(1+\alpha)\begin{bmatrix}x\\y\\z\end{bmatrix}&\iff\left\{\begin{array}{l}-\alpha x-\alpha y+\alpha z=0\\\alpha x-2y+2z=0\\\alpha x-2y+2z=0\end{array}\right.\\&\iff\left\{\begin{array}{l}-\alpha x-\alpha y+\alpha z=0\\\alpha x-2y+2z=0.\end{array}\right.\end{align}So, if $\alpha=0$ you just have the space $\bigl\{(x,y,z)\in\mathbb R^3\mid-y+z=0\bigr\}$, which is indeed $2$-dimensional. If $\alpha=-2$, you just have the space $\bigl\{(x,y,z)\in\mathbb R^3\mid x+y-z=0\bigr\}$, which is also $2$-dimensional. In all other cases, the equations are linearly independent and the eigenspace corresponding to the eigenvalue $1+\alpha$ is $1$-dimensional.

So, your matrix is diagonalizable if and only if $\alpha=0$ or $\alpha=-2$.
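As a quick sanity check on this conclusion (not part of the original argument), the geometric multiplicity $3-\operatorname{rank}(A-(1+\alpha)I)$ can be computed exactly with standard-library rationals. The helper names below are my own:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, by Gaussian elimination in exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, r = len(M), 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def geom_mult(alpha):
    """Geometric multiplicity of the eigenvalue 1 + alpha, i.e. 3 - rank(A - (1+alpha)I)."""
    B = [[-alpha, -alpha, alpha],   # A - (1+alpha)I, as in the answer above
         [alpha, -2, 2],
         [alpha, -2, 2]]
    return 3 - rank(B)

# geom_mult is 2 exactly for alpha in {0, -2}, and 1 otherwise
```

Running `geom_mult` over a few sample values confirms that the multiplicity is $2$ only at $\alpha=0$ and $\alpha=-2$.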


You can perform row and column reduction to find the eigenvalues: \begin{align} \chi_A(\lambda)&=\begin{vmatrix} 1-\lambda & -\alpha & \alpha \\ \alpha & \alpha-1-\lambda & 2 \\ \alpha & -2 & \alpha+3-\lambda \end{vmatrix} = \begin{vmatrix} 1-\lambda & 0 & \alpha \\ \alpha & \alpha+1-\lambda & 2 \\ \alpha & \alpha+1-\lambda & \alpha+3-\lambda \end{vmatrix} \\[2ex] &= \begin{vmatrix} 1-\lambda & 0 & \alpha \\ \alpha & \alpha+1-\lambda & 2 \\ 0 & 0 & \alpha+1-\lambda \end{vmatrix} = (\alpha+1-\lambda)^2\begin{vmatrix} 1-\lambda & 0 & \alpha \\ \alpha & 1 & 2 \\ 0 & 0 & 1 \end{vmatrix} \\[1ex] &= (\color{red}{\alpha+1}-\lambda)^2 (\color{red}1-\lambda) \end{align}
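If you want to double-check the factorisation without redoing the row and column reduction, you can evaluate $\det(A-\lambda I)$ directly at rational sample points and compare it with $(\alpha+1-\lambda)^2(1-\lambda)$. This is only a spot check, and the function name is my own:

```python
from fractions import Fraction

def char_poly_at(alpha, lam):
    """det(A - lam*I), by cofactor expansion along the first row."""
    A = [[1 - lam, -alpha, alpha],
         [alpha, alpha - 1 - lam, 2],
         [alpha, -2, alpha + 3 - lam]]
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# spot-check chi_A(lam) = (alpha+1-lam)^2 (1-lam) at exact rational points
for alpha in (Fraction(0), Fraction(-2), Fraction(3, 7)):
    for lam in (Fraction(0), Fraction(1), Fraction(5, 2)):
        assert char_poly_at(alpha, lam) == (alpha + 1 - lam) ** 2 * (1 - lam)
```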

To determine whether the matrix is diagonalisable, you simply have to determine whether the geometric multiplicity of the eigenvalue $\alpha+1$ is equal to $2$, i.e. whether the matrix $$A-(\alpha+1)I=\begin{bmatrix} -\alpha & -\alpha & \alpha \\ \alpha & -2 & 2 \\ \alpha & -2 & 2 \end{bmatrix}$$ has rank $1$. The last two rows are identical, so the rank is at most $2$, and it drops to $1$ exactly when the first row is a multiple of the second, which happens for $\alpha=0$ and $\alpha=-2$. For these two values $A$ is diagonalisable as $\scriptstyle\begin{bmatrix}\alpha+1 & 0 & 0 \\ 0 & \alpha+1 & 0 \\0 & 0 & 1 \end{bmatrix}$; for every other $\alpha$ the rank is $2$ and the Jordan normal form is $\;\scriptstyle\begin{bmatrix}\alpha+1 & 1 & 0 \\ 0 & \alpha+1 & 0 \\0 & 0 & 1 \end{bmatrix}$.


You can’t really determine the eigenvalues of a matrix by examining only its trace. Without other information, any partition of it is plausible: $1$, $2$ and $2\alpha$ is just as good a guess as the one you made. It turns out that $\alpha$ is not an eigenvalue of this matrix, so you’re going to have trouble finding a nontrivial kernel for $A-\alpha I$ at all.

You mentioned that you were under exam time pressure. For artificially-constructed problems like this one, it’s often a time-saver to look for eigenvectors first by examining simple linear combinations of rows and columns. In this case, subtracting the second and third columns from the first gives $(1,-1,-1)^T$, therefore $(1,-1,-1)^T$ is an eigenvector with eigenvalue $1$. Similarly, the sum of the second and third columns is $(0,\alpha+1,\alpha+1)^T$, so $(0,1,1)^T$ is also an eigenvector with eigenvalue $\alpha+1$. Now you can use the trace to determine that the last eigenvalue is also $\alpha+1$.
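These eigenvector claims are easy to verify exactly for a sample value of $\alpha$; the snippet below (helper names are mine) checks that $(1,-1,-1)^T$ and $(0,1,1)^T$ are eigenvectors with eigenvalues $1$ and $\alpha+1$:

```python
from fractions import Fraction

def matvec(M, v):
    """Matrix-vector product, entrywise."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def A(alpha):
    return [[1, -alpha, alpha],
            [alpha, alpha - 1, 2],
            [alpha, -2, alpha + 3]]

a = Fraction(3, 5)  # arbitrary sample value
assert matvec(A(a), [1, -1, -1]) == [1, -1, -1]        # eigenvalue 1
assert matvec(A(a), [0, 1, 1]) == [0, a + 1, a + 1]    # eigenvalue a + 1
```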

So, you now just need to check the rank of $A-(\alpha+1)I$, which is $$\begin{bmatrix}-\alpha&-\alpha&\alpha\\\alpha&-2&2\\\alpha&-2&2\end{bmatrix}.$$ You can pretty much find the values of $\alpha$ for which it has rank one by inspection: $\alpha=0$ is pretty easy, as it makes a zero row and two identical rows, and $\alpha=-2$ isn’t very hard to spot, either, by adding the first row to the second. You can check that these are the only possible values by examining the $2\times2$ minors of this matrix. Because of all the redundancy in the matrix, one can quickly see that they all either vanish identically, or are equal to $\pm\alpha(\alpha+2)$, which vanishes at exactly the two values already found by inspection.
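The claim about the minors can itself be checked mechanically: the sketch below (my own helper names) enumerates all nine $2\times2$ minors of $A-(\alpha+1)I$ and confirms that each is $0$ or $\pm\alpha(\alpha+2)$ at a sample $\alpha$, and that all of them vanish at $\alpha=0$ and $\alpha=-2$:

```python
from fractions import Fraction
from itertools import combinations

def B(alpha):
    # A - (alpha+1)I from the answer above
    return [[-alpha, -alpha, alpha],
            [alpha, -2, 2],
            [alpha, -2, 2]]

def minors2(M):
    """All nine 2x2 minors of a 3x3 matrix."""
    return [M[r1][c1] * M[r2][c2] - M[r1][c2] * M[r2][c1]
            for r1, r2 in combinations(range(3), 2)
            for c1, c2 in combinations(range(3), 2)]

a = Fraction(3)  # arbitrary sample value; a*(a+2) = 15
assert set(minors2(B(a))) <= {0, a * (a + 2), -a * (a + 2)}
assert all(m == 0 for m in minors2(B(Fraction(0))))   # rank drops to 1
assert all(m == 0 for m in minors2(B(Fraction(-2))))  # rank drops to 1
```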

Another possibility is to use the fact that a matrix is diagonalizable iff its minimal polynomial is a product of distinct linear factors. In this case, this involves computing $(A-I)(A-(\alpha+1)I)$, which isn’t a lot of work given the simple structure of the two matrices. The resulting entries are either $0$ or $\pm\alpha(\alpha+2)$, leading to the same answer as before.
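A minimal sketch of this last check, again with exact rationals and my own helper names — it forms $(A-I)(A-(\alpha+1)I)$ entrywise and confirms that the product vanishes precisely for $\alpha\in\{0,-2\}$:

```python
from fractions import Fraction

def matmul(X, Y):
    """Product of two square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_poly_product(alpha):
    """(A - I)(A - (alpha+1)I), built directly from A."""
    A = [[1, -alpha, alpha],
         [alpha, alpha - 1, 2],
         [alpha, -2, alpha + 3]]
    AmI = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
    AmL = [[A[i][j] - (alpha + 1 if i == j else 0) for j in range(3)] for i in range(3)]
    return matmul(AmI, AmL)

# the product is the zero matrix exactly for alpha in {0, -2};
# otherwise its nonzero entries are +/- alpha*(alpha+2)
assert all(x == 0 for a in (Fraction(0), Fraction(-2))
           for row in min_poly_product(a) for x in row)
```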