Implications for Matrices Based on their Eigenvalues


So I'm currently taking my first linear algebra course, and recently we've been working with inverses, determinants and cofactor expansion, and diagonalization.

I haven't had any trouble working out computational problems, but some of the conceptual exercises on my diagonalization homework have been giving me trouble. For reference, I've included the problems below:

3.3.13. If $A$ is diagonalizable and $1$ and $-1$ are its only eigenvalues, show that $A^{-1} = A$.

3.3.14. If $A$ is diagonalizable and $0$ and $1$ are its only eigenvalues, show that $A^2 = A$.

I was able to figure out 3.3.14 fairly easily after realizing that if $D = \text{diag}(d_1,\dots,d_n)$ with each $d_k \in \{0,1\}$, then $$D^2 = \text{diag}(d_1^2,\dots,d_n^2) = D,$$ so $$A = PDP^{-1} \Rightarrow A^2 = PD^2P^{-1} = PDP^{-1} = A.$$
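This isn't part of the proof, but as a numerical sanity check of the identity, one can build such an $A$ with NumPy from an arbitrarily chosen invertible $P$ and a diagonal $D$ of 0s and 1s:

```python
import numpy as np

# Diagonal matrix whose entries (the eigenvalues of A) are only 0 and 1
D = np.diag([0.0, 1.0, 1.0, 0.0])

# Any invertible P works; this one is chosen arbitrarily
P = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])

A = P @ D @ np.linalg.inv(P)

# A^2 should equal A (up to floating-point error)
print(np.allclose(A @ A, A))  # True
```

Changing `P` to any other invertible matrix gives the same result, which is exactly what the similarity argument predicts.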

However, I was unable to solve 3.3.13 without doing some really messy math paired with trial-and-error, which is obviously insufficient for a proof. My feeling is that a matrix $A$ must satisfy certain identities based on its eigenvalues, particularly for values like $-1$, $0$, and $1$.

Is this true? And if so, how could I use these implications to work out 3.3.13?


If $A$ is a diagonalizable matrix whose only eigenvalues are $\pm1$, then there is an invertible matrix $P$ such that $$A=PDP^{-1}$$ where $$D=\text{diag}(d_1,d_2,\dots,d_n)$$ and $d_k=\pm1$ for $k=1,2,\dots,n$. Then $$D^2=\text{diag}(d_1^2,d_2^2,\dots,d_n^2)=I,$$ so that $$A^2 =PD^2P^{-1}=PIP^{-1}=I,$$ and hence $$A^{-1}=A.$$
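The argument above can also be sanity-checked numerically (this verifies, it doesn't prove): take a diagonal $D$ with $\pm1$ entries, an arbitrarily chosen invertible $P$, and confirm with NumPy that $A^2 = I$ and $A^{-1} = A$:

```python
import numpy as np

# Diagonal matrix with entries +/-1 (the eigenvalues of A)
D = np.diag([1.0, -1.0, -1.0, 1.0])

# An arbitrary invertible matrix P
P = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0, 3.0]])

A = P @ D @ np.linalg.inv(P)

print(np.allclose(A @ A, np.eye(4)))     # A^2 = I  -> True
print(np.allclose(np.linalg.inv(A), A))  # A^-1 = A -> True
```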