Similar matrix intuition clarification


I've been studying similar matrices (diagonalisation and JCF) and I want to check if my intuition is sound.

So take a transformation $T: V \rightarrow V$ and two bases $E$ and $F$ of $V$. Let $A$ be the matrix of $T$ with respect to $E$ and $B$ the matrix of $T$ with respect to $F$. By this I mean we apply the transformation to each basis vector in $E$ and write the result in terms of the basis vectors of $E$ (and likewise for $F$). It makes sense to me that there must be some sort of connection between $A$ and $B$. If we let $P$ be the change of basis matrix that converts $F$-coordinates to $E$-coordinates, then we can write $A = PBP^{-1}$.
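As a numerical sanity check of the relation $A = PBP^{-1}$, here is a small sketch in Python/NumPy. The matrices are made up for illustration, and $P$ follows the convention that its columns are the $F$-basis vectors written in $E$-coordinates (so $P$ converts $F$-coordinates to $E$-coordinates):

```python
import numpy as np

# Matrix of T with respect to the basis E (illustrative example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P are the F-basis vectors expressed in E-coordinates,
# so P converts F-coordinates to E-coordinates.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Matrix of T with respect to F, from the similarity relation.
B = np.linalg.inv(P) @ A @ P

# The two representations of T satisfy A = P B P^{-1}.
print(np.allclose(A, P @ B @ np.linalg.inv(P)))  # True
```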

Okay, so my understanding is that because there are infinitely many bases for $V$, there are infinitely many matrices that represent $T$. Clearly, though, there should be some relation between these matrices. We have the equation $A = PBP^{-1}$, so we want to find a 'nice, canonical' matrix $B$ to which every matrix representing $T$ can be related.

To do this, we either need to find a specific $P$ that makes $B$ 'nice', or, equivalently, a basis $F$ with respect to which the matrix of $T$ is nice.

Suppose the matrix of $T$ is diagonalisable. Then I've 'chosen' the form of my $B$ as a diagonal matrix (suppose we know how to find the diagonal entries). But $B$ is the matrix of $T$ with respect to some basis $F$, so I need to find this $F$. It can be proven that $V$ has some basis of eigenvectors. Is there a reason why these bases have to be equal? I.e. is it enough to simply find any basis of eigenvectors and be content that this is the basis $F$ we are looking for?
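One way to convince yourself numerically: put any basis of eigenvectors into the columns of $P$ and the matrix $P^{-1}AP$ comes out diagonal, with the eigenvalues on the diagonal in the order the eigenvectors were listed. A NumPy sketch with an illustrative symmetric matrix:

```python
import numpy as np

# Illustrative diagonalisable matrix, with eigenvalues 3 and 1
# (eigenvectors (1, 1) and (1, -1) respectively).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# One eigenbasis: the columns of V as computed by numpy.
w, V = np.linalg.eig(A)
B1 = np.linalg.inv(V) @ A @ V

# A different eigenbasis: the same eigenvectors rescaled and reordered.
P = np.array([[ 2.0, 5.0],
              [-2.0, 5.0]])
B2 = np.linalg.inv(P) @ A @ P

# Both results are diagonal; only the order of the entries can differ.
print(np.round(B1, 10))
print(np.round(B2, 10))  # diag(1, 3)
```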

Suppose $E$ was some arbitrary basis. Why does it follow that the basis of eigenvectors creates a matrix $P$ that is exactly the change of basis matrix from $E$ to $F$? Is it because we can assume that $E$ is the standard basis?

Now for matrices that can't be diagonalised, is there a reason we want them in JCF, or is it simply because this is what we perceive to be the 'simplest' non-diagonal form?

Thank you.

Answer:

For your first question: yes. Any basis of eigenvectors will diagonalize a matrix.

Note that different bases can yield the same matrix. For example, if you take a basis $F$ and multiply every vector in it by $2$, the matrix with respect to the new basis will be the same as the matrix with respect to $F$.
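A quick check of this claim in NumPy (illustrative matrices; the point is that a nonzero scalar applied to the whole basis cancels in $P^{-1}AP$):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, 3.0]])   # matrix of T in some fixed basis (illustrative)
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: the basis F, in that fixed basis's coordinates

B  = np.linalg.inv(P) @ A @ P            # matrix of T w.r.t. F
B2 = np.linalg.inv(2 * P) @ A @ (2 * P)  # matrix of T w.r.t. F with every vector doubled

print(np.allclose(B, B2))  # True: the scalar 2 cancels in P^{-1} A P
```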

I'm not sure I understand your second question. In a sense, though, it doesn't really matter what the "standard basis" happens to be; all that matters, as far as changes of basis are concerned, is the relationship between the bases. In particular, it suffices to find the $P$ that you describe, which plays the following role: for each vector $v = a_1e_1 + \cdots + a_ne_n$, there are constants $b_i$ such that $v = b_1f_1 + \cdots + b_n f_n$, and $P$ is the matrix with the property that $$ P \pmatrix{b_1\\ \vdots \\ b_n} = \pmatrix{a_1\\ \vdots \\ a_n}. $$ In order to find this $P$, we don't need to know whether $E$ is the standard basis, just how it relates to $F$.
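Concretely: if the $E$- and $F$-basis vectors are both written in some common coordinates, as columns of matrices $M_E$ and $M_F$, then $v = M_F b = M_E a$ gives $a = M_E^{-1} M_F\, b$, so $P = M_E^{-1} M_F$ regardless of which common coordinates were used. A NumPy sketch with made-up bases:

```python
import numpy as np

# Basis vectors of E and F, written as columns in some common
# coordinate system (which one does not matter). Illustrative choices:
M_E = np.array([[1.0, 1.0],
                [0.0, 1.0]])
M_F = np.array([[2.0, 0.0],
                [1.0, 1.0]])

# P converts F-coordinates b into E-coordinates a:
# from v = M_F b = M_E a we get a = M_E^{-1} M_F b.
P = np.linalg.solve(M_E, M_F)

# Check on a sample coordinate vector.
b = np.array([1.0, 2.0])        # some v, in F-coordinates
v = M_F @ b                     # the same v, in the common coordinates
a = np.linalg.solve(M_E, v)     # the same v, in E-coordinates
print(np.allclose(P @ b, a))    # True
```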

And finally, yes: JCF is ultimately an arbitrary but convenient choice. It happens to be easy to compute certain things for matrices in JCF: minimal and characteristic polynomials, matrix powers, and matrix exponentials, for example. A comparably useful form that is slightly less commonly used is the Weyr canonical form.
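As a small illustration of that convenience, take the standard non-diagonalisable example $J = \pmatrix{1 & 1\\ 0 & 1}$, which is already a single Jordan block. Writing $J = I + N$ with $N$ nilpotent ($N^2 = 0$) gives $J^n = I + nN$, so powers can be read off directly:

```python
import numpy as np

# A single 2x2 Jordan block with eigenvalue 1. It has a repeated
# eigenvalue but only a one-dimensional eigenspace, so it is not
# diagonalisable -- JCF is as simple as this matrix gets.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# J = I + N with N nilpotent (N @ N = 0), hence J**n = I + n*N.
N = J - np.eye(2)
n = 5
closed_form = np.eye(2) + n * N

print(np.allclose(np.linalg.matrix_power(J, n), closed_form))  # True
```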