Is the matrix that we get from diagonalization the only possible diagonal matrix that matrix $A$, say, can be transformed into, assuming that $A$ is diagonalizable? I think it is, but I don't know how to prove it. The way I look at diagonalization now is that it is an algorithm, but I am not sure whether the matrix we get from it is the only diagonal matrix we can obtain from $A$. Can someone please explain this to me? Thanks.
3.2k views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 4 best solutions below.
When we "diagonalize $A$", we find a diagonal matrix $D$ that is similar to $A$. That is, we want to find some $D$ such that for some invertible $S$, $$ A = SDS^{-1} $$ Your question, if I understand it correctly, is "if $A$ is similar to diagonal matrices $D_1$ and $D_2$, is it necessarily true that $D_1 = D_2$"?
Note that the above can only happen if $D_1$ is similar to $D_2$. So, your question becomes "which diagonal matrices are similar to each other"?
The answer is that two diagonal matrices are similar exactly when they have the same diagonal entries (though not necessarily in the same order). So, for example, $$ \pmatrix{1\\&2\\&&3}, \quad \pmatrix{2\\&3\\&&1} $$ are similar to each other, and both are diagonalizations of $$ A = \pmatrix{1&5&4\\0&2&1\\0&0&3} $$
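This example can be checked with a computer algebra system. Here is a small sketch using SymPy (the variable names are mine); `diagonalize()` returns $S$ and $D$ with $A = SDS^{-1}$:

```python
from sympy import Matrix

# The upper-triangular example matrix from above
A = Matrix([[1, 5, 4],
            [0, 2, 1],
            [0, 0, 3]])

# diagonalize() returns S and D such that A = S * D * S**-1
S, D = A.diagonalize()

print(D.diagonal())          # the eigenvalues 1, 2, 3 (in some order)
print(S * D * S.inv() == A)  # True: this really is a similarity
```

Any reordering of the columns of `S` (with the diagonal of `D` reordered to match) gives another valid diagonalization of the same matrix.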
Proving that diagonal matrices are similar iff they have the same diagonal entries:
First, we note that if $A$ and $B$ are similar, then so are $A - \lambda I$ and $B - \lambda I$ for every choice of scalar $\lambda$ (verify that this is the case), and that two similar matrices must have the same rank (again, verify).
If two matrices have the same diagonal entries, we can find a similarity between them as described by user187373 (i.e. using permutation matrix similarities).
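The permutation-matrix similarity mentioned here is easy to verify concretely: conjugating a diagonal matrix by a permutation matrix just reorders its diagonal entries. A small sketch (the matrices are my own example):

```python
from sympy import Matrix

D1 = Matrix([[1, 0, 0],
             [0, 2, 0],
             [0, 0, 3]])

# Permutation matrix: sends e1 -> e2, e2 -> e3, e3 -> e1
P = Matrix([[0, 0, 1],
            [1, 0, 0],
            [0, 1, 0]])

D2 = P * D1 * P.inv()
print(D2)  # diagonal matrix with entries 3, 1, 2: same entries, reordered
```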
If two diagonal matrices $A$ and $B$ have differing diagonal entries, then there is some $\lambda$ appearing on the diagonal of $A$ that appears with strictly smaller multiplicity on the diagonal of $B$. Note that the rank of an $n \times n$ diagonal matrix is $n$ minus the number of $0$ entries on the diagonal. So, in particular, $A - \lambda I$ and $B - \lambda I$ have different ranks, and so cannot be similar. Thus, $A$ and $B$ are not similar.
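The rank argument can be illustrated numerically; the following sketch uses SymPy with made-up diagonal matrices sharing the eigenvalue $\lambda = 1$ with different multiplicities:

```python
from sympy import diag, eye

A = diag(1, 1, 2)   # eigenvalue 1 with multiplicity 2
B = diag(1, 2, 2)   # eigenvalue 1 with multiplicity 1

lam = 1
print((A - lam * eye(3)).rank())  # 1: two zeros appear on the diagonal
print((B - lam * eye(3)).rank())  # 2: only one zero appears

# A - I and B - I have different ranks, so A and B cannot be similar.
```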
A more typical approach to this second part is as follows: if $A$ and $B$ are similar, then $\det(A - \lambda I)$ and $\det(B - \lambda I)$ must be the same; that is, they must give the same polynomial in $\lambda$ (these are the "characteristic" polynomials of $A$ and $B$). It suffices to check that these polynomials are different if the diagonal entries are different, noting that the determinant of a diagonal matrix is the product of the diagonal entries.
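The characteristic-polynomial version of the argument can also be checked directly; a sketch with my own example matrices:

```python
from sympy import diag, symbols

x = symbols('lambda')

A = diag(1, 2, 3)
B = diag(2, 3, 1)   # same diagonal entries, different order
C = diag(1, 2, 2)   # different diagonal entries

print(A.charpoly(x) == B.charpoly(x))  # True: same multiset of entries
print(A.charpoly(x) == C.charpoly(x))  # False: different entries
```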
Any matrix $A$ represents a certain transformation $T$ in the standard basis. But the same transformation $T$ will usually be represented by a different matrix if you look at $T$ with respect to another basis.
When you diagonalize $A$, what you're doing is finding another basis in which the matrix for the transformation $T$ is a diagonal one. For example, if the matrix can be diagonalized as $D = \pmatrix{2 & 0 \\ 0 & 3}$ in a basis $(f_1,f_2)$ (which means that $T(f_1) = 2f_1$ and $T(f_2) = 3f_2$), then the matrix of the same transformation in the basis $(f_2,f_1)$ is $\pmatrix{3 & 0 \\ 0 & 2}$. So the diagonalized form is not unique.
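The $2 \times 2$ situation described above can be made fully concrete. In the sketch below (a hypothetical matrix with eigenvalues $2$ and $3$), the two change-of-basis matrices list the same eigenvectors in the two possible orders, and produce the two diagonal forms:

```python
from sympy import Matrix

# A hypothetical 2x2 matrix with eigenvalues 2 and 3
A = Matrix([[2, 1],
            [0, 3]])

# Eigenbasis (f1, f2): f1 = (1,0) for eigenvalue 2, f2 = (1,1) for eigenvalue 3
S1 = Matrix([[1, 1],
             [0, 1]])
print(S1.inv() * A * S1)  # diag(2, 3)

# The same eigenvectors in the swapped order (f2, f1)
S2 = Matrix([[1, 1],
             [1, 0]])
print(S2.inv() * A * S2)  # diag(3, 2)
```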
However, the diagonal elements are the roots of the characteristic polynomial of $A$ (including multiplicities), which is also the characteristic polynomial of $D$. (In our example, this polynomial must be $(\lambda-2)(\lambda-3)$.) Since a polynomial can be factored into linear factors $\lambda-\lambda_i$ in only one way, the diagonal elements of the diagonalized form $D$ are well determined up to order. That is, any other diagonalized form will simply reorder the diagonal elements.
In general, a linear operator $T$ on a finite dimensional vector space $V$ will have a diagonal matrix when expressed in a basis $[b_1,\ldots,b_n]$ if and only if every $b_i$ is an eigenvector of$~T$; moreover if $\lambda_i$ denotes the corresponding eigenvalue (for $i=1,\ldots,n$; these eigenvalues are not necessarily all distinct), then $\lambda_1,\ldots,\lambda_n$ are the diagonal entries of that diagonal matrix. (In case you are unfamiliar with the term linear operator, it suffices to know that the linear operators on $\Bbb R^n$ correspond precisely to the real $n\times n$ matrices, namely they are the maps $\Bbb R^n\to\Bbb R^n$ given by $x\mapsto Ax$ for some such matrix$~A$; the diagonalisation of$~A$ is obtained by change of basis from the standard basis of$~\Bbb R^n$ to $[b_1,\ldots,b_n]$.)
From this description it is clear that from one diagonal form$~D$ of$~A$ one can always permute the diagonal entries of$~D$ to obtain different diagonal forms: it suffices to permute the vectors of the basis $[b_1,\ldots,b_n]$. They will of course still remain a basis of eigenvectors of$~T$, with the corresponding list of eigenvalues also permuted. Therefore the diagonal form is almost never unique; this only happens if all $\lambda_i$ are equal (and in this very exceptional case every basis is a basis of eigenvectors, so the original $A$ was already diagonal).
Permutations of the basis vectors are not the only changes one can make to a basis of eigenvectors to obtain another one; for instance multiplying the individual basis vectors by nonzero scalars is also allowed. However, it is true that permutation of diagonal entries is the only liberty one has in choosing the diagonalised form of a matrix. To see this, note that the set of all eigenvectors for$~T$ can be easily found relative to the basis $[b_1,\ldots,b_n]$ of eigenvectors. The eigenvectors for some $\lambda$ are the nonzero vectors annihilated by $T-\lambda\mathrm{id}$, whose matrix is $D-\lambda I$. So there are no such vectors unless $\lambda$ occurs among $\lambda_1,\ldots,\lambda_n$, in which case they are all possible nonzero linear combinations of those $b_i$ with $\lambda_i=\lambda$ (often there is only one such$~b_i$, but there might be more). Thus the dimension of the eigenspace for$~\lambda$ is equal to the number of times$~\lambda$ occurs in the list $\lambda_1,\ldots,\lambda_n$; since this dimension does not depend on which basis of eigenvectors we started out with, this multiplicity cannot depend on it either. This proves that permutation of the diagonal entries $\lambda_1,\ldots,\lambda_n$ is all the freedom one has as far as the diagonal form of a diagonalisable matrix is concerned.
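The eigenspace-dimension count in this paragraph can be checked directly: for a diagonal matrix $D$, the null space of $D - \lambda I$ has dimension equal to the multiplicity of $\lambda$ on the diagonal. A sketch with my own example:

```python
from sympy import diag, eye

D = diag(5, 5, 7)   # eigenvalue 5 with multiplicity 2, eigenvalue 7 with multiplicity 1

for lam in (5, 7):
    # nullspace() returns a basis of the eigenspace for lam
    dim = len((D - lam * eye(3)).nullspace())
    print(lam, dim)   # multiplicity of lam: 2 for lam = 5, 1 for lam = 7
```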
We also see that the sum of the dimensions of all eigenspaces is precisely $n$ (remember we are assuming the diagonalisable case). Also any basis consisting of eigenvectors is obtained by choosing a basis in each separate eigenspace, and combining these (the result is a basis of $V$). These matters are related to the fundamental fact that eigenspaces for different eigenvalues always form a direct sum, and that $T$ is diagonalisable if and only if the sum of all eigenspaces fills up the whole space$~V$.
If a matrix is diagonalizable, then its Jordan Canonical Form consists of $1\times 1$ blocks. These are unique up to order. That is, the diagonal entries are uniquely determined in their values, but may be arranged in any order.
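A quick computational check of this fact, reusing the $3 \times 3$ example from the first answer (a sketch using SymPy's `jordan_form`):

```python
from sympy import Matrix

# The diagonalizable example matrix from the first answer
A = Matrix([[1, 5, 4],
            [0, 2, 1],
            [0, 0, 3]])

P, J = A.jordan_form()
print(J)  # diagonal: every Jordan block is 1x1, with entries 1, 2, 3
```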