Square matrix: implicit choice of the basis


I have encountered the discussion of similar matrices in Apostol's Calculus, Vol. 2, and how eigenvalues are defined for the linear transformation $T: V \rightarrow V$ itself, not for its matrix representation (call it $A = m(T)$). Thus all representations of $T$ have the same eigenvalues, but the form of the matrix $A$ changes depending on the basis chosen for $V$.
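This invariance can be checked numerically. Below is a minimal sketch in plain Python (2×2 only; the matrix $A$ and the change-of-basis matrix $P$ are made-up illustrative numbers): the two representations $A$ and $B = P^{-1}AP$ of the same $T$ have the same eigenvalues.

```python
# Sketch (illustrative numbers, plain Python, 2x2 only): two matrix
# representations of the same T, namely A and B = P^{-1} A P, share
# their eigenvalues.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    a, b = P[0]; c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def eigenvalues2(M):
    # roots of the characteristic polynomial t^2 - tr(M) t + det(M)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[2, 1], [0, 3]]               # m(T) in one basis
P = [[1, 1], [0, 1]]               # invertible change-of-basis matrix
B = matmul(inv2(P), matmul(A, P))  # m(T) in another basis

print(eigenvalues2(A))  # [2.0, 3.0]
print(eigenvalues2(B))  # [2.0, 3.0]
```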

Now I am a little bit confused. When Apostol derives these results, he always states the bases with respect to which the matrices are taken. Moreover, in all the theorems it is implicitly assumed that if $T: V \rightarrow W$ with $V = W$, then the same basis is selected for both $V$ and $W$.

But in real life, when I encounter matrices (in linear regression, for example), what are their bases? Is it the unit coordinate vectors of $V_n$? Or is it something else?

Is it customary in practice to take the domain and codomain of a transformation to be $V_n$ and $V_m$, or can they be viewed in some other way? After all, when a matrix is given in engineering or statistics applications, nothing is stated about its domain, codomain, or the selected bases.

Why am I supposed to think that if I have a square matrix in one of my applications, the selected basis for its range $T(V) \subseteq W$ is the same as the selected basis of its domain $V$? For example, why do we assume that the matrix $\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$ represents the identity transformation, when the identity could just as easily be $\begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}$ if we chose the basis $\{(1, 0), (0, 1)\}$ for the domain $V_2$ and the basis $\{(0, 1), (1, 0)\}$ for the codomain $W_2$, with $I: V_2 \rightarrow W_2$, $V_2 = W_2$?
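The swapped-basis example can be checked directly. The sketch below (pure Python, 2×2 only, with hypothetical helper names) builds the matrix of the identity map column by column: column $j$ holds the codomain-basis coordinates of the image of the $j$-th domain basis vector.

```python
# Sketch (pure Python, 2x2 only, hypothetical helper names): the matrix
# of the identity map depends on the bases chosen for domain and codomain.
# Column j of the matrix holds the codomain-basis coordinates of the
# image of the j-th domain basis vector.

def coords(v, basis):
    # solve x*b1 + y*b2 = v by Cramer's rule; "+ 0.0" turns -0.0 into 0.0
    (a, c), (b, d) = basis
    det = a * d - b * c
    return [(v[0] * d - b * v[1]) / det + 0.0,
            (a * v[1] - c * v[0]) / det + 0.0]

def matrix_of_identity(domain_basis, codomain_basis):
    cols = [coords(bj, codomain_basis) for bj in domain_basis]
    return [[cols[j][i] for j in range(2)] for i in range(2)]

same = matrix_of_identity([(1, 0), (0, 1)], [(1, 0), (0, 1)])
swapped = matrix_of_identity([(1, 0), (0, 1)], [(0, 1), (1, 0)])
print(same)     # [[1.0, 0.0], [0.0, 1.0]]
print(swapped)  # [[0.0, 1.0], [1.0, 0.0]]
```

With the same basis on both sides the identity gets the usual identity matrix; with the swapped codomain basis it gets the permutation matrix, exactly as the question suggests.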

Answer:

I think it is enough here to show that, once you have defined the matrix associated to an endomorphism $T$ with respect to a chosen basis $\mathcal{B}$ of $V$, and the determinant of the endomorphism $\mbox{det}(T) := \mbox{Det}(M_{\mathcal{B}}(T))$, the characteristic polynomial of the endomorphism, defined as $p_{T}(t) := \mbox{Det}(M_{\mathcal{B}}(T)-tI_{d})$, is well defined. Once that is said, if the characteristic polynomial does not change under a change of basis, then the eigenvalues, which are the roots of $p_{T}(t)$, do not change either, and they are well defined as the roots of $p_{T}(t)$ in every basis of $V$.

Now, what happens if we take a basis $\mathcal{B'} \ne \mathcal{B}$? How is $M_{\mathcal{B'}}(T)$ related to $M_{\mathcal{B}}(T)$? You can prove that there exists a unique invertible matrix $P$ (hence, one associated to an isomorphism) such that $[v]_{\mathcal{B}} = P[v]_{\mathcal{B'}}$, where $[v]_{\mathcal{B}}$ denotes the coordinates of $v \in V$ with respect to the basis $\mathcal{B}$. So, if we remember how the change of basis matrix was built, we have $$[]_{\mathcal{B}}^{-1} \circ M_{\mathcal{B}} \circ []_{\mathcal{B}} = (P\circ[]_{\mathcal{B'}})^{-1} \circ M_{\mathcal{B}} \circ (P\circ[]_{\mathcal{B'}}) = []_{\mathcal{B'}}^{-1} \circ P^{-1}\circ M_{\mathcal{B}} \circ P \circ []_{\mathcal{B'}} = []_{\mathcal{B'}}^{-1} \circ M_{\mathcal{B'}} \circ []_{\mathcal{B'}}.$$

So we have discovered how the matrix changes in the new basis $\mathcal{B'}$: precisely, $M_{\mathcal{B'}} = P^{-1}\circ M_{\mathcal{B}} \circ P$.
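The relation $M_{\mathcal{B'}} = P^{-1} M_{\mathcal{B}} P$ can be sanity-checked on coordinates. A sketch with illustrative numbers (assumed here for the example): applying $M_{\mathcal{B'}}$ to $[v]_{\mathcal{B'}}$ must agree with mapping to $\mathcal{B}$-coordinates via $P$, applying $M_{\mathcal{B}}$, and mapping back via $P^{-1}$.

```python
# Sketch (illustrative numbers): M_B' = P^{-1} M_B P acts on B'-coordinates
# exactly as the composition P^{-1} o M_B o P does, step by step.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inv2(P):
    a, b = P[0]; c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M_B = [[4, 1], [2, 3]]                    # matrix of T in basis B
P = [[1, 2], [1, 3]]                      # [v]_B = P [v]_B'
M_Bp = matmul(inv2(P), matmul(M_B, P))    # matrix of T in basis B'

v_Bp = [5, -1]                            # coordinates of some v in B'
v_B = matvec(P, v_Bp)                     # the same v in B
lhs = matvec(M_Bp, v_Bp)                  # [Tv]_B' computed directly
rhs = matvec(inv2(P), matvec(M_B, v_B))   # [Tv]_B pulled back to B'
print(lhs, rhs)  # [18.0, -2.0] [18.0, -2.0]
```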

Having noticed this, if we go back to our problem, in the new basis $\mathcal{B'}$ we have $$p_{T,\mathcal{B'}}(t) = \mbox{Det}(M_{\mathcal{B'}}(T)-tI_{d}) = \mbox{Det}(P^{-1}M_{\mathcal{B}}(T)P-tI_{d}) = \mbox{Det}(P^{-1}M_{\mathcal{B}}(T)P-tP^{-1}I_{d}P) = \mbox{Det}(P^{-1}(M_{\mathcal{B}}(T)-tI_{d})P) = \mbox{Det}(P^{-1})\mbox{Det}(M_{\mathcal{B}}(T)-tI_{d})\mbox{Det}(P)$$ $$=\mbox{Det}(P)^{-1}\mbox{Det}(M_{\mathcal{B}}(T)-tI_{d})\mbox{Det}(P) = \mbox{Det}(M_{\mathcal{B}}(T)-tI_{d}) = p_{T,\mathcal{B}}(t).$$
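For $2\times 2$ matrices the characteristic polynomial is $p(t) = t^2 - \mbox{tr}(M)\,t + \mbox{Det}(M)$, so the invariance above amounts to similar matrices sharing trace and determinant. A quick sketch (illustrative numbers, same hedged helpers as before):

```python
# Sketch (illustrative numbers): for 2x2 matrices the characteristic
# polynomial is t^2 - tr(M) t + Det(M), so similar matrices, which share
# trace and determinant, share the polynomial.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    a, b = P[0]; c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def char_poly2(M):
    # coefficients of t^2, t^1, t^0 in Det(M - t I)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [1, -tr, det]

M_B = [[4, 1], [2, 3]]
P = [[1, 2], [1, 3]]
M_Bp = matmul(inv2(P), matmul(M_B, P))

print(char_poly2(M_B))   # [1, -7, 10]
print(char_poly2(M_Bp))  # [1, -7.0, 10.0]
```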