Canonical Basis and Transformations in Linear Algebra


I know that it's possible to build the matrix of a linear transformation knowing only where the canonical basis vectors $\begin{bmatrix} 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \end{bmatrix}$ are sent by the transformation. I used this property, for example, to build a reflection matrix.

What I don't understand is why (formally and mathematically) this property can be exploited in linear algebra, and what the hidden consequences of this possibility are. I would be happy to consult a precise exposition of this fact. Can you help me?


There is 1 best solution below

On BEST ANSWER

The more general statement is the following.

Let $V$ and $W$ be $\mathbb{F}$-vector spaces and $T:V\rightarrow W$ a linear map. Let $\alpha=\left\{v_i\mid i\in I\right\}$ be a basis of $V$. Then $T$ is completely determined by knowing $T(v_i)$ for every $v_i\in \alpha$.

Proof: Let $v\in V$. Since $\alpha$ is a basis, we can write $v$ uniquely as a finite linear combination of the basis elements, i.e. $v=\sum_{i\in I}\lambda_iv_i$ for unique $\lambda_i\in \mathbb{F}$ and only finitely many $\lambda_i$ are nonzero. Then \begin{eqnarray} T(v) &=& T(\sum_{i\in I}\lambda_iv_i)\\ &=& \sum_{i\in I}\lambda_iT(v_i). \end{eqnarray}$\square$

To say this a bit differently: if $V$ is a vector space of dimension $n$, then we only need to know $T$ on $n$ linearly independent vectors to completely know what $T$ is doing. Aren't linear maps much easier than general functions? :)
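To see this concretely, here is a minimal NumPy sketch (the function name `matrix_from_basis_images` is my own): the matrix of a linear map on $\mathbb{R}^2$ is obtained by stacking the images of the canonical basis vectors as columns.

```python
import numpy as np

def matrix_from_basis_images(images):
    """Stack the images T(e_i) as columns to get the matrix of T."""
    return np.column_stack(images)

# Example: reflection across the line y = x sends
# e1 = (1, 0) -> (0, 1) and e2 = (0, 1) -> (1, 0).
T = matrix_from_basis_images([(0, 1), (1, 0)])

v = np.array([3, 5])
print(T @ v)  # the reflection swaps the coordinates: [5 3]
```

Since every $v = \sum_i \lambda_i e_i$, the product $Tv$ is exactly $\sum_i \lambda_i T(e_i)$, which is why storing the columns $T(e_i)$ suffices.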

There are many consequences to this fact. The most important is that you can choose which basis might help you in a certain problem. Let me give you an example:

Suppose that $S$ is the reflection about the plane $W$ generated by $(1,0,0)$ and $(0,1,0)$ in the direction of the line $U$ generated by $(1,1,1)$. Determine $S(x,y,z)$.

This might be an annoying problem until you realize that $S$ is linear, so it suffices to know $S$ on a basis. Notice that $S(w)=w$ for all $w\in W$ and $S(u)=-u$ for all $u\in U$. So take a basis of $\mathbb{R}^3$ consisting of a basis of $W$ together with a basis of $U$; then we know $S$ completely. To find $S(x,y,z)$ we only need to write $(x,y,z)$ w.r.t. the basis we chose.
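The change-of-basis computation can be sketched in NumPy as follows (a small illustration, assuming the basis $B = \{(1,0,0), (0,1,0), (1,1,1)\}$ from the example): in the basis $B$ the matrix of $S$ is just $\operatorname{diag}(1,1,-1)$, and conjugating by the change-of-basis matrix recovers $S$ in canonical coordinates.

```python
import numpy as np

# Columns of P are the chosen basis B: two vectors spanning W, one spanning U.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# In the basis B, S fixes the W-directions and negates the U-direction.
D = np.diag([1.0, 1.0, -1.0])

# Conjugate to express S in the canonical basis of R^3.
S = P @ D @ np.linalg.inv(P)

x, y, z = 3.0, 4.0, 1.0
print(S @ np.array([x, y, z]))  # -> [ 1.  2. -1.]
```

Working it out symbolically, $(x,y,z) = (x-z)(1,0,0) + (y-z)(0,1,0) + z(1,1,1)$, so $S(x,y,z) = (x-2z,\; y-2z,\; -z)$, which matches the matrix the code produces.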

Moreover, the basis chosen above is an example of an eigenbasis which leads you to diagonalization of linear maps. In short, almost everything you do in linear algebra starts from the realization that linear maps are determined by their action on a basis. The fact that linear maps correspond to matrices (after a choice of bases and in the finite-dimensional setting) is an immediate consequence as well.