Linear algebra theorem and connection between problem questions


Theorem: Let $V$ and $W$ be finite-dimensional vector spaces, and let $\{v_1,\ldots,v_n\}$ be a basis for $V$. Then for any vectors $w_1,\ldots,w_n \in W$ there exists a unique linear map $T: V \to W$ such that $T(v_i) = w_i$ for each $i=1,\ldots,n$.

I know how to prove the theorem, but I was working on several linear algebra questions of the form "show that there exists a linear transformation such that...", and many people online suggested finding a basis. I realized that this theorem may be exactly what establishes that such a linear transformation exists. Before working on those sorts of questions I had ignored what the theorem really means, thinking it wasn't that important, but it keeps coming back.

So does this theorem connect the two concepts of basis and linear transformation? It seems like it really helps with showing that a linear transformation exists, but I'm not sure how it does.

And any help in explaining what this theorem really helps with would be appreciated!

Thank you.

Edit #1: The motivation behind the clarification on the theorem was the following question: Prove that there exists a linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^3$ such that $T(1,1)=(1,0,2)$ and $T(2,3)=(1,-1,4)$.

And the HINT given for the question was: Show that $\{(1,1),(2,3)\}$ is a basis of $\mathbb{R}^2$.

And I'm not sure why showing that these vectors form a basis proves that such a linear transformation exists, which is why I connected the theorem with the notion of a basis.

I hope this clarifies where my confusion lies.
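(For concreteness, here is a quick numerical sketch of the hint in NumPy, using the vectors from the question; the variable names are my own. Since the two vectors form a basis of $\mathbb{R}^2$, the theorem produces a unique matrix for $T$ with the required values.)

```python
import numpy as np

# Columns are the proposed basis vectors (1,1) and (2,3) of R^2.
B = np.array([[1., 2.],
              [1., 3.]])

# Columns are the required images (1,0,2) and (1,-1,4) in R^3.
W = np.array([[1., 1.],
              [0., -1.],
              [2., 4.]])

# Nonzero determinant: the two vectors really are a basis of R^2,
# so the theorem applies and T exists and is unique.
assert np.linalg.det(B) != 0

# T is represented by the unique matrix A satisfying A @ B = W.
A = W @ np.linalg.inv(B)

# T sends (1,1) to (1,0,2) and (2,3) to (1,-1,4), as required.
assert np.allclose(A @ np.array([1., 1.]), [1., 0., 2.])
assert np.allclose(A @ np.array([2., 3.]), [1., -1., 4.])
```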


2 Answers


The theorem states that, with respect to any given basis $\{v_1,v_2,\ldots,v_n\}$ of $V$, once we define $T(v_i)=w_i$ for each basis vector $v_i$, the linear transformation is uniquely determined. Note that if we change basis, the matrix representing the linear transformation changes, but the linear transformation itself does not.
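A small numerical sketch of that last point (the map and bases here are my own example choices, not from the question): the same transformation has different matrices in different bases, yet acts identically on vectors.

```python
import numpy as np

# A fixed linear map T on R^2, written in the standard basis.
M_std = np.array([[1., 1.],
                  [0., 2.]])

# A different basis of R^2, stored as the columns of P.
P = np.array([[1., 1.],
              [1., -1.]])

# Matrix of the SAME map T with respect to the new basis.
M_new = np.linalg.inv(P) @ M_std @ P

# The two matrices differ...
assert not np.allclose(M_std, M_new)

# ...but they represent the same transformation: applying M_new to the
# coordinates of v in the new basis gives the coordinates of T(v).
v = np.array([3., 5.])
coords = np.linalg.solve(P, v)  # coordinates of v in the basis P
assert np.allclose(P @ (M_new @ coords), M_std @ v)
```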


The theorem shows that, given any vector space $V$ equipped with a basis $B= \{e_1,\ldots,e_n\}$, giving a linear map $\alpha\colon V \to W$ to any vector space $W$ is equivalent to specifying $\alpha$ on the basis $B$ alone: any function $f\colon B \to W$ extends uniquely to a linear map $\alpha_f\colon V \to W$.

To show that $\alpha$ is uniquely determined by $\alpha_{|B}\colon B \to W$, its restriction to $B$, it is enough for $B$ to be a spanning set, that is, for any $v\in V$ we may write $v=\sum_{i=1}^n \lambda_ie_i$ for some scalars $\lambda_1,\ldots,\lambda_n$. This is because, in that case, by the linearity of $\alpha$ we have $\alpha(v) = \sum_{i=1}^n \lambda_i\alpha(e_i)$.
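The spanning argument can be sketched numerically (my own example vectors, in NumPy): once $v$ is written in coordinates with respect to the basis, $\alpha(v)$ is forced by linearity.

```python
import numpy as np

# Basis of V = R^2 (as columns) and images alpha(e_i) in W = R^3 (as columns).
E = np.array([[1., 2.],
              [1., 3.]])
alpha_E = np.array([[1., 1.],
                    [0., -1.],
                    [2., 4.]])

v = np.array([4., 5.])

# Coordinates lambda_i with v = sum_i lambda_i e_i (they exist because E spans).
lam = np.linalg.solve(E, v)

# Linearity then forces alpha(v) = sum_i lambda_i * alpha(e_i).
alpha_v = alpha_E @ lam
```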

The second part of the theorem, which says that any function $f\colon B \to W$ extends to a linear map $\alpha_f \colon V \to W$, needs the fact that $B$ is linearly independent: if we write $w_i = f(e_i)$ for each $i$, then any linear map $\beta\colon V\to W$ which agrees with $f$ on the basis $B$ must, for $v =\sum_{i=1}^n \lambda_ie_i$, satisfy $\beta(v) = \sum_{i=1}^n \lambda_i\beta(e_i) = \sum_{i=1}^n \lambda_iw_i$.

On the other hand, the assignment $v \mapsto \sum_{i=1}^n \lambda_iw_i$ defines a linear map $\alpha_f\colon V \to W$ provided the scalars $\lambda_1,\ldots,\lambda_n$ are uniquely determined by $v \in V$. To see this, notice that if $\sum_{i=1}^n \lambda_i e_i = \sum_{i=1}^n \mu_ie_i$ then $\sum_{i=1}^n (\lambda_i-\mu_i)e_i=0$, and hence linear independence guarantees $\lambda_i = \mu_i$ for all $i \in \{1,2,\ldots,n\}$.

Once we know that the scalars $\lambda_1,\lambda_2,\ldots,\lambda_n$ are uniquely determined by $v \in V$, we may view them as functions $\lambda_i \colon V \to \mathbb R$, and, again using uniqueness, it is easy to check that, for each $i \in \{1,2,\ldots,n\}$, the function $v\mapsto \lambda_i(v)$ is linear; from that it follows immediately that $\alpha_f(v) = \sum_{i=1}^n \lambda_i(v)w_i$ is a linear map.
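The linearity of the coordinate functions $\lambda_i$ can also be checked numerically; here is a sketch with a basis of $\mathbb{R}^2$ of my own choosing.

```python
import numpy as np

# A basis of R^2, stored as the columns of E (an example choice).
E = np.array([[1., 2.],
              [1., 3.]])

def lam(v):
    # Coordinates of v in the basis; unique because the columns of E
    # are linearly independent (E is invertible).
    return np.linalg.solve(E, v)

u = np.array([1., 4.])
v = np.array([2., -3.])

# Each coordinate function lambda_i is linear:
assert np.allclose(lam(u + v), lam(u) + lam(v))
assert np.allclose(lam(2.5 * u), 2.5 * lam(u))
```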

Notice also that the theorem is the reason you can represent a linear map $\alpha\colon V \to W$ by a matrix: once you pick a basis $B = \{e_1,\ldots,e_n\}$ of $V$, the linear map $\alpha$ is uniquely determined by the $n$ vectors $\{\alpha(e_j): 1\leq j \leq n\}$. If you then pick a basis $B' = \{f_1,\ldots,f_m\}$ of $W$, you can record each vector $\alpha(e_j)$ as the column vector $(\mu^j_1,\ldots,\mu^j_m)^t$, where $\alpha(e_j) = \sum_{i=1}^m \mu^j_if_i$. This encodes the linear map (once you know the bases $B$ and $B'$) in the $m \times n$ matrix $(\mu^j_i)_{1\leq i \leq m,\,1\leq j\leq n}$, whose $j$-th column records $\alpha(e_j)$.
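This matrix-building recipe can be sketched in NumPy (all the bases and images below are my own example choices): column $j$ of the matrix holds the coordinates of $\alpha(e_j)$ in the basis $B'$.

```python
import numpy as np

# Basis B of V = R^2 (as columns) and basis B' of W = R^3 (as columns).
B = np.array([[1., 2.],
              [1., 3.]])
Bp = np.array([[1., 0., 0.],
               [1., 1., 0.],
               [0., 1., 1.]])  # columns f_1, f_2, f_3

# Images alpha(e_1), alpha(e_2) as columns (arbitrary example vectors).
alpha_E = np.array([[1., 1.],
                    [0., -1.],
                    [2., 4.]])

# Column j of M = coordinates of alpha(e_j) in the basis B':
# solve Bp @ M = alpha_E column by column.
M = np.linalg.solve(Bp, alpha_E)

# Sanity check: reconstruct each alpha(e_j) from its recorded coordinates.
assert np.allclose(Bp @ M, alpha_E)
assert M.shape == (3, 2)  # an m-by-n matrix for alpha: R^2 -> R^3
```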