Unable to understand the proof of two isomorphic finite-dimensional vector spaces having the same dimension


Theorem: Two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

I can understand how to prove that if they are isomorphic then they have the same dimension, but I cannot fully understand the other direction.

To quote Axler's Linear Algebra Done Right, 2nd edition page 55:

To prove the other direction, suppose $V$ and $W$ are finite-dimensional vector spaces with the same dimension. Let $(v_1,...,v_n)$ be a basis of $V$ and $(w_1,...,w_n)$ be a basis of $W$. Let $T$ be the linear map from $V$ to $W$ defined by $$T(a_1v_1+...+a_nv_n)=a_1w_1+...+a_nw_n \ (*)$$ Then $T$ is surjective because $(w_1,...,w_n)$ spans $W$, and $T$ is injective because $(w_1,...,w_n)$ is linearly independent. Because $T$ is injective and surjective, $T$ is invertible.

I cannot understand how we can define a $T$ that satisfies $(*)$ above. I don't think Axler has given a proof that such a map can be defined. Could somebody help me with this, please?


BEST ANSWER

As he mentions in the definition, you can always choose bases for the vector spaces $V$ and $W$ respectively. Then define $T$ on the basis by $$T : v_i \mapsto w_i$$ and extend linearly. That gives the relationship in $(*)$.

ANSWER

You could also say that if $v \in V$, then $T(v)$ is defined to be the vector $a_1 w_1 + \cdots + a_n w_n$, where $(a_1,\ldots,a_n)$ is the unique $n$-tuple of scalars such that \begin{equation} v = a_1 v_1 + \cdots + a_n v_n. \end{equation}

The key point is that there is only one way to write $v$ as a linear combination of the vectors $v_1,\ldots, v_n$. The coefficients in that linear combination are unique. So, given $v$, the output vector $a_1 w_1 + \cdots + a_n w_n$ is perfectly well-defined.
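To make this concrete, here is a small numerical sketch in $\mathbb{R}^2$ (my own toy example, not from the answer): because the coordinates $(a_1, a_2)$ of $v$ in the chosen basis are unique, the rule $(*)$ assigns exactly one output to each input.

```python
# Sketch of the construction for dim V = dim W = 2, with hypothetical bases.

def coords_2d(v, b1, b2):
    """Unique (a1, a2) with v = a1*b1 + a2*b2, via Cramer's rule on a 2x2 system."""
    det = b1[0] * b2[1] - b2[0] * b1[1]          # nonzero because (b1, b2) is a basis
    a1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    a2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return a1, a2

def T(v, V_basis, W_basis):
    """T(a1*v1 + a2*v2) = a1*w1 + a2*w2 -- exactly the rule (*)."""
    (v1, v2), (w1, w2) = V_basis, W_basis
    a1, a2 = coords_2d(v, v1, v2)
    return (a1 * w1[0] + a2 * w2[0], a1 * w1[1] + a2 * w2[1])

V_basis = ((1, 1), (0, 1))     # a (non-standard) basis of R^2
W_basis = ((2, 0), (0, 3))     # a basis of the target copy of R^2

v = (3, 5)                     # v = 3*(1,1) + 2*(0,1), so T(v) = 3*(2,0) + 2*(0,3)
print(T(v, V_basis, W_basis))  # -> (6.0, 6.0)
```

Every input has exactly one coordinate tuple, so `T` never has to "choose" between two candidate outputs; that is what well-defined means here.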

ANSWER

A linear map $T:V\to W$ is completely determined by how it acts on a basis of $V$, because any $v \in V$ can be expressed uniquely as a linear combination of the basis: $v = a_1v_1+\cdots+a_nv_n$.

I'll try to explain the construction (Axler explains it on pages 39-40).

First, we specify how $T$ acts on the basis by letting $Tv_i=w_i$ for all $i \in \{1,\dots, n\}$.

Then, because we want the map to be linear, we define $T$ as $$T(v)=T(a_1v_1+\cdots+a_nv_n)= a_1w_1+\cdots+a_nw_n.$$

As @LittleO's answer explains, the fact that the representation of $v$ is unique is what makes this a well-defined map.

To show that this map is indeed linear, let $c$ be a scalar and let $v=a_1v_1+\cdots+a_nv_n$ and $u=b_1v_1+\cdots+b_nv_n$ be vectors in $V$.

Then \begin{align} T(v+u)&=T((a_1+b_1)v_1+\cdots+(a_n+b_n)v_n)\\ &=(a_1+b_1)w_1+\cdots+(a_n+b_n)w_n \\ &= (a_1w_1+\cdots+a_nw_n)+(b_1w_1+\cdots+b_nw_n)\\ &= Tv+Tu.\end{align}

And \begin{align} T(cv)&=T(c(a_1v_1+\cdots+a_nv_n))\\ &=T((ca_1)v_1+\cdots +(ca_n)v_n)\\ &=(ca_1)w_1+\cdots + (ca_n)w_n\\ &=c(a_1w_1+\cdots+a_nw_n)\\ &=cTv. \end{align}
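The two identities above can be sanity-checked numerically. This is a quick sketch in $\mathbb{R}^2$ with a hypothetical choice of bases (my own example); the arithmetic is exact on these small integers, so float equality is safe here.

```python
# Numerical check of additivity and homogeneity for the map defined on a basis.

def coords(v, b1, b2):
    # unique coordinates of v in the basis (b1, b2), via Cramer's rule
    det = b1[0] * b2[1] - b2[0] * b1[1]
    return ((v[0] * b2[1] - b2[0] * v[1]) / det,
            (b1[0] * v[1] - v[0] * b1[1]) / det)

v1, v2 = (1, 1), (0, 1)        # basis of V
w1, w2 = (2, 0), (0, 3)        # basis of W

def T(v):
    # T(a1*v1 + a2*v2) = a1*w1 + a2*w2
    a1, a2 = coords(v, v1, v2)
    return (a1 * w1[0] + a2 * w2[0], a1 * w1[1] + a2 * w2[1])

u, v, c = (3, 5), (-1, 4), 7.0
add = lambda x, y: (x[0] + y[0], x[1] + y[1])
scale = lambda s, x: (s * x[0], s * x[1])

print(T(add(u, v)) == add(T(u), T(v)))   # additivity holds
print(T(scale(c, v)) == scale(c, T(v)))  # homogeneity holds
```

Of course, a numerical spot check is no substitute for the symbolic computation in the answer; it just illustrates what the two identities say.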

ANSWER

You must understand that a linear map from one vector space $V$, with basis $\mathcal B$, to another vector space $W$ is entirely characterised by the images in $W$ of the vectors of $\mathcal B$. If the basis is finite and has $n$ vectors (i.e. if $\dim V=n$), this can be written as $$L(V,W)\simeq W^n.$$ This is precisely the reason why, for finite-dimensional vector spaces, once bases have been chosen in $V$ and $W$, we can identify linear maps with matrices.
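As a small illustration of this identification (my own toy example, not from the answer): once bases are fixed, storing the $W$-coordinates of each $T(v_i)$ as the columns of a matrix recovers $T$ in full, since applying the matrix to a coordinate vector is exactly the rule $(*)$.

```python
# A linear map in coordinates: column i of M holds the W-coordinates of T(v_i).

def mat_vec(M, x):
    # multiply a matrix (list of rows) by a coordinate vector
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

# Say T(v1) = 2*w1 + 1*w2 and T(v2) = 0*w1 + 3*w2 (a hypothetical choice).
M = [[2, 0],
     [1, 3]]

# A vector with V-coordinates (a1, a2) = (3, 2) is sent to M applied to (3, 2):
print(mat_vec(M, [3, 2]))  # -> [6, 9], i.e. 6*w1 + 9*w2
```

The $n$ columns are an arbitrary element of $W^n$, which is the isomorphism $L(V,W)\simeq W^n$ in action.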

ANSWER

In the comments, you write:

That still puzzles me actually. I'm thinking about explicit formula for Linear mapping. How can such a formula be defined that any arbitrary element in one vector space can be mapped to another arbitrary element in the target space?

The formula $(*)$ you quote directly gives a definition of a map $T$ from $V$ to $W$ for any linear combination of the basis vectors $(v_1, \dotsc, v_n)$.

Since the vectors $(v_1, \dotsc, v_n)$ form a basis of $V$, then, by definition, every vector in $V$ can be (uniquely) written as a linear combination of them, so $(*)$ actually defines the map $T$ for all vectors in $V$.

The remaining question then is, is the map $T$ linear? To check that, go back to the definitions of linearity. Basically, you need to show that $T(u+v) = T(u) + T(v)$ and $T(\alpha v) = \alpha T(v)$ for all vectors $u,v \in V$ and all scalars $\alpha$. Can you do that?