Linear Transformation, zero vector mentioned explicitly


Let $T$ be a linear transformation from a vector space $U(F)$ into a vector space $V(F)$. Then $T(0) = 0$, where the $0$ on the left-hand side is the zero vector of $U$ and the $0$ on the right-hand side is the zero vector of $V$.
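(For context, the standard derivation of this fact, writing $0_U$ and $0_V$ for the two zero vectors, uses only additivity and cancellation in $V$:

$$T(0_U) = T(0_U + 0_U) = T(0_U) + T(0_U),$$

and adding $-T(0_U)$ to both sides gives $T(0_U) = 0_V$.)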

Why is there an explicit mention of different zero vectors? How can two vector spaces have two different definitions of the zero vector?

Best answer:

Because a vector space is an abstract object, the elements of which can be anything. Just to give an example: the zero of $\mathbb{R}$ is not an element of $\mathbb{R}^{\mathbb{N}}$.

Another answer:

The zero vector is just an element of the vector space that obeys certain conditions in relation to the addition and scalar multiplication operations.

To make this very obvious, take $\mathbb R$ as a set, for instance, with $(+,.)$ being the addition and scalar multiplication familiar even to high-school students. But this is not the only way to define a vector space structure on the set $\mathbb R$. Using the familiar operations, we can construct a different addition $+'$ and scalar multiplication $.'$ as follows:

  1. $v+'w=v+w-2$
  2. $k.'v = k.v - k.2 + 2$.
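As a quick numerical sketch of these definitions (the helper names `add_p` and `smul_p` are my own, not standard):

```python
def add_p(v, w):
    """The shifted addition:  v +' w = v + w - 2."""
    return v + w - 2

def smul_p(k, v):
    """The shifted scalar multiplication:  k .' v = k.v - k.2 + 2."""
    return k * v - 2 * k + 2
```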

One has to show that, as defined above, $(\mathbb R, +', .')$ forms a vector space. I will verify one of the axioms as a hint, and leave it as an exercise for you to show that the addition and scalar multiplication so defined satisfy all the axioms of a vector space.

Distributivity of scalar multiplication over $+'$: $k.'(v+'w)=k.(v+'w)-k.2+2=(k.v)+(k.w)-(4.k)+2$; similarly, $(k.'v)+'(k.'w)=(k.v-k.2+2)+'(k.w-k.2+2)=(k.v)+(k.w)-(4.k)+2$. So, one has $k.'(v+'w)=(k.'v)+'(k.'w)$.
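The distributivity computation above can be spot-checked numerically; this sketch (again with my own helper names `add_p` and `smul_p`) compares both sides on random inputs:

```python
import random

def add_p(v, w):
    return v + w - 2          # v +' w

def smul_p(k, v):
    return k * v - 2 * k + 2  # k .' v

# Check k .' (v +' w) == (k .' v) +' (k .' w) on random samples.
for _ in range(1000):
    k, v, w = (random.uniform(-10, 10) for _ in range(3))
    lhs = smul_p(k, add_p(v, w))
    rhs = add_p(smul_p(k, v), smul_p(k, w))
    assert abs(lhs - rhs) < 1e-9
```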

Now, assuming that you have verified the rest, note that the zero of $(\mathbb R, +', .')$ is actually $2$. This can be seen from the definition: let $0'$ be the additive identity of $(\mathbb R, +', .')$. By the property of the additive identity, one has $0'+'a=a$, so $0'+a-2=a$, which gives $0'=2$. By a similar computation, the zero of $(\mathbb R, +, .)$ is just the usual $0$. This should convince you that the zero vector is not something that is unique to a given set. It is unique for a given vector space structure, but for different structures (even on the same set), the zeros can be different.
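That $2$ behaves as the additive identity for $+'$ can also be checked directly (a small sketch; the helper name `add_p` is mine):

```python
def add_p(v, w):
    return v + w - 2  # v +' w

# 2 is the additive identity for +':  2 +' a == a for every a.
for a in [-3.5, 0.0, 1.0, 7.25]:
    assert add_p(2, a) == a
    assert add_p(a, 2) == a
```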

Now, for the idea of a linear map, define $T:\mathbb R \to \mathbb R$ as $T(v)=v+2$. Is this map linear when viewed as a map from $(\mathbb R, +, .)$ to $(\mathbb R, +', .')$? Let's check. For it to be linear, the following condition should be satisfied: $T(k_1.v_1+k_2.v_2)=k_1.'T(v_1)+'k_2.'T(v_2)$. Note that on the right-hand side we have $+'$ and $.'$. This makes sense, as $T(v) \in (\mathbb R, +', .')$. I leave it to you to verify that it is indeed a linear map. Further, $T(0)=2=0'$; so things are as expected.
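The linearity condition above can be sketched numerically as well (function names are my own choices):

```python
import random

def add_p(v, w):
    return v + w - 2          # v +' w

def smul_p(k, v):
    return k * v - 2 * k + 2  # k .' v

def T(v):
    return v + 2

# Linearity of T from (R, +, .) into (R, +', .'):
# T(k1.v1 + k2.v2) == (k1 .' T(v1)) +' (k2 .' T(v2))
for _ in range(1000):
    k1, k2, v1, v2 = (random.uniform(-5, 5) for _ in range(4))
    lhs = T(k1 * v1 + k2 * v2)
    rhs = add_p(smul_p(k1, T(v1)), smul_p(k2, T(v2)))
    assert abs(lhs - rhs) < 1e-9
```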

What happens if you instead consider $T(v)=v+3$? Well, the short answer is that it is not a linear map for the given vector space structures, and rightly so: $T(0)=3\neq 0'$. Verify this.
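A small sketch of this failure (I call the non-linear map `S` here to avoid clashing with $T$ above; `add_p` is my name for $+'$):

```python
def add_p(v, w):
    return v + w - 2  # v +' w

def S(v):
    return v + 3

# Additivity already fails: S(v + w) != S(v) +' S(w).
v, w = 1.0, 4.0
assert S(v + w) == 8.0           # S(5) = 8
assert add_p(S(v), S(w)) == 9.0  # 4 +' 7 = 9
# And S does not send the zero of (R, +, .) to the zero 2 of (R, +', .'):
assert S(0) == 3
```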

Addendum: This example is not unique in any sense to $\mathbb R$. There is a notion called an affine space, which can be thought of as carrying a whole family of vector space structures of the kind described above, one for each choice of origin.