Can a vector in $\mathbb R^4$ that lives in a 3-dimensional subspace be represented by both a 3-tuple and a 4-tuple?


$$A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 2 & 0 \\ 0 & 2 & 3 & 1 \\ 2 & 0 & 0 & 2 \end{bmatrix}.$$ When $A$ acts on $x = [1,2,3,4]$ we get $img(x) = [9, 8, 13, 12]$ that lives in $c(A)$ with dim = $3$ even though it is expressed as a 4-tuple. If we transpose $A$, bring it to RREF, and transpose it back, we have an equivalent matrix $$B = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$ Now we can take $$B' = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & -1 \end{bmatrix}$$ as the basis for $c(A)$ and image of $x$ thus can be expressed as $[9, 8, 13]$ using $B'$.

So $A$ maps a vector that lives in $\mathbb R^4$ to one that lives in $\mathbb R^3$ but can be represented by both 3-tuple and 4-tuple?

Please help me!

Best answer

The short answer is: yes. In order to understand this, I think it’s important to distinguish between the vectors, which can be any type of object, and their coordinate tuples. This can get confusing when the vectors are themselves tuples of scalars, so I’ll illustrate the ideas with the vector space $P_3[x]$, the space of polynomials in $x$ of degree $\le3$ with real coefficients. Vectors in this space are polynomials of the form $a+bx+cx^2+dx^3$, and it’s a four-dimensional vector space over $\mathbb R$. If you haven’t encountered this before, it’s a useful exercise to verify that, with the usual polynomial addition and scalar multiplication, this space does indeed satisfy the vector space axioms.

Consider the ordered basis $\mathcal E = (1,x,x^2,x^3)$ of this space. Relative to this basis, the coordinates of the vector $\mathbf v = a+bx+cx^2+dx^3$ are just the tuple of its coefficients, $[\mathbf v]_{\mathcal E} = (a,b,c,d)^T$. Thus, elements of this vector space can be represented as 4-tuples of reals. Indeed, fixing a basis $\mathcal B$ for an $n$-dimensional vector space $V$ over the field $\mathbb K$ amounts to defining an isomorphism $\phi:V\to\mathbb K^n$, $\phi:\mathbf v\mapsto[\mathbf v]_{\mathcal B}$.
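As a quick illustrative sketch (my own, not part of the original answer), here is the coordinate isomorphism for $P_3[x]$ relative to $\mathcal E$, with polynomials modeled as callables built from coefficient tuples:

```python
# Polynomials in P_3[x] modeled as callables built from a coefficient tuple
# (a, b, c, d), i.e. a + b*x + c*x^2 + d*x^3. Names are illustrative.

def poly(coeffs):
    """Return the polynomial a + b*x + c*x^2 + d*x^3 as a callable."""
    return lambda x: sum(c * x**k for k, c in enumerate(coeffs))

def phi(coeffs):
    """[v]_E: the coordinate 4-tuple of a polynomial relative to E."""
    # Relative to the basis (1, x, x^2, x^3), the coordinates ARE the coefficients.
    return tuple(coeffs)

p = poly((2, 0, 3, 1))            # the polynomial 2 + 3x^2 + x^3
print(phi((2, 0, 3, 1)))          # (2, 0, 3, 1)
print(p(2))                       # 2 + 3*4 + 8 = 22
```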

Now consider the linear map $T:P_3[x]\to P_3[x]$ given by $T:p(x)\mapsto xp'(x)$. That is, $$T[a+bx+cx^2+dx^3] = bx+2cx^2+3dx^3.$$ (Effectively, this map multiplies each term by the exponent of $x$ in that term. It’s a very useful operator for building generating functions, but that’s another story.) I leave verifying that $T$ is linear to the reader. The kernel of $T$ consists of all of the constant polynomials, so the kernel is one-dimensional, and by the Rank-Nullity theorem its image is three-dimensional.
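The dimension claims are easy to check numerically. A minimal sketch (mine, with polynomials encoded as coefficient vectors relative to $(1,x,x^2,x^3)$):

```python
# Verify that T: p(x) |-> x p'(x) on P_3[x] has a 1-dimensional kernel
# and 3-dimensional image, using coefficient vectors (a, b, c, d).
import numpy as np

def T(coeffs):
    # x * d/dx multiplies the coefficient of x^k by k
    return np.array([k * c for k, c in enumerate(coeffs)])

# Matrix of T in the standard basis: columns are T applied to basis vectors
M = np.column_stack([T(e) for e in np.eye(4)])
rank = np.linalg.matrix_rank(M)
nullity = 4 - rank
print(rank, nullity)   # image is 3-dimensional, kernel is 1-dimensional
```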

Here’s where things get interesting. We’ve seen above that elements of $P_3[x]$ can be represented by elements of $\mathbb R^4$. However, we can also restrict our attention to just the image of $T$. Since this space is three-dimensional, we can choose some three-element basis $\mathcal B$ for it. Then if $\mathbf w\in\operatorname{im}(T)$, $[\mathbf w]_{\mathcal B}\in\mathbb R^3$. So, elements of the image of $T$—which are polynomials—can be represented by either 4-tuples or triples of reals, depending on which vector space we’re considering at the time.

In order to represent a linear map $T$ by a matrix, you have to choose bases $\mathcal B$ and $\mathcal C$ for the domain $V$ and codomain $W$, respectively. Then there’s a unique matrix $M$ such that $[T(\mathbf v)]_{\mathcal C}=M [\mathbf v]_{\mathcal B}$. That is, the coordinate tuple of $T(\mathbf v)$ relative to $\mathcal C$ can be computed by multiplying the coordinate tuple of $\mathbf v$ (relative to $\mathcal B$) by $M$. To match the notation for coordinate tuples of vectors, I’ll denote this matrix by $[T]_{\mathcal C}^{\mathcal B}$. For people who are fond of commutative diagrams, this is $\require{AMScd}$ \begin{CD}V @>T>> W \\ @V \phi VV @VV\psi V \\ \mathbb K^n @>T'>> \mathbb K^m \tag{*}\end{CD} where $\phi$ and $\psi$ are the coordinate isomorphisms, and $T':\mathbf w\mapsto [T]_{\mathcal C}^{\mathcal B}\mathbf w$.

Going back to the above example, if we choose the standard basis $\mathcal E$ for both the domain and codomain, the corresponding matrix for $T$ is simply $\operatorname{diag}(0,1,2,3)$. On the other hand, we might instead use the basis $\mathcal C = (1,x,2x^2,3x^3)$ for the codomain, in which case $[T]_{\mathcal C}^{\mathcal E}=\operatorname{diag}(0,1,1,1)$. We can even take it a step further by restricting the codomain of $T$ to its image. As noted previously, the elements of this restricted codomain are still polynomials of degree at most 3, but it’s a three-dimensional vector space. Taking the ordered basis $\mathcal B=(x,2x^2,3x^3)$ for this space, we then have $$[T\triangleright\operatorname{im}(T)]_{\mathcal B}^{\mathcal E} = \begin{bmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$ This matrix takes 4-tuples of reals and spits out triples, even though the corresponding vectors are still elements of $P_3[x]$, which is a four-dimensional space, so they can also be represented by 4-tuples.
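These matrices can be recovered mechanically from the change-of-basis data. A sketch of my own (the variable names are illustrative), writing each basis as a matrix whose columns are the basis vectors in $\mathcal E$-coordinates:

```python
# Compute [T]_C^E and the 3x4 matrix of T with codomain restricted to im(T),
# starting from [T]_E^E = diag(0, 1, 2, 3) for T: p |-> x p'(x) on P_3[x].
import numpy as np

M = np.diag([0.0, 1.0, 2.0, 3.0])          # [T]_E^E

# Codomain basis C = (1, x, 2x^2, 3x^3), written in E-coordinates as columns
C = np.diag([1.0, 1.0, 2.0, 3.0])
T_CE = np.linalg.solve(C, M)               # [T]_C^E
print(T_CE)                                 # diag(0, 1, 1, 1)

# Basis B = (x, 2x^2, 3x^3) for im(T), again as columns of E-coordinates
B = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 2, 0],
              [0, 0, 3]], dtype=float)
# Coordinates of each T(e_k) relative to B, via least squares (exact here,
# since every column of M lies in the column space of B)
T_restricted = np.linalg.lstsq(B, M, rcond=None)[0]
print(np.round(T_restricted))               # [[0,1,0,0],[0,0,1,0],[0,0,0,1]]
```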

Note, by the way, that I’m using the usual mathematical convention of treating elements of $\mathbb K^n$ as column vectors, so that applying a linear transformation corresponds to left-multiplication by a matrix. It looks like you right-multiplied a row vector by $A$ instead, so that the image lives in $A$’s row space, not its column space. You then row-reduced $A$ to get $B$ instead of column-reducing it. I’m going to stick with column vectors and left-multiplication, so the coordinate tuples that I compute below won’t always match yours.

Anyway, the same thing is going on in your example. You have a linear map $T:\mathbb R^4\to\mathbb R^4$ with $T:(x,y,z,t)\mapsto(x+t,y+2z,2y+3z+t,2x+2t)$. The elements of the domain and codomain are 4-tuples of reals. Relative to any basis of $\mathbb R^n$, their coordinates are also 4-tuples of reals, and this is a common source of confusion. In particular, relative to the standard basis, the coordinates of a vector are the vector itself: $[\mathbf v]_{\mathcal E}=\mathbf v$. The matrix of $T$ relative to the standard basis is your $4\times4$ matrix $A$. Referring to (*), when we say that the image of $\mathbf v$ is the result of multiplying $\mathbf v$ by $A$, that’s really shorthand for $[T\mathbf v]_{\mathcal E}=[T]_{\mathcal E}^{\mathcal E}[\mathbf v]_{\mathcal E}$; there are two hidden isomorphisms $\phi$ and $\psi$ that are the identity map.
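A quick numerical check (mine, not part of the original answer) of the two multiplication conventions discussed above:

```python
# Column convention (left-multiplication) vs. the row convention the
# question used (right-multiplying a row vector by A).
import numpy as np

A = np.array([[1, 0, 0, 1],
              [0, 1, 2, 0],
              [0, 2, 3, 1],
              [2, 0, 0, 2]])
x = np.array([1, 2, 3, 4])

print(A @ x)   # column convention: [5, 8, 17, 10], in A's column space
print(x @ A)   # row convention:    [9, 8, 13, 12], in A's row space
```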

Now, row-reducing a matrix changes the basis of its column space. That is, after row-reduction, we have $$B = [T]_{\mathcal C'}^{\mathcal E} = [\operatorname{id}]_{\mathcal C'}^{\mathcal E} [T]_{\mathcal E}^{\mathcal E} = \begin{bmatrix}1&0&0&1\\0&1&0&2\\0&0&1&-1\\0&0&0&0\end{bmatrix}.$$ Here $\mathcal C=\left([1,0,0,2]^T,[0,1,2,0]^T,[0,2,3,0]^T\right)$ consists of the first three columns of $A$, and $\mathcal C'$ is this basis extended to a basis of all of $\mathbb R^4$. Deleting the last row of this matrix amounts to restricting the codomain of $T$ to its image, and the result of multiplying $[1,2,3,4]^T$ by this $3\times4$ matrix is $[5,10,-1]^T$. These are the coordinates of $T(1,2,3,4)$ relative to $\mathcal C$, when viewed as an element of the three-dimensional space $\operatorname{im}(T)$.
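As a final sanity check (mine), the truncated row-reduced matrix does produce those $\mathcal C$-coordinates, and expanding them against the basis $\mathcal C$ recovers $A\,[1,2,3,4]^T$:

```python
# Verify that the first three rows of the row-reduced matrix send
# [1,2,3,4]^T to the C-coordinates of T(1,2,3,4), and that those
# coordinates reconstruct A @ x in the basis C.
import numpy as np

B3 = np.array([[1, 0, 0,  1],
               [0, 1, 0,  2],
               [0, 0, 1, -1]])       # B with its zero row deleted
x = np.array([1, 2, 3, 4])
coords = B3 @ x
print(coords)                         # [5, 10, -1]

# C = first three columns of A, the basis of the column space
C = np.array([[1, 0, 0],
              [0, 1, 2],
              [0, 2, 3],
              [2, 0, 0]])
print(C @ coords)                     # [5, 8, 17, 10] = A @ [1,2,3,4]^T
```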