Matrix representation of a co-domain restriction of a linear operator


Consider the finite-dimensional linear operator:

$\mathcal{A}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3},$ with $Ax=y,$ $A=\left[\begin{array}{ccc} 1 & 0 & 1\\ 1 & -2 & -1\\ 0 & 1 & 1 \end{array}\right].$

Let $\mathcal{A}_{1}:\mathbb{R}^{3}\rightarrow\mathcal{R}\{A\}$ be the co-domain restriction of $\mathcal{A}$ to $\mathcal{R}\{A\}$. Give the matrix representation $A_{1}$ of $\mathcal{A}_{1}.$

I have found $\mathcal{R}\{A\}$, which is 2-dimensional. So I know the matrix representation of the restricted operator will be a $2\times 3$ matrix, which maps $\mathbb{R}^{3}$ onto the plane spanned by the two vectors that span the range. I am unsure how to obtain the restricted operator. Any advice would be greatly appreciated. Thank you!
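As a quick sanity check of the claim that $\mathcal{R}\{A\}$ is 2-dimensional, one can compute the rank of $A$ numerically (this sketch assumes numpy is available; the range of $A$ is its column space, whose dimension equals the rank):

```python
import numpy as np

# The matrix from the question.
A = np.array([[1,  0,  1],
              [1, -2, -1],
              [0,  1,  1]])

# dim R{A} = rank(A), since the range is the column space.
rank = np.linalg.matrix_rank(A)
print(rank)  # 2, so R{A} is a plane in R^3
```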


BEST ANSWER

The matrix representation you are asking for is not unique; it depends on the basis of $\mathcal{R}\{A\}$ you choose. To find a basis for $\mathcal{R}\{A\}$, compute $Ae_1$ and $Ae_2$: you get $$Ae_1=\left[\begin{array}{ccc} 1 & 0 & 1\\ 1 & -2 & -1\\ 0 & 1 & 1 \end{array}\right]\left[\begin{array}{c} 1 \\0\\0 \end{array}\right]=\left[\begin{array}{c} 1 \\1\\0 \end{array}\right],\,\,\,\,\,Ae_2=\left[\begin{array}{ccc} 1 & 0 & 1\\ 1 & -2 & -1\\ 0 & 1 & 1 \end{array}\right]\left[\begin{array}{c} 0 \\1\\0 \end{array}\right]=\left[\begin{array}{c} 0 \\-2\\1 \end{array}\right].$$ These are linearly independent, and $\mathcal{R}\{A\}$ is two-dimensional, so they span it. Now, for a general $z=(a,b,c)$: $$Az=\left[\begin{array}{ccc} 1 & 0 & 1\\ 1 & -2 & -1\\ 0 & 1 & 1 \end{array}\right]\left[\begin{array}{c} a \\b\\c \end{array}\right]=\left[\begin{array}{c} a+c \\a-2b-c\\b+c \end{array}\right]=\left[\begin{array}{c} a+c \\a+c\\0 \end{array}\right]+\left[\begin{array}{c} 0 \\-2b-2c\\b+c \end{array}\right]=$$$$(a+c)Ae_1+(b+c)Ae_2.$$ So a matrix representation (with respect to the usual basis of $\mathbb R^3$ and the basis $Ae_1, Ae_2$) is $$\left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & 1 & 1 \end{array}\right].$$
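The computation above can be verified numerically (a sketch assuming numpy): applying $A_1$ to $z$ gives coordinates in the basis $(Ae_1, Ae_2)$, and converting those coordinates back should reproduce $Az$.

```python
import numpy as np

A = np.array([[1,  0,  1],
              [1, -2, -1],
              [0,  1,  1]])

# Proposed representation of A_1 from the answer above.
A1 = np.array([[1, 0, 1],
               [0, 1, 1]])

# Basis of R{A}: the first two columns of A, i.e. Ae1 and Ae2.
B = A[:, :2]

# For any z, A z must equal B @ (A1 @ z):
# A1 gives coordinates, B converts coordinates back to R^3.
rng = np.random.default_rng(0)
z = rng.standard_normal(3)
assert np.allclose(A @ z, B @ (A1 @ z))
```

Equivalently, the check amounts to $B A_1 = A$ as matrices, since the identity must hold for every $z$.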

ANSWER

Note that the matrix representation $A_1$ of ${\cal A}_1$ will depend on which bases you choose. Let's assume you take the standard basis on the left, and on the right the basis consisting of the two vectors you've just found, say $v_1$ and $v_2$.

Now express ${\cal A}(e_1)$ (i.e., the first column of the matrix $A$) as a linear combination of $v_1$ and $v_2$. Those two coefficients will be the first column of the matrix $A_1$.

Do the same for ${\cal A}(e_2)$ (the second column of $A$), putting the result in the second column of $A_1$; and for ${\cal A}(e_3)$ (the third column of $A$), putting the result in the third column of $A_1$.

Note that this is going to be easiest if $v_1$ and $v_2$ are two columns of the original matrix $A$ to start with, because then two of your new columns are just going to be $(1,0)$ and $(0,1)$.