Coefficient Matrix and Properties of $\mathcal{L}_{B}: V\rightarrow \mathbb{R}^{3}$


Having a bit of trouble with the following question setup:


Denote by $\mathbb{R}[x,y]$ the set of polynomials with two variables $x$ and $y$ and real coefficients. Note that $\mathbb{R}[x,y]$ forms a linear space under polynomial addition and scalar multiplication. Consider a set of elements $\mathcal{B}=(x^2,xy,y^2).$ Denote by $V=Span(\mathcal{B})$ the linear sub-space spanned by $\mathcal{B}$. Define a linear transformation

$T: V\rightarrow \mathbb{R}^{3},\quad f(x,y)\mapsto \begin{pmatrix} f(0,1)\\f(1,0)\\f(1,1)\end{pmatrix}.$

On the other hand, there is a linear isomorphism $\mathcal{L}_{B}: V\rightarrow \mathbb{R}^{3}$ defined by the $\mathcal{B}$-basis.

1) Find the coefficient matrix of the composition $T\circ \mathcal{L}_{B}^{-1}$. Here we take the standard basis of $\mathbb{R}^{3}$.

2) Is $T$ an isomorphism?


I've seen the $\circ$ symbol in my textbook before, usually to denote change of bases, e.g. $\mathcal{L}_{\mathscr{A}}\circ \mathcal{L}_{\mathscr{B}}$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ has standard matrix $S$ such that $S\vec{x}=\mathcal{L}_{\mathscr{A}}(\mathcal{L}_{\mathscr{B}}^{-1}(\vec{x}))\,\forall\,\vec{x}\in\mathbb{R}^n$.

Is the idea here similar in this case? Are we using $T$ to move from $\begin{pmatrix} f(0,1)\\f(1,0)\\f(1,1)\end{pmatrix}$ to $(x^2,xy,y^2)$?


Here's my best attempt thus far:

1) I understand "the coefficient matrix of the composition $T\circ \mathcal{L}_{B}^{-1}$" to mean

$\begin{bmatrix}|&|&|\\ [T(x^2)]_{\mathcal{B}}&[T(xy)]_{\mathcal{B}}&[T(y^2)]_{\mathcal{B}}\\|&|&|\end{bmatrix}=\begin{bmatrix}0&0&1\\1&0&0\\1&1&1\end{bmatrix}$, i.e. the matrix that moves us from $\mathbb{R}^3$ to $V$, though I am definitely unsure of how to interpret this specific notation. Reassurance or correction would be greatly appreciated here.

2) Since we are told that $\mathbb{R}[x,y]$ forms a linear space, all we have to do is check that the matrix associated with $T\circ \mathcal{L}_{B}^{-1}$ is invertible, i.e. that $\exists\,B^{-1}$. If we use an augmented matrix, basic row operations yield the inverse:

$$ \left[\begin{array}{rrr|rrr} 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{array}\right]\rightarrow\left[\begin{array}{rrr|rrr} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 & -1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 \end{array}\right] $$

Because $\exists\,B^{-1}=\begin{bmatrix}0&1&0\\-1&-1&1\\1&0&0\end{bmatrix}$, and $V$ is a linear space, the map $\mathbb{R}^{3}\rightarrow V$ is an isomorphism.
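As a numerical sanity check on the row reduction (not part of the assignment), NumPy can compute the inverse and determinant of the matrix directly:

```python
import numpy as np

# Standard matrix of T∘L_B⁻¹: columns are T(x²), T(xy), T(y²)
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [1, 1, 1]], dtype=float)

# Invertibility is equivalent to det(A) ≠ 0
A_inv = np.linalg.inv(A)
print(A_inv)
print(np.linalg.det(A))
```

Since the determinant is nonzero, the matrix is invertible, which is the fact the row reduction establishes.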


The main issue I'm having here is determining whether $T\circ \mathcal{L}_{B}^{-1}$ moves from $V$ to $\mathbb{R}^3$ or vice-versa. Feedback on these tentative solutions would be greatly appreciated.

Best Answer

Your question is a bit different from the problem I solved in my previous answer to your other question. In particular, no basis for the codomain is mentioned because the mapping is into $\mathbb{R}^3$, where we have the standard choice of Cartesian coordinates. The translation between my notation and your course's notation is simply $\Phi_{\beta} = \mathcal{L}_{\beta}$. The coordinate mapping can be defined by $$ \mathcal{L}_{\beta} (f_j) = e_j $$ where $e_i \cdot e_j = \delta_{ij}$; for example, $e_1 = [1,0,\dots,0]^T$. So the coordinate mapping replaces the abstract $j$-th basis element with the $j$-th standard basis element. Then, to complete the story, we extend linearly: $$ \mathcal{L}_{\beta} (x_1f_1+x_2f_2+x_3f_3) = [x_1,x_2,x_3]^T.$$ In the problem we currently consider, $f_1=x^2$, $f_2=xy$ and $f_3=y^2$.

Ok, so a bit more about the coordinate mapping: as a function, $\mathcal{L}_{\beta}: V \rightarrow \mathbb{R}^3$, hence $\mathcal{L}_{\beta}^{-1}: \mathbb{R}^3 \rightarrow V$. Consider then $$ T \circ \mathcal{L}_{\beta}^{-1}: \mathbb{R}^3 \rightarrow V \rightarrow \mathbb{R}^3 $$ or, more to the point, $T \circ \mathcal{L}_{\beta}^{-1}: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ is a linear transformation of 3-component column vectors.
Thus $[T \circ \mathcal{L}_{\beta}^{-1}] \in \mathbb{R}^{3 \times 3}$ is the standard matrix, and it is calculated as we always calculate the standard matrix: $$ [T \circ \mathcal{L}_{\beta}^{-1}] = [(T \circ \mathcal{L}_{\beta}^{-1})(e_1)|(T \circ \mathcal{L}_{\beta}^{-1})(e_2)|(T \circ \mathcal{L}_{\beta}^{-1})(e_3)].$$ But $\mathcal{L}_{\beta}^{-1}(e_1)=f_1$, $\mathcal{L}_{\beta}^{-1}(e_2)=f_2$ and $\mathcal{L}_{\beta}^{-1}(e_3)=f_3$, hence $$ [T \circ \mathcal{L}_{\beta}^{-1}] = [T(x^2)|T(xy)|T(y^2)] = \left[ \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right].$$ There was an error in my earlier comment: notice $f(x,y)=x^2$ has $f(1,0)=1^2=1$ and $f(1,1)=1^2=1$, hence the first column ought to have two nonzero entries. Of course, this was already presented in your initial answer, so I'm not telling you anything new!
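The column-by-column recipe above can be checked mechanically: evaluate each basis polynomial $f_j$ at the three points $(0,1)$, $(1,0)$, $(1,1)$. A short Python sketch (my own illustration, not part of the original answer):

```python
import numpy as np

# Basis polynomials f1 = x², f2 = xy, f3 = y² and the evaluation points
basis = [lambda x, y: x**2, lambda x, y: x*y, lambda x, y: y**2]
points = [(0, 1), (1, 0), (1, 1)]  # the rows correspond to f(0,1), f(1,0), f(1,1)

# Column j of [T∘L_β⁻¹] is T(f_j) = (f_j(0,1), f_j(1,0), f_j(1,1))ᵀ
M = np.array([[f(x, y) for f in basis] for (x, y) in points])
print(M)
# [[0 0 1]
#  [1 0 0]
#  [1 1 1]]
```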

A few comments about notation and the task of showing $T$ is an isomorphism (or not):

  • the symbol $\circ$ denotes composition of functions. This is not some secret linear algebra thing; it is a construction you'll see in all the math courses worth taking. In short, $(f\circ g)(x) = f(g(x))$. I suspect you already knew this, but the symbol $\circ$ didn't connect with the notation you'd seen before.
  • to show a mapping between vector spaces is an isomorphism is to show the mapping is a linear bijection. This means the mapping is a linear transformation which is injective and surjective, and the inverse mapping is also linear. There is a theorem which states that the linearity of the inverse of an invertible linear mapping is automatic, so we actually don't need to check that. But what you know (in terms of theorems) is way outside my knowledge, so it's hard to give optimal advice here (ask your professor if your answer is what he wants). I think the calculation you indicate shows that the mapping $F= \mathcal{L}_{\beta}^{-1} \circ T $ is invertible (I just made up $F$ as a name to do some productive name-calling here); indeed, the matrix you computed, $[T\circ \mathcal{L}_{\beta}^{-1}]$, is precisely the matrix of $F: V \rightarrow V$ with respect to the basis $\beta$, and you showed it is invertible. If we compose both sides with $\mathcal{L}_{\beta}$, we obtain $\mathcal{L}_{\beta} \circ \mathcal{L}_{\beta}^{-1} \circ T = \mathcal{L}_{\beta} \circ F$, hence (the $\mathcal{L}_{\beta} \circ \mathcal{L}_{\beta}^{-1}$ is just the identity map, so it goes away) $T=\mathcal{L}_{\beta} \circ F$. Then $T$ is the composite of invertible linear transformations and is hence an invertible linear transformation. In particular, $$T^{-1}=(\mathcal{L}_{\beta} \circ F)^{-1} = F^{-1} \circ \mathcal{L}_{\beta}^{-1}. $$ You can use the mapping identity above and your matrix calculation to write down a formula for the inverse of $T$. See if you can do it.
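Taking up the "see if you can do it" invitation, here is one way to extract a formula for $T^{-1}$ with SymPy. This is my own sketch, not part of the original answer: given a target vector $(a,b,c)$, the $\mathcal{B}$-coordinates of $T^{-1}(a,b,c)$ are obtained by applying the inverse of the matrix computed above.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = sp.symbols('a b c')

A = sp.Matrix([[0, 0, 1], [1, 0, 0], [1, 1, 1]])  # matrix of T∘L_β⁻¹
coeffs = A.inv() * sp.Matrix([a, b, c])           # B-coordinates of T⁻¹(a,b,c)
f = coeffs[0]*x**2 + coeffs[1]*x*y + coeffs[2]*y**2
print(sp.expand(f))  # the polynomial b·x² + (c−a−b)·xy + a·y²

# Round trip: evaluating f at (0,1), (1,0), (1,1) should recover (a, b, c)
print([sp.simplify(f.subs({x: p, y: q})) for (p, q) in [(0, 1), (1, 0), (1, 1)]])
# → [a, b, c]
```

So $T^{-1}(a,b,c) = b\,x^2 + (c-a-b)\,xy + a\,y^2$, which you can also check by hand.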
  • another way to see $T$ is an isomorphism is to notice that $V$ is clearly $3$-dimensional, as is $\mathbb{R}^3$. Hence $T$ is an isomorphism if $T$ is surjective (aka onto); likewise, $T$ is an isomorphism if $T$ is injective (aka 1-1). Injectivity is easily checked by calculation of $\text{Ker}(T) = \{ f(x,y) \in V \ | \ T(f(x,y))=0 \}$. If we can show the kernel is trivial, then only zero maps to zero, and that suffices to show all other points are mapped to distinct points in the range. In this special case of equal-dimensional domain and codomain, it automatically follows that when $T$ is injective it is surjective, or, if $T$ is found to be surjective, then injectivity is given without further investigation. When the domain and codomain do not have the same finite dimension, other arguments are needed.
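The kernel calculation in the last bullet can also be carried out symbolically. A small sketch (my own illustration, not from the original answer), working with the matrix of $T\circ\mathcal{L}_{\beta}^{-1}$ in $\mathcal{B}$-coordinates:

```python
import sympy as sp

# Matrix of T∘L_β⁻¹ in the standard basis of R³
A = sp.Matrix([[0, 0, 1], [1, 0, 0], [1, 1, 1]])

print(A.nullspace())  # → []  (trivial kernel, so T is injective)
print(A.rank())       # → 3   (full rank = dim R³, so T is also surjective)
```

Either output on its own settles the question, by the equal-dimension argument above.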