Change of basis matrix for polynomials?


I've understood what a change of basis matrix is, and how it's structured.

So a change of basis matrix from $B$ to $C$ is the matrix $M$ such that:

$$[v]_C = M\,[v]_B$$

However, my book extends this concept to polynomials. I see no problem if I view the polynomial $1+2t^2+3t^3$ as the vector $\begin{bmatrix} 1\\2\\3\\\end{bmatrix}$, and then I can construct such a matrix. But the exercise asks the following:

The change of basis matrix from the basis $B=\{1+t,1-t^2\}$ to the basis $C$ is

$$\begin{bmatrix}1&2\\1&-1\end{bmatrix}$$

Find the basis $C$.

So basically we have this structure:

$$[v]_C = \begin{bmatrix}1&2\\1&-1\end{bmatrix}[v]_B$$

where the first column vector is $[b_1]_c$ (vector $b_1$ written in terms of base $C$) and the second is $[b_2]_c$. So in some way I should find the basis $C$, but since there's a $t^2$ term, a '3 dimensional' vector should appear somewhere. How do I proceed?

There are 2 solutions below.

BEST ANSWER

Let $A=\begin{bmatrix} 1&2\\1&-1\end{bmatrix}$. Then $A^{-1}=\begin{bmatrix}\frac{1}{3}&\frac{2}{3}\\\frac{1}{3}&-\frac{1}{3}\end{bmatrix}$ is the change of basis matrix from $C$ to $B$, so

$w_1=\frac{1}{3}(1+t)+\frac{1}{3}(1-t^2)=-\frac{1}{3}t^2+\frac{1}{3}t+\frac{2}{3}$ and

$w_2=\frac{2}{3}(1+t)-\frac{1}{3}(1-t^2)=\frac{1}{3}t^2+\frac{2}{3}t+\frac{1}{3}$ are the basis vectors of $C$.

(Notice that $B$ and $C$ are bases for a 2-dimensional subspace of the 3-dimensional vector space of polynomials of degree at most 2.)
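If it helps to check this numerically, here is a minimal Python sketch (not from the answer; the names `b1`, `b2`, `inv2` are ours) that represents the $B$ polynomials as coefficient vectors w.r.t. $\{1, t, t^2\}$, inverts $A$, and recovers $w_1$ and $w_2$:

```python
from fractions import Fraction

# Coefficient vectors w.r.t. {1, t, t^2}
b1 = [Fraction(1), Fraction(1), Fraction(0)]   # 1 + t
b2 = [Fraction(1), Fraction(0), Fraction(-1)]  # 1 - t^2

# The given change of basis matrix from B to C
A = [[Fraction(1), Fraction(2)],
     [Fraction(1), Fraction(-1)]]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

Ainv = inv2(A)  # change of basis from C to B; column j holds [w_j]_B

# w_j = Ainv[0][j] * b1 + Ainv[1][j] * b2, computed coefficient-wise
w1 = [Ainv[0][0] * x + Ainv[1][0] * y for x, y in zip(b1, b2)]
w2 = [Ainv[0][1] * x + Ainv[1][1] * y for x, y in zip(b1, b2)]

# w1 = 2/3 + (1/3)t - (1/3)t^2  and  w2 = 1/3 + (2/3)t + (1/3)t^2
```

Using exact `Fraction` arithmetic avoids the floating-point noise that would otherwise obscure the clean thirds in the answer.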

ANOTHER ANSWER

Without knowing which book you are using, I cannot comment on the correctness of the approach used there. Nevertheless, whatever approach is used, care must be taken about which precise relationship one is using to reason about these things. There are at least two equivalent viewpoints: 1) an "implicit" one in terms of coordinate changes, and 2) an "explicit" one in terms of how the basis vectors themselves are related.

Perhaps I can make this clear with an example. For a finite-dimensional vector space, the coordinates of a vector $\,\,\vec{x}\,\,$ w.r.t. two given bases can be used to define a change of coordinates from one basis to the other. So, for example, if:

i) The coordinates of $\,\,\vec{x}\,\,$ w.r.t. the bases $B$ and $C$ are $\left(\begin{array}{c}x^B_1 \\ x^B_2\end{array}\right)$ and $\left(\begin{array}{c}x^C_1 \\ x^C_2\end{array}\right)$ respectively,

ii) The $B$ basis vectors have coordinates $\left(\begin{array}{c}1 \\ 1\end{array}\right)$ and $\left(\begin{array}{c}2 \\ -1\end{array}\right)$ w.r.t. the $C$ basis vectors,

then, the change of coordinates from $B$ to $C$ is given by

$$ \left(\begin{array}{c}x^C_1 \\ x^C_2\end{array}\right){}={}\left(\begin{array}{cc}1 & 2 \\ 1 & -1\end{array}\right)\left(\begin{array}{c}x^B_1 \\ x^B_2\end{array}\right)\,\,\,\,\,\ldots\,(*) $$
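As a quick sanity check of $(*)$ (a worked example added here, using only (i) and (ii) above): if $\,\,\vec{x}\,\,$ is the first $B$ basis vector, then $\left(\begin{array}{c}x^B_1 \\ x^B_2\end{array}\right) = \left(\begin{array}{c}1 \\ 0\end{array}\right)$, and

$$ \left(\begin{array}{cc}1 & 2 \\ 1 & -1\end{array}\right)\left(\begin{array}{c}1 \\ 0\end{array}\right){}={}\left(\begin{array}{c}1 \\ 1\end{array}\right)\,, $$

which is exactly the $C$-coordinate vector that (ii) assigns to the first $B$ basis vector.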

However, suppose we instead explicitly represent each $B$ basis vector (say $\vec{v}_1$ and $\vec{v}_2$) in terms of the $C$ basis vectors (say $\vec{w}_1$ and $\vec{w}_2$). We can do this using matrix notation as well, so that, unlike $(*)$, we now have

$$ \left(\begin{array}{c}\vec{v}_1 \\ \vec{v}_2\end{array}\right){}={}\left(\begin{array}{cc}1 & 1 \\ 2 & -1\end{array}\right)\left(\begin{array}{c}\vec{w}_1 \\ \vec{w}_2\end{array}\right)\,\,\,\,\,\ldots\,(**) $$

In passing, I should say that this kind of matrix representation is, in part, a convenient shorthand: matrices define linear relationships, in this case the explicit linear relationship between the underlying basis vectors. Now, observe that the matrix representation in (**) also characterizes what happens to a vector under a change of basis from $B$ to $C$, since for our arbitrary vector $\,\,\vec{x}\,,\,$ it implies we have

$$ \vec{x}{}={}\left(x^B_1, x^B_2\right)\left(\begin{array}{c}\vec{v}_1 \\ \vec{v}_2\end{array}\right){}={}\left(x^B_1, x^B_2\right)\left(\begin{array}{cc}1 & 1 \\ 2 & -1\end{array}\right)\left(\begin{array}{c}\vec{w}_1 \\ \vec{w}_2\end{array}\right)\,; $$ that is, starting with a "$B$"-representation of $\,\,\vec{x},\,\,$ we can find the unique $C$-representation of $\,\,\vec{x}$.

Note: The transformation matrix in (*) is between coordinates, while that in (**) is between the underlying basis vectors. Also, note that one matrix is the transpose of the other. And, finally, in either case, to obtain the reverse transformation from $C$ to $B$, one simply inverts the respective matrix (safe in the knowledge that the two inverses will again be transposes of each other). In your question, if the given matrix represents a "coordinate" transformation (in the sense I have illustrated), then its inverse is

$$ \dfrac{1}{3}\left(\begin{array}{cc}1 & 2 \\ 1 & -1\end{array}\right)\,. $$
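If a numeric check helps, the transpose and inverse relationships above can be verified directly in Python (a sketch of ours, not from the answer; `A`, `transpose`, `inv2` are names we chose):

```python
from fractions import Fraction

F = Fraction

# The coordinate-transform matrix from (*)
A = [[F(1), F(2)],
     [F(1), F(-1)]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

# The basis-vector matrix in (**) is the transpose of A ...
B_star = transpose(A)

# ... and the two reverse (C -> B) transforms are again transposes
# of each other, as claimed in the note above.
assert inv2(B_star) == transpose(inv2(A))

# For this particular A, A^2 = 3I, so the inverse is simply (1/3)A.
assert inv2(A) == [[x / 3 for x in row] for row in A]
```

The final assertion confirms the $\frac{1}{3}\left(\begin{smallmatrix}1 & 2 \\ 1 & -1\end{smallmatrix}\right)$ inverse given above.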