Basis of the dual space of $\mathcal{P}_2(\mathbb{R})$.


I am trying to solve the following problem:

Let $\mathcal{P}_2(\mathbb{R})$ be the real vector space of polynomials of degree at most $2$, and let $W$ be the vector space of linear maps from $\mathcal{P}_2(\mathbb{R})$ to $\mathbb{R}$. Let $T_1,T_2,T_3\in W$ be defined by $$T_1(p) = \int_{-1}^{0}p(x)dx , \quad T_2(p) = \int_{0}^{1}p(x)dx, \quad T_3(p) = \int_{1}^{2}p(x)dx$$ for all $p\in \mathcal{P}_2(\mathbb{R})$. Show that $\{T_1,T_2,T_3\}$ is a basis of $W$.

I know that I have to show that $T_1$, $T_2$ and $T_3$ are linearly independent and that they span $W$, i.e., that every linear functional on $\mathcal{P}_2(\mathbb{R})$ is a linear combination of them.

Linear Independence: Let $N:\mathcal{P}_2(\mathbb{R})\to\mathbb{R}$ be the zero functional, that is to say, $N(p)=0$ for all $p\in \mathcal{P}_2(\mathbb{R})$. Let $\alpha_1,\alpha_2,\alpha_3\in\mathbb{R}$ be such that $\alpha_1T_1 + \alpha_2T_2 + \alpha_3T_3 = N$. Let us consider $f(x)=1\in\mathcal{P}_2(\mathbb{R})$. Then, $$\alpha_1T_1(f) + \alpha_2T_2(f) + \alpha_3T_3(f) = N(f)=0.$$ Since $T_1(f) = T_2(f) = T_3(f) = 1$, we have that $$\alpha_1+\alpha_2+\alpha_3=0.$$ Now, let us consider $g(x)=2x+1\in\mathcal{P}_2(\mathbb{R})$. Since $T_1(g)=0$, $T_2(g)=2$ and $T_3(g)=4$, it follows that $$\alpha_2 + 2\alpha_3 = 0.$$ Finally, let us consider $h(x)=-3x^2+1\in\mathcal{P}_2(\mathbb{R})$. Since $T_1(h) = T_2(h) = 0$ and $T_3(h)=-6$, we have that $$\alpha_3=0.$$ By using the other equations, we conclude that $$\alpha_1 = \alpha_2=\alpha_3=0.$$ Therefore, $\{T_1,T_2,T_3\}$ is a linearly independent subset of $W$.
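Not part of the original argument, but the integral values used above can be sanity-checked with exact rational arithmetic (the helper `T` and its coefficient convention are my own):

```python
# Sketch: evaluate T_i(p) = integral of p over the i-th interval exactly.
# A polynomial c0 + c1*x + c2*x^2 integrates over [a, b] to
# c0*(b - a) + c1*(b^2 - a^2)/2 + c2*(b^3 - a^3)/3.
from fractions import Fraction as F

def T(coeffs, a, b):
    """Integrate c0 + c1*x + c2*x^2 over [a, b] using exact fractions."""
    c0, c1, c2 = (F(c) for c in coeffs)
    return c0 * (b - a) + c1 * F(b**2 - a**2, 2) + c2 * F(b**3 - a**3, 3)

intervals = [(-1, 0), (0, 1), (1, 2)]  # domains of T1, T2, T3

# The three test polynomials from the proof, as (c0, c1, c2) tuples.
for name, coeffs in [("f = 1", (1, 0, 0)),
                     ("g = 2x + 1", (1, 2, 0)),
                     ("h = -3x^2 + 1", (1, 0, -3))]:
    print(name, [str(T(coeffs, a, b)) for a, b in intervals])
```

This reproduces the triangular system: $f$ gives $(1,1,1)$, $g$ gives $(0,2,4)$, and $h$ gives $(0,0,-6)$.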

If we use the fact that $\mathcal{P}_2(\mathbb{R})$ and its dual space have the same dimension, the proof of linear independence is enough to conclude that this set is a basis of $W$. However, is there a way to prove directly that $\operatorname{span} \{T_1,T_2,T_3\}=W$?

On BEST ANSWER

As you have already stated, linear independence together with $\dim \mathcal P_2(\mathbb R) = 3$ is sufficient. However, if you want to explicitly show that $\operatorname{span}\{T_1,T_2,T_3\} = W$, you can always find the coefficients for a general functional by expanding the definition of your candidate. You have to write an arbitrary element as a linear combination of the $T_i$, which is hard without a basis $-$ indeed, the very fact that $\dim V = \dim V^*$ relies on choosing a basis of a finite-dimensional space.

It can then be argued that explicitly showing that $\{T_i\}_i$ generates $W$ amounts to showing that a dual basis can be generated from the $\{T_i\}_i$. Pick a basis for $\mathcal P_2(\mathbb R)$; for instance, $\mathcal B = \{1,x,x^2\}$ will do. Set $a_{ij} := T_i(x^j)$ (you compute this value explicitly from the definition, just calculate the integral); then $T_i = \sum_{j=0}^2 a_{ij}(x^j)^*$, so $\operatorname{span}\{T_i\}_i$ is simply the row space of the matrix $A := (a_{ij})$ w.r.t. the dual basis $\{(x^j)^*\}_j$. Proving that $\operatorname{span}\{T_i\}_i = W$ now consists of an explicit computation using the (numerical) entries of $A$; for instance, if $A$ is invertible, then the span is all of $W$, and you can write $(x^i)^* = \sum_j b_{ij}T_j$, where $(b_{ij}) = A^{-1}$.
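The matrix computation above can be sketched concretely (the helper name `moment` is mine; the entries follow from $\int_a^b x^j\,dx = \frac{b^{j+1}-a^{j+1}}{j+1}$):

```python
# Sketch: build A = (a_ij) with a_ij = T_i(x^j) for the basis {1, x, x^2},
# using exact rational arithmetic, and check that det(A) != 0.
from fractions import Fraction as F

def moment(j, a, b):
    """Integral of x^j over [a, b], i.e. (b^(j+1) - a^(j+1)) / (j + 1)."""
    return F(b**(j + 1) - a**(j + 1), j + 1)

intervals = [(-1, 0), (0, 1), (1, 2)]  # domains of T1, T2, T3
A = [[moment(j, a, b) for j in range(3)] for a, b in intervals]

# 3x3 determinant by cofactor expansion along the first row.
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
     - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
     + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
print(det)  # prints 2: nonzero, so A is invertible
```

Since $\det A = 2 \neq 0$, the rows of $A$ span the whole dual space, confirming that $\{T_1,T_2,T_3\}$ generates $W$.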

Not a fun process, so if you know $\dim V$ and you have found $\dim V$ linearly independent vectors, skipping all of those calculations is more pleasant.