Linear algebra - finding a dual space basis


I'm trying to solve what seems to be a simple problem, but I cannot find the right way to approach it. Here it is:

Let $V$ be the vector space of all real polynomials of degree at most $1$. We define the inner product as
$$(v,w)=\int_{0}^{1}v(x)w(x)\,dx. \tag{1}$$ If $\{1,x\}$ is a basis for $V$, show that the corresponding dual basis for $V^*$ is given by $\{4-6x,-6+12x\}$.

My attempt at this problem:

We can write any element of $V$ as $$v=\alpha+\beta x. \tag{2}$$
Using (2) in (1):
$$\int_0^1(\alpha+\beta x)(\gamma+\sigma x)\,dx. \tag{3}$$ But that doesn't lead me to the answer, since the integral evaluates to a number. I thought about integrating from $0$ to $x$ instead, but I don't know how to justify that. Any help?


Accepted answer:

Since you aren't entirely clear on the basics, let's take this from the start.

We have an inner product on $V$ defined as $$ (u, v) = \int_0^1 u(x)v(x)dx $$ Now note that if we choose some fixed polynomial $u_0\in V$, then $$ v\mapsto (u_0, v) $$ is a linear transformation from $V$ to $\Bbb R$. Which is to say, $u_0$ here gives us an element of $V^*$, sending $v$ to $u_0(v) = \int_0^1u_0(x)v(x)dx$.

Given an inner product on a finite-dimensional vector space, this correspondence between the inner product and the dual space gives a full description of $V^*$. In other words, any linear transformation $V\to \Bbb R$ can be seen as "take the inner product with some specific, fixed element of $V$", and of course, a different $u_0$ will give a different linear transformation.

This way, we may write elements of $V^*$ as polynomials. Note that which polynomial represents which linear transformation depends on the inner product. If we had chosen $\int_0^2$ instead, for instance, the correspondence would be different.

We are given a basis $\{1, x\}$ of $V$, and tasked with finding the corresponding dual basis of $V^*$. As the other answers have noted, a "corresponding dual basis" consists of a basis $\{u_0, u_1\}$ of $V^*$ such that $$u_0(1) = u_1(x) = 1,\qquad u_1(1) = u_0(x) = 0.$$ Which is to say, we want to find polynomials $u_0, u_1$ such that $$ \int_0^1 u_0(x)\,dx = \int_0^1 x\,u_1(x)\,dx = 1,\qquad \int_0^1 x\,u_0(x)\,dx = \int_0^1 u_1(x)\,dx = 0. $$ You are given candidate polynomials in the problem text. Verifying them is easy.
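For instance, the four conditions can be checked symbolically; here is a minimal sketch using sympy, with the candidate polynomials taken from the problem statement:

```python
from sympy import symbols, integrate

x = symbols("x")
u0 = 4 - 6*x    # candidate dual to the basis vector 1
u1 = -6 + 12*x  # candidate dual to the basis vector x

# Dual-basis conditions: pairing u_i with the j-th basis vector gives delta_ij.
print(integrate(u0 * 1, (x, 0, 1)))  # 1
print(integrate(u0 * x, (x, 0, 1)))  # 0
print(integrate(u1 * 1, (x, 0, 1)))  # 0
print(integrate(u1 * x, (x, 0, 1)))  # 1
```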

Answer:

There is some confusion here, since $4-6x,-6+12x\notin V^*$. My guess is that you are after a basis $\{e,f\}$ of $V$ (not of $V^*$) such that $(1,e)=1$, $(1,f)=0$, $(x,e)=0$, and $(x,f)=1$. So you search for polynomials $p(x),q(x)\in V$ such that

  • $\bigl(1,p(x)\bigr)=1$;
  • $\bigl(x,p(x)\bigr)=0$;
  • $\bigl(1,q(x)\bigr)=0$;
  • $\bigl(x,q(x)\bigr)=1$.

You can check for yourself that this is just what happens if you take $p(x)=4-6x$, and $q(x)=-6+12x$.
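If you prefer to check these pairings without doing the integrals by hand, they can be computed exactly with standard-library rationals, using $\int_0^1 x^n\,dx = \frac{1}{n+1}$ (the coefficient-list encoding below is just for illustration):

```python
from fractions import Fraction

def inner(p, q):
    """(p, q) = integral over [0, 1] of p(x) q(x) dx,
    for polynomials given as coefficient lists [a0, a1, ...]."""
    return sum(Fraction(a * b, i + j + 1)  # integral of x^(i+j) over [0,1]
               for i, a in enumerate(p)
               for j, b in enumerate(q))

one, x = [1], [0, 1]      # the basis {1, x}
p, q = [4, -6], [-6, 12]  # p(x) = 4 - 6x, q(x) = -6 + 12x

print(inner(one, p), inner(x, p))  # 1 0
print(inner(one, q), inner(x, q))  # 0 1
```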

Answer:

If in a vector space $V$ endowed with an inner product we consider a basis $\{v_1,v_2,\ldots,v_n\}$ , then its dual basis is $\{v'_1,v'_2,\ldots,v'_n\}$ defined by the following conditions: $$(v_i,v'_i)=1 \text{ for all }i \quad \text{and} \quad (v_i,v'_j)=0 \text{ for all }i\ne j.$$

So you need to find two polynomials that satisfy these conditions. Here's how you can start on one of them (and you can proceed similarly for the other one). Let $v_1=1\in V$. The corresponding element of the dual basis $v'_1=ax+b$ is defined by the conditions that $$(v_1,v'_1)=\int_0^1 1\cdot(ax+b)\,dx=1 \quad \text{and} \quad (v_2,v'_1)=\int_0^1 x\cdot(ax+b)\,dx=0.$$ These two integrals, when you evaluate them, give you two simple linear equations for $a$ and $b$. Solving them, you will find $a$ and $b$, and thus $v'_1$.
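Carrying out this computation symbolically, a sketch with sympy (the unknowns $a, b$ are as above):

```python
from sympy import symbols, integrate, solve

x, a, b = symbols("x a b")
v1_dual = a*x + b  # ansatz for v'_1

# (v1, v'_1) = 1 and (v2, v'_1) = 0, with v1 = 1 and v2 = x:
eqs = [
    integrate(1 * v1_dual, (x, 0, 1)) - 1,  # a/2 + b - 1 = 0
    integrate(x * v1_dual, (x, 0, 1)),      # a/3 + b/2 = 0
]
sol = solve(eqs, [a, b])
print(sol)  # {a: -6, b: 4}, i.e. v'_1 = 4 - 6x
```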

Answer:

I want to give you a different perspective here: if $(V,g)$ is a vector space equipped with a non-degenerate symmetric bilinear form $g$, the map $$ V\ni v\mapsto v_\flat =g(v,\cdot)\in V^*$$is an isomorphism, whose inverse is denoted by $\sharp$, so that $g(f^\sharp, w)=f(w)$ for all $w\in V$ and $f\in V^*$. If $(e_1,\ldots,e_n)$ is a basis for $V$ and $(e^1,\ldots,e^n)$ is the dual basis in $V^*$, then $((e^1)^\sharp,\ldots,(e^n)^\sharp)$ is another basis for $V$, such that $g((e^i)^\sharp,e_j)=\delta^i_j$. If $g_{ij} = g(e_i,e_j)$ and $(g^{ij})$ denotes the inverse matrix, then $(e^i)^\sharp =\sum_j g^{ij}e_j$. This means that if you know how to compute the dual basis to $\{1,x\}$, you could deduce what $p(x)$ and $q(x)$ should be, even if the statement of the problem hadn't given you this information beforehand, by computing $$(g^{ij})=\begin{pmatrix} (1,1)& (1,x) \\ (x,1) & (x,x)\end{pmatrix}^{-1}$$and using the previous formula for $(e^i)^\sharp$.
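Concretely, for the present problem this recipe amounts to inverting the $2\times 2$ Gram matrix of $\{1,x\}$; a sketch in sympy:

```python
from sympy import Matrix, Rational, symbols

x = symbols("x")
# Gram matrix g_ij = (e_i, e_j) for e_1 = 1, e_2 = x under the integral pairing:
# (1,1) = 1, (1,x) = (x,1) = 1/2, (x,x) = 1/3
G = Matrix([[1, Rational(1, 2)],
            [Rational(1, 2), Rational(1, 3)]])
Ginv = G.inv()                 # the matrix (g^{ij})
duals = Ginv * Matrix([1, x])  # (e^i)^sharp = sum_j g^{ij} e_j
print(duals.T)  # the polynomials 4 - 6x and -6 + 12x
```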

Answer:

Combining elements of other answers:

I think that what José Carlos Santos is getting at is that there is, to some extent, an abuse of notation in presenting polynomials (elements of $V$) as basis vectors of $V^*$. Given any inner product $(u,v)$, there is a natural isomorphism $\phi: V \rightarrow V^*$, namely $\phi(v)(u) = (u,v)$. That is, $\phi$ maps $V$ to $V^*$, so its output is a function $V \rightarrow \mathbb R$: applying $\phi$ to $v$ returns a function that can be applied to $u$, and the output of that function is the real number $(u,v)$. So if we want to be truly rigorous, then rather than saying that $\{4-6x,-6+12x\}$ is a basis for $V^*$, we should say that $\{\phi(4-6x),\phi(-6+12x)\}$ is a basis for $V^*$.

As for verifying this claim, it's a simple matter of checking that vectors with corresponding indices have inner product $1$ and all other pairs have inner product $0$. Or, in other words, forming the outer product $[1,x]^T[4-6x,-6+12x]$, evaluating each entry with the inner product, and verifying that the result is the identity matrix.

If you were attempting to find the basis of $V^*$ that corresponds to $\{1,x\}$, rather than verifying a given one, taking $[1,x]^T[1,x]$ yields $\begin{pmatrix} (1,1)& (1,x) \\ (x,1) & (x,x)\end{pmatrix}$, which evaluates to $\begin{pmatrix} 1 &\frac12 \\ \frac 12& \frac 13\end{pmatrix}$, and the inverse of that is $\begin{pmatrix} 4& -6 \\ -6 & 12\end{pmatrix}$, which gives you the vectors $4-6x$ and $-6+12x$.
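Numerically, this inversion can be sketched with numpy (floating point, so the entries only come out approximately equal to $4, -6, -6, 12$):

```python
import numpy as np

# Gram matrix of {1, x}: (1,1) = 1, (1,x) = (x,1) = 1/2, (x,x) = 1/3
G = np.array([[1.0, 0.5],
              [0.5, 1.0 / 3.0]])
coeffs = np.linalg.inv(G)
print(np.round(coeffs))  # rows ~ [4, -6] and [-6, 12]
# Row i gives the i-th dual vector in the {1, x} basis:
# 4 - 6x and -6 + 12x.
```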

We could have taken another set of vectors, and the final matrix would have given the coefficients for the basis vectors in terms of those vectors. That is, if we have some basis of $V$, we can build a matrix of those vectors; call it $B$. Then given any basis of $V^*$, we can create a matrix $W$ out of those vectors. We can then form a matrix $U$ as $(WB^T)^{-1}W$. That is, we take the outer product of $W$ and $B$, evaluate the entries according to our inner product, take the inverse, and then use the resulting numbers as coefficients to form linear combinations of the vectors in $W$. Then when we pair $B$ with $U$, we get $B^T(WB^T)^{-1}W=B^T(B^T)^{-1}W^{-1}W=I$ (formally, when the factors are invertible). So $U$ defines a basis that is biorthogonal to $B$.
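As a sanity check of this general recipe, here is a sketch in sympy with a deliberately different auxiliary basis $W$ (the choice $\{1+x,\,x\}$ is arbitrary); the result is the same dual basis regardless of $W$:

```python
from sympy import Matrix, expand, integrate, symbols

x = symbols("x")
inner = lambda p, q: integrate(p * q, (x, 0, 1))

B = [1, x]      # basis whose dual we want
W = [1 + x, x]  # any other basis of V (arbitrary choice)

# M_ij = (w_i, b_j); then U = M^{-1} W is biorthogonal to B.
M = Matrix(2, 2, lambda i, j: inner(W[i], B[j]))
U = [expand(u) for u in (M.inv() * Matrix(W))]
print(U)  # [4 - 6*x, 12*x - 6]
```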