Understanding the double dual space in a concrete example


I am trying to explore the assertion that for finite-dimensional vector spaces $V^{**}=V.$ Taken literally, this statement is incorrect, as explained in this accepted answer on MathSE, although it is presented as true at this precise point of a YouTube series by Ben Garside, excellent in its didactic approach, and in this minute of a lecture by Frederic Schuller, part of the series Gravity and Light.

But I'd like to focus on a concrete example of a vector space presented here, consisting of the polynomials of degree at most $N=7$ on the interval $(-1,+1)$:

$$\mathcal P := \{p:(-1,+1)\rightarrow \mathbb R \, \vert \, p(x)=\sum_{n=0}^N p_nx^n\}$$

So the vectors are polynomials.

And, as an example of a map in $V^*$ taking in a polynomial $p(x)$ and sending it to a real number, consider the definite integral of the polynomial from $0$ to $1$:

$$I(p):= \int_0^1 dx\,p(x).$$
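Integrating term by term makes $I$ completely explicit in the coefficients of $p$ (a short check, using only the definitions above):

$$I(p)=\int_0^1 \sum_{n=0}^N p_n x^n \, dx = \sum_{n=0}^N \frac{p_n}{n+1},$$

so $I$ is just a fixed linear combination of the coefficients of $p$.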

So the question is: what would be a good example of $V^{**}$, based on this polynomial example, that lets you see how we "somehow return" to $V$?

Or, in other words, how can you take a definite integral and send it back to a polynomial? Or, if the statement $V=V^{**}$ is incorrect, can I still have an illustrative example of a map in $V^{**}$?


NOTE: At the end of the lecture by Frederic Schuller, basis vectors for $V$ are derived for the polynomial example (with $N=3$) as $\large e_a(x)=x^a$:

$$\begin{matrix}e_0(x)=x^0\\ e_1(x)=x^1\\ e_2(x)=x^2\\ e_3(x)=x^3\end{matrix}$$

and the basis for $V^*$ as $\large \epsilon^a(p)=\frac{1}{a!}\left.\frac{d^{a}}{dx^{a}}\,p(x)\right\vert_{x=0}$, which produces the coefficient of the degree-$a$ term of $p(x)$. Since each basis vector $e_b$ has a single coefficient equal to $1$, when $\epsilon^a$ acts as a function on $e_b$ it acts as an indicator function:

$$\epsilon^a(e_b) =\delta^a_b$$
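Indeed, a one-line check with these definitions (not in the original lecture, but immediate):

$$\epsilon^a(e_b)=\frac{1}{a!}\left.\frac{d^{a}}{dx^{a}}\,x^b\right\vert_{x=0}
=\frac{1}{a!}\cdot\begin{cases} a!, & a=b\\ 0, & a\neq b\end{cases}
=\delta^a_b.$$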

There are 3 answers below.

---

First, it is not true that $V^{**} = V$. Rather, $V$ is isomorphic to $V^{**}$ via the linear map $\psi\colon v \mapsto \phi_v$, where $\phi_v\colon V^* \to k$ is given by $\phi_v(f) = f(v)$.
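To tie this to the question's example: $\psi(p)$ is the element of $\mathcal P^{**}$ that takes a functional and evaluates it at $p$. For instance, $\psi(x^2)$ sends the integration functional $I$ to

$$\psi(x^2)(I) = I(x^2) = \int_0^1 x^2\,dx = \tfrac13.$$

So elements of $V^{**}$ are not new objects: in finite dimension, every one of them is "evaluation at some fixed polynomial".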

You describe a vector space $\mathcal{P}$ over $\mathbb{R}$ and an element $I$ of the dual of $\mathcal{P}$, and you want to associate to this element a vector in $\mathcal{P}$. A finite-dimensional vector space is indeed isomorphic to its dual, but there is in general no natural isomorphism; rather, we get one isomorphism for every non-degenerate bilinear form on our vector space. See https://en.wikipedia.org/wiki/Bilinear_form
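As an illustration of that last point, here is a minimal sympy sketch (my own construction, not part of the answer) that chooses the inner product $\langle q,p\rangle=\int_{-1}^{1}q(x)\,p(x)\,dx$ on the question's polynomial space (with $N=3$ for brevity) and solves for the polynomial $q$ representing the functional $I$, i.e. $I(p)=\langle q,p\rangle$ for all $p$ — one concrete way to "send a definite integral back to a polynomial":

```python
import sympy as sp

# Sketch: represent I(p) = integral of p over (0, 1) as a polynomial q via
# the (chosen, not canonical) inner product <q, p> = integral of q*p over
# (-1, 1), on polynomials of degree <= N.
x = sp.symbols('x')
N = 3

# Unknown coefficients of the representing polynomial q.
q_coeffs = sp.symbols(f'q0:{N + 1}')
q = sum(c * x**k for k, c in enumerate(q_coeffs))

# Impose <q, e_b> = I(e_b) on every basis monomial e_b(x) = x**b;
# by linearity this forces <q, p> = I(p) for all p in the space.
equations = [
    sp.Eq(sp.integrate(q * x**b, (x, -1, 1)),   # <q, e_b>
          sp.integrate(x**b, (x, 0, 1)))        # I(e_b)
    for b in range(N + 1)
]
q_repr = sp.expand(q.subs(sp.solve(equations, q_coeffs)))
print(q_repr)  # -35*x**3/32 + 45*x/32 + 1/2

# Sanity check on an arbitrary cubic p.
p = 1 + 2*x - x**2 + 5*x**3
assert sp.simplify(sp.integrate(q_repr * p, (x, -1, 1))
                   - sp.integrate(p, (x, 0, 1))) == 0
```

A different non-degenerate bilinear form would produce a different $q$, which is exactly the sense in which the isomorphism $V\cong V^*$ is not natural.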

---

A notation that is often used is the duality pairing: the bilinear map $\langle \cdot ,\cdot \rangle\colon V^*\times V\to K$, which allows one to treat the two linear spaces more symmetrically, before calling one dual to the other. In your example you could write $I(p)=\langle I ,p \rangle$ and take the viewpoint that it is the polynomials that "act" on the linear map $I$.
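In the coordinates of the question's NOTE this symmetry is visible directly: writing $I=\sum_a I_a\,\epsilon^a$ and $p=\sum_b p^b e_b$, the relation $\epsilon^a(e_b)=\delta^a_b$ reduces the pairing to

$$\langle I, p\rangle = \sum_a I_a\, p^a,$$

an expression in which neither factor looks more "dual" than the other.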

---

I would like to present a concrete example involving a vector space of polynomials, but simpler than the one mentioned in the question.

The vector space considered is two-dimensional, and figures in Problem 11.2 of Lipschutz and Lipson, Reference 1.

Problem 11.2 starts with a definition of our $V$. I quote:

Let $V=\{\,a+bt~:~a,b\in \mathbb{R}\,\}$, the vector space of real polynomials of degree $\leq 1$.

Reference 1 gives us a basis for the ‘Dual Space’ $V^*$ of $V$. I quote:

Find the basis $\{v_1,v_2\}$ of $V$ that is dual to the basis $\{\phi _1, \phi _2\}$ of $V^*$ defined by \begin{equation*} \phi _1 (f(t))=\int_0^1~f(t)~dt~~~\text{and}~~~\phi_2(f(t))=\int_0^2~f(t)~dt \end{equation*}

So, $\{\phi _1, \phi _2\}$ is, according to Reference 1, a basis of our $V^*$.
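For concreteness (this computation is mine; it just solves the quoted problem): writing $v_j=a+bt$ and imposing $\phi_i(v_j)=\delta_{ij}$, with $\phi_1(a+bt)=a+\tfrac{b}{2}$ and $\phi_2(a+bt)=2a+2b$, one finds

$$v_1 = 2-2t, \qquad v_2 = -\tfrac12 + t.$$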

I am going to write ‘Linear Functionals’ (LFs) with a hat on, and use the following definitions, rather than the notation of Reference 1:

\begin{equation*} \hat{\phi} _1 (f(t))=\int_0^1~dt~f(t)~~~\text{and}~~~\hat{\phi}_2(f(t))=\int_0^2~dt~f(t) \end{equation*}

Now (I will expand on this afterwards), the example I intend can be presented briefly as

\begin{align*} V&=\{\,(a+bt)~:~a,b\in \mathbb{R}\,\} \\ V^*&=\{\,\hat{\phi} : \hat{\phi} = c_1\hat{\phi} _1 +c_2 \hat{\phi} _2 ,~c_1,c_2\in \mathbb{R}\,\} \\ V^{**}&=\{\,\widehat{a+bt}~:~a,b\in \mathbb{R}\,\} \end{align*}

where the LFs $\widehat{a+bt}$ are defined by a ‘Natural Map’ \begin{equation*} (\widehat{a+bt})(\hat{\phi} )=\hat{\phi} (a+bt) \end{equation*}

In more detail \begin{align*} V&=\{\,(a+bt)~:~a,b\in \mathbb{R}\,\} \\ V^*&=\{\,\hat{\phi} : \hat{\phi} = c_1\hat{\phi} _1 +c_2 \hat{\phi} _2 ,~c_1,c_2\in \mathbb{R}\,\} \end{align*}

\begin{equation*} \hat{\phi} _1 =\int_0^1~dt~~~\text{and}~~~ \hat{\phi} _2 =\int_0^2~dt \end{equation*} \begin{align*} \hat{\phi} _1 (a+bt) & =\int_0^1~dt ~(a+bt)\\ \hat{\phi} _2 (a+bt)&=\int_0^2~dt ~(a+bt) \end{align*}

\begin{equation*} V^{**}=\{\,\widehat{a+bt}~:~a,b\in \mathbb{R}\,\} \end{equation*}
where the LFs $\widehat{a+bt}$ are defined by a ‘Natural Map’ \begin{equation*} (\widehat{a+bt})(\hat{\phi} )=\hat{\phi} (a+bt) \end{equation*}

The right hand side, RHS, of the above is

\begin{align*} \hat{\phi} (a+bt) &= (c_1\hat{\phi} _1 +c_2 \hat{\phi} _2) (a+bt) \\ &= c_1~\int_0^1~dt ~(a+bt) + c_2~\int_0^2~dt ~(a+bt) \end{align*}
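Carrying out the two integrals (a step I am adding for concreteness) gives the explicit value

\begin{equation*} \hat{\phi} (a+bt) = c_1\left(a+\tfrac{b}{2}\right) + c_2\,(2a+2b). \end{equation*}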

The left hand side, LHS, is of course equal to the RHS of the equation, but you can expand it step by step, where the first step below is the definition of the LFs $\widehat{a+bt}$: \begin{align*} (\widehat{a+bt})(\hat{\phi} )&= \hat{\phi} (a+bt) \\ &= (c_1\hat{\phi} _1 +c_2 \hat{\phi} _2) (a+bt) \\ &= c_1~\int_0^1~dt ~(a+bt) + c_2~\int_0^2~dt ~(a+bt) \end{align*}

So the RHS and LHS of the natural map are the same expression and give the same number from $\mathbb{R}$.
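To see that this natural map really is an isomorphism in this two-dimensional example, one can check that $\widehat{1}$ and $\widehat{t}$ are linearly independent as functionals on $V^*$. A minimal sympy sketch (the function names are mine, not Reference 1's):

```python
import sympy as sp

t = sp.symbols('t')

# The two basis functionals of the answer.
def phi1(f):
    return sp.integrate(f, (t, 0, 1))

def phi2(f):
    return sp.integrate(f, (t, 0, 2))

# Row i holds (phi1(v_i), phi2(v_i)): the coordinates of the double-dual
# vector hat(v_i) in the basis of V** dual to {phi1, phi2}.
M = sp.Matrix([[phi1(sp.Integer(1)), phi2(sp.Integer(1))],
               [phi1(t),             phi2(t)]])
print(M)        # Matrix([[1, 2], [1/2, 2]])
print(M.det())  # 1
```

Since the determinant is nonzero, $\widehat{1}$ and $\widehat{t}$ form a basis of $V^{**}$: the natural map carries the basis $\{1,t\}$ of $V$ to a basis of $V^{**}$, so it is an isomorphism.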

Reference

1. Seymour Lipschutz and Marc Lipson, Schaum's Outlines: Linear Algebra, Fourth Edition, The McGraw-Hill Companies, Inc. (2009)