Bilinear form: find $q \in V$ such that $\phi(p)=\psi(p,q)$ for all $p\in V$


Let $V$ be the $\mathbb{R}$-vector space of polynomials of degree $\leq 2$. Let $\psi\colon V \times V \rightarrow \mathbb{R}$ be the bilinear form $$(p,q) \mapsto \int_{0}^{1}p(x)q(x)\,dx$$ and let $\phi$ be the linear form $p \mapsto p(0)$. Find a $q \in V$ such that $\phi(p)=\psi(p,q)$ for all $p\in V$.

I still have trouble understanding what this $q$ that I'm looking for actually is, and especially how I would manage to find it.

There are 3 answers below.

BEST ANSWER

There are clever choices of "sampling functions" $p$ that you can use to tease out the coefficients of this polynomial $q$, but I'm going to show you the robust (if verbose) method to determine $q$.

The vector space $V$ has a basis $\{1, x, x^2\}$ over $\mathbb{R}$. In other words, each $p \in V$ is expressed uniquely as $$ p = ax^2 + bx + c $$ for some $a, b, c \in \mathbb{R}$. Or in the language of coordinates, with respect to the monomial basis, $p$ has coordinates $$ \begin{bmatrix} a \\ b \\ c \end{bmatrix} \in \mathbb{R}^3. $$

Let's express our unknown special function $q \in V$ in this basis too: $$ q = Ax^2 + Bx + C. $$ Our goal is to determine $A, B, C \in \mathbb{R}$ such that $\phi(p) = \psi(p, q)$ for all $p \in V$.

First, in this basis, what is $\phi(p)$? Evaluate at $x=0$: $\phi(p) = p(0) = c$.

Now, the bilinear form involves the product $pq$, so let's work that out in coordinates: \begin{align} p(x) \, q(x) &= \bigl( ax^2 + bx + c \bigr) \bigl( Ax^2 + Bx + C \bigr) \\ &= aA\,x^4 + (bA + aB)\,x^3 + (cA + bB + aC)\,x^2 + (cB + bC)\,x + cC. \end{align} Now, we're supposed to integrate this over $[0, 1]$. Since $$ \int_0^1 k \, x^n \, dx = \biggl. \frac{k}{n+1} x^{n+1} \, \biggr\rvert_0^1 = \frac{k}{n+1} $$ and integration is a linear operator, \begin{align} \psi(p,q) &= \int_0^1 \, p(x) \, q(x) \, dx \\ &= \int_0^1 \, \Bigl( aA\,x^4 + (bA + aB)\,x^3 + (cA + bB + aC)\,x^2 + (cB + bC)\,x + cC \Bigr) \, dx \\ &= \tfrac15 aA + \tfrac14 (bA + aB) + \tfrac13 (cA + bB + aC) + \tfrac12 (cB + bC) + cC \\ &= \bigl( \tfrac15 a + \tfrac14 b + \tfrac13 c \bigr) A + \bigl( \tfrac14 a + \tfrac13 b + \tfrac12 c \bigr) B + \bigl( \tfrac13 a + \tfrac12 b + c \bigr) C \end{align} In coordinates, $$ \psi(p, q) = \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} $$ Now, the requirement that $\psi(p, q) = \phi(p)$ becomes $$ \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} $$ Since we need this equation to hold for any $a, b, c \in \mathbb{R}$, we need to solve the equation $$ \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. $$ Can you take it from here?
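If you want to check your pencil-and-paper work afterwards, the final $3\times 3$ system can be solved numerically; here is a minimal sketch using NumPy (not part of the intended hand computation):

```python
import numpy as np

# Gram matrix of the monomial basis under psi(p, q) = integral of p*q over [0, 1]
M = np.array([[1/5, 1/4, 1/3],
              [1/4, 1/3, 1/2],
              [1/3, 1/2, 1/1]])

# Right-hand side encoding phi(p) = p(0) = c in these coordinates
rhs = np.array([0.0, 0.0, 1.0])

# Coefficients of q = A x^2 + B x + C
A, B, C = np.linalg.solve(M, rhs)
print(A, B, C)
```

Comparing the printed values against your exact hand solution is a good way to catch arithmetic slips in the row reduction.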


You are looking for a polynomial $q(x) \in V$ such that $$ \int_0^1 p(x) q(x)\,dx = p(0) $$ for all $p(x) \in V$.

The element $q \in V$ is determined by its coefficients: $q(x) = b_0 + b_1 x + b_2 x^2$, for some $b_0, b_1, b_2 \in \mathbb{R}$. Using some clever choices of $p(x)$ you should be able to determine equations satisfied by those coefficients.
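For instance (spelling out one natural choice, the monomial basis), plugging $p(x) \in \{1, x, x^2\}$ into $\int_0^1 p(x)q(x)\,dx = p(0)$ yields three linear equations for the coefficients: $$ b_0 + \tfrac12 b_1 + \tfrac13 b_2 = 1, \qquad \tfrac12 b_0 + \tfrac13 b_1 + \tfrac14 b_2 = 0, \qquad \tfrac13 b_0 + \tfrac14 b_1 + \tfrac15 b_2 = 0. $$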


Here is a solution sketch for the generalized question with $V$ being the vector space of real polynomials of degree $\leq n$ (sometimes written as $V=\mathbb{R}_n[X]$). It starts out quite elegantly but then turns ugly because of determinants; I will only describe the elegant part here:

Let $p(x)=\sum_{i=0}^na_ix^i$ (so that $\phi(p)=p(0)=a_0$) and $q(x)=\sum_{j=0}^nb_jx^j$; then we have: \begin{equation} \psi(p,q) =\int_0^1p(x)q(x)\,\mathrm{d}x =\sum_{i=0}^na_i\sum_{j=0}^nb_j\int_0^1x^{i+j}\,\mathrm{d}x =\sum_{i=0}^na_i\sum_{j=0}^n\frac{b_j}{i+j+1} \stackrel{!}{=}a_0. \end{equation} The last equation needs to hold for every given $p\in V$. Plugging in the $n+1$ vectors of the basis $\{1,x,\ldots,x^n\}$ of $V$ results in $n+1$ equations for the $n+1$ coefficients $\{b_0,\ldots,b_n\}$ of $q$: \begin{equation} \sum_{j=0}^n\frac{b_j}{j+1}=1 \qquad\text{and}\qquad \sum_{j=0}^n\frac{b_j}{i+j+1}=0,\quad 1\leq i\leq n. \end{equation} Let $M$ be the $(n+1)\times(n+1)$ matrix with $M_{ij}=\frac{1}{i+j+1}$; then the linear equations can be abbreviated as $\sum_{j=0}^nM_{ij}b_j=\delta_{0i}$ using matrix multiplication. Since the vector on the right-hand side has all entries zero with the exception of only one (pun intended), we can use Cramer's rule and then Laplace expansion along the column of $M$ replaced by this vector, which results in only one term.

EDIT: It's not as ugly as I thought. The matrix I described here is a Hilbert matrix, whose determinant and inverse are known. Hilbert matrices are special cases of Hankel matrices, a few determinants of which can be found here.
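For small $n$ the system $\sum_{j=0}^n M_{ij}b_j=\delta_{0i}$ can also be solved numerically; a sketch in NumPy (note that Hilbert matrices are notoriously ill-conditioned, so for larger $n$ exact rational arithmetic, e.g. Python's `fractions.Fraction`, would be preferable):

```python
import numpy as np

def q_coeffs(n):
    """Coefficients b_0, ..., b_n of q for V = R_n[X], via M b = e_0."""
    # Hilbert-type matrix M[i][j] = 1/(i+j+1), indices starting at 0
    M = np.array([[1.0 / (i + j + 1) for j in range(n + 1)]
                  for i in range(n + 1)])
    e0 = np.zeros(n + 1)
    e0[0] = 1.0
    return np.linalg.solve(M, e0)

# n = 2 recovers the q of the original question
print(q_coeffs(2))
```

For $n = 2$ this reproduces the same $3\times 3$ system as the accepted answer, just with the coefficients listed in increasing order of degree.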