Rule for $\langle x,y\rangle$ given an orthonormal basis?


How do we define $\langle x,y \rangle$ on a space of polynomials in which $1$, $x-1$, $1-x^2$ form an orthonormal basis ($\Vert a_i\Vert = 1$, $\langle a_i, a_j\rangle = 0$ for $i \neq j$)?

I'm a bit lost. I can work this particular example out in my head, but if somebody could give an exact method for solving problems like this I would be really grateful, so any help is welcome :)


There are 3 answers below.

Answer 1

If you have an orthonormal basis, then implicitly you already have the inner product defined. Given polynomials $p$ and $q$, you can write each as a linear combination of the basis vectors:

$$p=\alpha_1 p_1 + \alpha_2 p_2 + \alpha_3 p_3,\\ q=\beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3.\\$$

Now, you can calculate $$\langle p, q\rangle=\langle\alpha_1 p_1 + \alpha_2 p_2 + \alpha_3 p_3, \beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3\rangle$$ by using linearity of the inner product. So you know that

$$\langle p,q\rangle = \langle \alpha_1 p_1, \beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3\rangle + \langle \alpha_2 p_2, \beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3\rangle + \langle \alpha_3 p_3, \beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3\rangle.$$

Now, let's just calculate the first element in the sum above:

\begin{align} \langle \alpha_1 p_1, \beta_1 p_1 + \beta_2 p_2 + \beta_3 p_3\rangle &= \langle \alpha_1 p_1, \beta_1 p_1\rangle + \langle \alpha_1 p_1, \beta_2 p_2\rangle+\langle \alpha_1 p_1, \beta_3 p_3\rangle\\ &=\alpha_1\beta_1\langle p_1,p_1\rangle + \alpha_1\beta_2\langle p_1,p_2\rangle +\alpha_1\beta_3\langle p_1,p_3\rangle\\ &=\alpha_1\beta_1 + 0 + 0.\end{align}

Hopefully, you can now see that writing everything to the end would result in $$\langle p,q\rangle = \alpha_1 \beta_1 + \alpha_2 \beta_2 + \alpha_3 \beta_3 .$$
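This recipe is easy to check numerically. Here is a minimal sketch (assuming NumPy is available; the function name `inner` is just for illustration): we recover the coordinates $\alpha$, $\beta$ by solving a linear system against the basis matrix, then take the ordinary dot product.

```python
import numpy as np

# Columns: coordinates of 1, x - 1, 1 - x^2 in the standard basis (1, x, x^2).
B = np.array([[1, -1, 1],
              [0, 1, 0],
              [0, 0, -1]], dtype=float)

def inner(p, q):
    """Inner product of polynomials given as standard coefficient vectors
    (c0, c1, c2) meaning c0 + c1*x + c2*x^2."""
    alpha = np.linalg.solve(B, p)  # coordinates in the orthonormal basis
    beta = np.linalg.solve(B, q)
    return alpha @ beta

# Example: p = 1 + x (= 2*1 + 1*(x-1)), q = x^2 (= 1*1 - 1*(1-x^2))
p = np.array([1.0, 1.0, 0.0])
q = np.array([0.0, 0.0, 1.0])
print(inner(p, q))  # 2.0, i.e. 2*1 + 1*0 + 0*(-1)
```

As a sanity check, each basis polynomial has inner product $1$ with itself under this definition, as required.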

Answer 2

Take general polynomials $a_1+b_1x+c_1x^2$ and $a_2+b_2x+c_2x^2$; write them as linear combinations of $1, x-1, 1-x^2$, and use linearity in the first component and conjugate-linearity in the second (which is just linearity in the real case) to find their inner product.
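That symbolic computation can be sketched as follows (assuming SymPy; the helper `coords` is just for illustration). We solve for the coordinates of a general polynomial in the given basis, then dot the two coordinate vectors:

```python
import sympy as sp

x = sp.symbols('x')
u, v, w = sp.symbols('u v w')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')

basis = [sp.Integer(1), x - 1, 1 - x**2]

def coords(a, b, c):
    """Coordinates of a + b*x + c*x^2 in the basis 1, x-1, 1-x^2."""
    # Match coefficients of u*1 + v*(x-1) + w*(1-x^2) against a + b*x + c*x^2.
    eq = u*basis[0] + v*basis[1] + w*basis[2] - (a + b*x + c*x**2)
    (sol,) = sp.linsolve(sp.Poly(eq, x).all_coeffs(), (u, v, w))
    return sp.Matrix(sol)

p = coords(a1, b1, c1)   # (a1 + b1 + c1, b1, -c1)
q = coords(a2, b2, c2)
print(sp.expand(p.dot(q)))
```

The expanded result agrees with the Gram-matrix computation in the answer below: $(a_1+b_1+c_1)(a_2+b_2+c_2) + b_1b_2 + c_1c_2$.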

Answer 3

If you express polynomials in $P_n$ as coordinate vectors $u$, $v$ relative to the standard basis $(1,x,x^2,\dots,x^n)$, then an inner product can be expressed as $\langle u,v\rangle = v^TAu$, where $A$ is some symmetric square matrix. The condition that a set of vectors is orthonormal relative to this inner product can thus be expressed as $B^TAB=I$, where $B$ is the matrix whose columns are the coordinates of those vectors relative to the standard basis. If $B$ is a square matrix (i.e., the vectors form a basis for $P_n$) we can easily solve for $A$: $A=(B^T)^{-1}B^{-1}=(BB^T)^{-1}$. Note that this is just an application of the change of basis formula in disguise: relative to the given basis, the inner product is just the usual dot product that you’re already familiar with, so the matrix of this inner product relative to the given basis is the identity.

In your case, $B=\pmatrix{1&-1&1\\0&1&0\\0&0&-1}$, so $$A=\left[\pmatrix{1&-1&1\\0&1&0\\0&0&-1}\pmatrix{1&0&0\\-1&1&0\\1&0&-1}\right]^{-1}=\pmatrix{3&-1&-1\\-1&1&0\\-1&0&1}^{-1}=\pmatrix{1&1&1\\1&2&1\\1&1&2},$$ i.e., the inner product of $a_0+a_1x+a_2x^2$ and $b_0+b_1x+b_2x^2$ is $$b_0(a_0+a_1+a_2)+b_1(a_0+2a_1+a_2)+b_2(a_0+a_1+2a_2).$$
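As a quick numerical sanity check (a sketch assuming NumPy), we can compute $A=(BB^T)^{-1}$ from $B$ and confirm both that it matches the matrix above and that the given basis really is orthonormal under it, i.e. $B^TAB=I$:

```python
import numpy as np

# Columns of B: coordinates of 1, x - 1, 1 - x^2 in the standard basis (1, x, x^2).
B = np.array([[1, -1, 1],
              [0, 1, 0],
              [0, 0, -1]], dtype=float)

# Matrix of the inner product relative to the standard basis: A = (B B^T)^{-1}.
A = np.linalg.inv(B @ B.T)
print(np.round(A).astype(int))   # [[1 1 1]
                                 #  [1 2 1]
                                 #  [1 1 2]]

# The given basis is orthonormal under this inner product: B^T A B = I.
print(np.allclose(B.T @ A @ B, np.eye(3)))  # True
```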