Polynomial basis - why orthogonal?


I am currently working my way through a Functional Analysis book in self-study and have come across a statement I couldn't quite follow. In one chapter, the authors present an inner product space (slightly adapted):

Let the set $Y$ be the closed interval $[a,b]$, let $S$ be the Lebesgue measurable sets in $Y$ and let $\mu$ be the Lebesgue measure. Then, for the equivalence classes of square-integrable functions on $[a,b]$ we can take, as inner product between two classes $f$ and $g$,

$$(f,g) = \int_{a}^{b}f(x)g(x)\,dx$$

where the integral is just the Lebesgue integral. This space is usually referred to as $L_2(a,b)$.
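To make the definition concrete, here is a minimal numerical sketch (the function name and the default interval $[0,1]$ are my choices, not the book's) that approximates the $L_2(a,b)$ inner product with composite Simpson's rule:

```python
# Approximate the L2(a, b) inner product (f, g) = ∫_a^b f(x) g(x) dx
# with composite Simpson's rule, using only the standard library.

def inner_product(f, g, a=0.0, b=1.0, n=1000):
    """Approximate the L2(a, b) inner product of f and g (n must be even)."""
    h = (b - a) / n
    total = f(a) * g(a) + f(b) * g(b)
    for k in range(1, n):
        x = a + k * h
        total += (4 if k % 2 else 2) * f(x) * g(x)
    return total * h / 3

# Example: (t, t^2) on [0, 1] is ∫_0^1 t^3 dt = 1/4.
print(inner_product(lambda t: t, lambda t: t ** 2))  # ≈ 0.25
```

Simpson's rule is exact for cubics, so the example reproduces $1/4$ up to floating-point rounding.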

Consider the inner product space $L_2(a,b)$ and take the linearly independent set to be $\{1,t,t^2,...,t^n,...\}$. (It is easily verified that this is a linearly independent set by noting that any polynomial of degree $n$ has at most $n$ zeros).

I don't quite follow this last step - how does the fact that each polynomial has a potentially different number of roots imply linear independence?


There are 4 answers below.

BEST ANSWER

To say that they are linearly independent is to say that for any finite set of monomials $\{ t^{r_i} \}_{i=1}^{m}$, the function $g(t)=\sum\limits_{i=1}^m c_i\, t^{r_i}$ is the zero function, i.e. $g(t)=0$ for all $t\in [a,b]$, if and only if $c_i=0$ for every $i$. For $g$ to be the zero function, it would have to vanish at every point of $[a,b]$, and in particular at infinitely many points. But any such $g$ is a polynomial of finite degree, and if it is not the zero polynomial it has only finitely many roots.
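The "finitely many roots" step can also be phrased via interpolation. As an illustration (my own sketch, not part of the answer): a polynomial of degree $k$ that vanishes at $k+1$ distinct points must have all coefficients zero, because the corresponding Vandermonde determinant $\prod_{i<j}(x_j-x_i)$ is nonzero for distinct nodes:

```python
# A degree-k polynomial vanishing at k + 1 distinct nodes has only the
# zero coefficient vector, because the Vandermonde determinant
# prod_{i<j} (x_j - x_i) is nonzero for distinct nodes.
from fractions import Fraction
from itertools import combinations
from math import prod

def vandermonde_det(points):
    """Determinant of the Vandermonde matrix built on the given nodes."""
    return prod(q - p for p, q in combinations(points, 2))

nodes = [Fraction(i, 4) for i in range(5)]  # five distinct points in [0, 1]
print(vandermonde_det(nodes))               # nonzero, so only c = 0 works
```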

Edit: This fact also has nothing to do with orthogonality. To clarify: orthogonality implies linear independence, but not the other way around, and the monomials are generally not orthogonal under your inner product; on $[0,1]$, for example, $(t^i,t^j)=\frac{1}{i+j+1}\neq 0$ for every pair $i,j$. In fact, even without an inner product on the space we still have a notion of linear independence.
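As a quick check of the non-orthogonality claim (assuming, for concreteness, $[a,b]=[0,1]$, my choice): the exact inner product of monomials is $(t^i,t^j)=\int_0^1 t^{i+j}\,dt=\frac{1}{i+j+1}$, so the Gram matrix is the Hilbert matrix, whose entries are all positive:

```python
# Exact Gram matrix of 1, t, t^2, t^3 in L2(0, 1): the Hilbert matrix.
# Every entry is positive, so no two monomials are orthogonal here.
from fractions import Fraction

def monomial_ip(i, j):
    """Exact L2(0, 1) inner product of t^i and t^j."""
    return Fraction(1, i + j + 1)

gram = [[monomial_ip(i, j) for j in range(4)] for i in range(4)]
for row in gram:
    print([str(entry) for entry in row])
```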


From the definition of linear independence, all you have to show is the following.

(Linear independence (LI) in an infinite-dimensional space is, in general, defined through the LI of every finite subset.)

$$ a_n t^n + \dots + a_0 = 0 \quad \text{for all } t \in [a,b] \;\Longrightarrow\; a_0 = a_1 = \dots = a_n = 0 $$

If the coefficients are not all zero, this contradicts the fundamental theorem of algebra: a nonzero polynomial of degree $n$ has at most $n$ roots, yet the left-hand side vanishes at every point of $[a,b]$.


For $n \in \mathbb N_0$ let $p_n(t):=t^n$. Now let $k \in \mathbb N$ and $a_0,a_1,\dots,a_k \in \mathbb R$. We have to show that

$$(*) \quad a_0p_0+a_1p_1+...+a_kp_k=0$$

implies that $a_0=a_1=\dots=a_k=0$. From $(*)$ we see that $a_0+a_1x+\dots+a_kx^k=0$ for all $x \in (a,b)$.

But a nonzero polynomial $a_0+a_1x+\dots+a_kx^k$ has at most $k$ zeros, while the interval $(a,b)$ contains infinitely many points. Hence $a_0=a_1=\dots=a_k=0$.
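To see the root bound in action, here is a small sketch (the polynomial is my own example): evaluating $(x-1)(x-2)$ on a grid finds only its two zeros, never a whole interval of them:

```python
# A nonzero degree-k polynomial has at most k real zeros, so it cannot
# vanish on all of (a, b), which contains infinitely many points.

def poly(x, coeffs):
    """Evaluate a_0 + a_1 x + ... + a_k x^k by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

coeffs = [2.0, -3.0, 1.0]                  # x^2 - 3x + 2 = (x - 1)(x - 2)
zeros = [x for x in range(5) if poly(x, coeffs) == 0.0]
print(zeros)  # [1, 2]: two zeros, as degree 2 allows, not a whole interval
```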


The implicit argument is the following:

Suppose we have a nontrivial linear relation (with finite support) between monomials: $$\lambda_1 t^{n_1}+\lambda_2 t^{n_2}+\dots+\lambda_r t^{n_r}=0,\qquad n_1<n_2<\dots<n_r,\quad \lambda_r\neq 0.$$ The l.h.s. is then a polynomial of degree $n_r$, so it has at most $n_r$ zeros; this contradicts the fact that it vanishes for every $t$ in the infinite set $[a,b]$.
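Finally, returning to the title's "why orthogonal?": the monomials are linearly independent but not orthogonal, and applying Gram-Schmidt to them in $L_2(-1,1)$ produces an orthogonal family, which (up to scaling) is the Legendre polynomials. A sketch in exact arithmetic (all names and the interval choice are mine):

```python
# Gram-Schmidt on 1, t, t^2, ... in L2(-1, 1), with polynomials stored
# as coefficient lists [a_0, a_1, ...].  The inner product uses
# ∫_{-1}^{1} t^m dt = 2/(m+1) for even m, and 0 for odd m.
from fractions import Fraction

def ip(p, q):
    """Exact L2(-1, 1) inner product of two polynomials."""
    s = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:
                s += Fraction(2, i + j + 1) * a * b
    return s

def gram_schmidt(n):
    """Orthogonalize the monomials t^0, ..., t^(n-1)."""
    basis = []
    for k in range(n):
        v = [Fraction(0)] * k + [Fraction(1)]   # the monomial t^k
        for u in basis:
            c = ip(v, u) / ip(u, u)             # projection coefficient
            v = [a - c * (u[i] if i < len(u) else 0) for i, a in enumerate(v)]
        basis.append(v)
    return basis

for p in gram_schmidt(4):
    print([str(a) for a in p])
# up to scaling these are the Legendre polynomials: 1, t, t^2 - 1/3, t^3 - 3t/5
```

The exact rational output makes the orthogonality easy to verify by hand: every pairwise inner product of the resulting polynomials is $0$.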