Linear independence of an infinite set of functions


I came across this question in one of the linear algebra textbooks:

Prove the linear independence of the set $\{f_i \mid i\in\mathbb{N}\}$, where each $f_i : \mathbb{N}\to\mathbb{Q}$ is defined as follows:

For $n\in\mathbb{N}$, $$f_i (n) = \begin{cases} -n & \text{for }n \geq i \\ 0 & \text{for }n < i \end{cases} $$

What would be an appropriate approach to check the independence of this infinite set? I thought long about it but was unable to come up with anything.

Any help would be appreciated. Thanks!


Accepted answer:

View each function as an infinite vector
$$v_i = \left( \begin{array}{c} f_i (1)\\ f_i (2) \\ f_i (3)\\ \vdots \end{array}\right) = \left( \begin{array}{c} 0\\0\\\vdots\\0\\-i\\ -i-1\\-i-2\\\vdots\end{array}\right)$$

in $\prod_{n\in\mathbb{N}}\mathbb{Q}$, and show that the $v_i$ are linearly independent (this is a more natural way to view it, in my opinion). For example, for $i=5$, $v_5 = (0,0,0,0,-5,-6,-7,\dots)$.

Then to check linear independence, we check that the following holds only when all coefficients $a_i$ are zero (in a linear combination, only finitely many of the $a_i$ are nonzero): $$ \sum_{i\geq 1} a_i v_i = \left( \begin{array}{c} -a_1 \\ -2(a_1 + a_2) \\ -3 (a_1 + a_2 + a_3) \\ \vdots \end{array}\right) = \left( \begin{array}{c} 0 \\ 0 \\ 0 \\ \vdots \end{array}\right)$$

But that's clear: solving the first entry gives $a_1 = 0$. Substituting that into the second entry gives $a_2 = 0$. Continuing inductively, we see that $$ a_i = 0 \qquad \mbox{ for all } i\geq 1.$$
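The triangular shape of this system can also be checked numerically. Below is a small Python sketch (a sanity check, not part of the proof; the truncation size `k = 6` is an arbitrary choice): the matrix whose $n$-th row lists $f_1(n), \dots, f_k(n)$ is lower triangular with nonzero diagonal entries $f_n(n) = -n$, so its determinant is nonzero and the truncated vectors $v_1, \dots, v_k$ are independent.

```python
from fractions import Fraction

def f(i, n):
    # f_i(n) = -n for n >= i, and 0 for n < i (values in Q)
    return Fraction(-n) if n >= i else Fraction(0)

k = 6  # arbitrary truncation: first k functions, first k arguments

# M[n-1][i-1] = f_i(n); zeros above the diagonal since f_i(n) = 0 for i > n
M = [[f(i, n) for i in range(1, k + 1)] for n in range(1, k + 1)]
assert all(M[n][i] == 0 for n in range(k) for i in range(n + 1, k))

# Lower-triangular determinant = product of the diagonal entries f_n(n) = -n
det = Fraction(1)
for n in range(k):
    det *= M[n][n]
assert det != 0  # nonzero, so the truncated vectors are independent
```

Since any finite subfamily of the $v_i$ sits inside such a triangular system, this mirrors the forward-substitution argument above.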

Answer:

I assume that in your class $0\notin \Bbb N$ (as otherwise the claim is clearly false).

Assume $\sum_i c_i f_i=0$ with almost all $c_i=0$. Dividing the value at $n$ by $n$ (which is nonzero), we get $\sum_i c_i\frac{f_i(n)}{n}=0$ for every $n$, where $\frac{f_i(n)}{n}=-1$ if $n\geq i$ and $0$ otherwise. For any $n>1$, we then have $$ 0=\sum_i c_i\frac{f_i(n)}{n}-\sum_i c_i\frac{f_i(n-1)}{n-1}=\sum_i c_i\left(\frac{f_i(n)}{n}-\frac{f_i(n-1)}{n-1}\right)=-c_n,$$ since the $i$-th summand vanishes except when $i=n$, hence $c_n=0$. Thus $0=\sum_i c_if_i(1)=c_1f_1(1)=-c_1$ and also $c_1=0$.
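As a sanity check on this difference idea, here is a small Python sketch (the coefficients $c_1=3$, $c_2=-5$, $c_4=7$ are arbitrary choices for illustration). Since a finite combination $h=\sum_i c_i f_i$ satisfies $h(n) = -n(c_1+\dots+c_n)$, dividing its values by $n$ and differencing recovers every coefficient; so if $h$ were the zero function, each $c_n$ would have to be $0$.

```python
from fractions import Fraction

def f(i, n):
    # f_i(n) = -n for n >= i, and 0 for n < i
    return Fraction(-n) if n >= i else Fraction(0)

# A hypothetical finite combination h = sum_i c_i f_i (c_i = 0 otherwise)
c = {1: Fraction(3), 2: Fraction(-5), 4: Fraction(7)}

def h(n):
    return sum(ci * f(i, n) for i, ci in c.items())

# h(n) = -n * (c_1 + ... + c_n), so c_n = h(n-1)/(n-1) - h(n)/n for n > 1
recovered = {1: -h(1)}  # h(1) = -c_1
for n in range(2, 8):
    recovered[n] = h(n - 1) / (n - 1) - h(n) / n
```

Every coefficient comes back exactly, including the zero ones, so the values of $h$ determine all the $c_i$.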

Answer:

Consider the series

$$\sum_{i=1}^\infty \alpha_i f_i (n)$$ for scalars $\alpha_1, \alpha_2, \dots$, only finitely many of which are nonzero (a linear combination involves finitely many terms).

We aim to prove that this sum is zero for every $n$ only when all scalars are zero.

First note that for a fixed $n$, only the terms with $i \leq n$ contribute: $f_i(n) = -n$ when $i \leq n$, and $f_i(n) = 0$ when $i > n$. Hence $$\sum_{i=1}^\infty \alpha_i f_i(n) = -n\sum_{i=1}^{n} \alpha_i.$$

Now suppose some scalar is nonzero, and let $m$ be the smallest index with $\alpha_m \neq 0$. Evaluating at $n = m$ gives $$\sum_{i=1}^\infty \alpha_i f_i(m) = -m\sum_{i=1}^{m}\alpha_i = -m\,\alpha_m \neq 0,$$ since $\alpha_i = 0$ for all $i < m$. No cancellation among the terms is possible here, because at $n = m$ the single term $\alpha_m f_m(m)$ is the only one that survives.

So $\sum_{i=1}^\infty \alpha_i f_i (n) \neq 0$ for some $n$ unless $\alpha_i = 0 \space \forall i \in \mathbb{N}$.

So the elements of $\{f_i \mid i \in \mathbb{N}\}$ are linearly independent, since the only solution to this equation is the trivial one where all of the scalars are zero.
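A quick numeric illustration of why the terms cannot cancel (the scalars $\alpha_3 = 2$, $\alpha_5 = -9$ are arbitrary choices for illustration): evaluating at the smallest index $m$ carrying a nonzero scalar, every other term vanishes, leaving $-m\,\alpha_m \neq 0$.

```python
from fractions import Fraction

def f(i, n):
    # f_i(n) = -n for n >= i, and 0 for n < i
    return Fraction(-n) if n >= i else Fraction(0)

# Hypothetical scalars with finite support; alpha_i = 0 for all other i
alpha = {3: Fraction(2), 5: Fraction(-9)}

m = min(i for i, a in alpha.items() if a != 0)  # smallest index used
value = sum(a * f(i, m) for i, a in alpha.items())

# f_i(m) = 0 for i > m, so only the i = m term survives: value = -m * alpha_m
assert value == -m * alpha[m] != 0
```

So the combination already fails to vanish at $n = m$, with no possibility of cancellation from the higher-index terms.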