Linear independence of functions $\phi_{n}(\zeta) := 1/(n+\zeta)$

From a textbook on linear algebra:

The functions $\phi_{n}: \mathbb{R}_{+} \to \mathbb{R}, \ \phi_{n}(\zeta) := 1/(n+\zeta),\ n = 1,2,...,$ are elements of the $\mathbb{R}$-vector space of functions from $\mathbb{R}_{+}$ to $\mathbb{R}$. Show that the set $A := \{\phi_{n}: n = 1,2,...\}$ is linearly independent.

So to show that the countably infinite set of functions

$$\phi_{1}(\zeta) := \frac{1}{1+\zeta}, \quad \phi_{2}(\zeta) := \frac{1}{2+\zeta}, \quad \dots$$

is linearly independent, I have to show that no finite subset can form a non-trivial linear combination resulting in the zero function.

I tried to show this by reasoning that any such finite subset has a function with the largest index $n$, and if the functions in that subset were linearly dependent, then the functions in the larger set $\{\phi_{1},...,\phi_{n}\}$ would be linearly dependent as well. It should therefore be enough to show that each set $\{\phi_{1},...,\phi_{n}\}$ $(n \in \mathbb{N})$ is linearly independent.

I then tried to show this by induction over $n$, i.e. to show that if $\phi_{1},...,\phi_{n}$ are linearly independent, then $\phi_{n+1}$ cannot be a linear combination of them. It seems that for any functions $\phi_{1},...,\phi_{n}$ I can find coefficients such that the linear combination agrees with $\phi_{n+1}$ at $n$ points of the domain (for example $1,2,3,...,n$), but at any $(n+1)$-th point (for example at $n+1$) the value of that same linear combination differs from that of $\phi_{n+1}$. (This would make sense: if the functions are linearly independent, then $n$ degrees of freedom in choosing the coefficients should yield exactly one solution for $n$ fixed points of the function, but no solution for $n+1$ fixed points.) But I was unable to find a way to prove this step. Equating the linear combination with $\phi_{n+1}$ and clearing denominators results in large polynomials, and I am not sure how to find a contradiction.
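This matching-then-mismatch behaviour can be illustrated numerically (a sketch assuming numpy; the sample points and the choice $n = 3$ are arbitrary, purely for illustration):

```python
import numpy as np

# Numerical illustration (not a proof) of the idea above, for n = 3:
# fit lambda_1*phi_1 + lambda_2*phi_2 + lambda_3*phi_3 to phi_4 at the
# points x = 1, 2, 3, then evaluate both sides at the extra point x = 4.

def phi(n, x):
    return 1.0 / (n + x)

pts = [1.0, 2.0, 3.0]
A = np.array([[phi(i, x) for i in (1, 2, 3)] for x in pts])
b = np.array([phi(4, x) for x in pts])
lam = np.linalg.solve(A, b)          # unique coefficients fitting the 3 points

fit_at_4 = sum(l * phi(i, 4.0) for l, i in zip(lam, (1, 2, 3)))
print(lam, fit_at_4, phi(4, 4.0))    # the last two values disagree
```

A short partial-fractions computation shows the mismatch at $x = 4$ here is exactly $-1/9800$: small, but provably nonzero, so no choice of coefficients reproduces $\phi_{4}$ everywhere.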

Am I on the right track here? How would one prove this?

3 Answers

Accepted answer

Let $\lambda_1,...,\lambda_n \in \mathbb{R}$ with $$\sum_{i=1}^n \lambda_i\phi_i = 0.$$ Define $\psi_i(x) := (1+x)\cdots(i-1+x)(i+1+x)\cdots(n+x)$ for all $x \in \mathbb{R}$, and observe that $\phi_i(x)(1+x)\cdots(n+x) = \psi_i(x)$ for $x\geq 0$. Multiplying the relation above by $(1+x)\cdots(n+x)$ gives $$\sum_{i=1}^n \lambda_i\psi_i = 0,$$ first on $\mathbb{R}_+$, but then on all of $\mathbb{R}$, since a nonzero polynomial has only finitely many zeros. Now, for all $i,k=1,...,n$ we have $\psi_i(-k) = \delta_{ik}c_i$ with $c_i = (1-i)\cdots(i-1-i)(i+1-i)\cdots(n-i) \neq 0$, and therefore $$0= \sum_{i=1}^n \lambda_i\psi_i(-k) = \sum_{i=1}^n \lambda_i\delta_{ik}c_i = \lambda_kc_k,$$ which implies that $\lambda_k = 0$ for all $k=1,...,n$. Thus $\phi_1,...,\phi_n$ are linearly independent.
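The key step, that $\psi_i(-k)$ vanishes for $k \neq i$ and is nonzero for $k = i$, can be sanity-checked with plain integer arithmetic (a quick sketch; $n = 4$ is an arbitrary choice):

```python
from math import prod

# Check psi_i(-k) = 0 for k != i and psi_i(-i) = c_i != 0, for n = 4,
# where psi_i(x) = product over j != i of (j + x). Pure integers, so exact.
n = 4

def psi_at(i, x):
    return prod(j + x for j in range(1, n + 1) if j != i)

for i in range(1, n + 1):
    for k in range(1, n + 1):
        assert (psi_at(i, -k) != 0) == (i == k)

print([psi_at(i, -i) for i in range(1, n + 1)])  # the constants c_i: [6, -2, 2, -6]
```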

Answer

Let $0 < a < b$ and $\alpha_i \in \mathbb{R}$, and let $\beta_i>0$ be distinct real constants (in your case, they are the integers $\beta_i = i$). For an $n$-subset, suppose that $$\sum_{i = 1}^n\alpha_i \frac{1}{\beta_i+\gamma} = 0 \quad \text{for all } \gamma > 0.$$ Then,

$$0 = \int_{a}^{b}\sum_{i = 1}^n\frac{\alpha_i}{\beta_i+\gamma}\,d\gamma = \sum_{i=1}^n\alpha_i\ln\left(\frac{\beta_i+b}{\beta_i+a}\right),$$

so $F(b) := \sum_{i=1}^n\alpha_i\ln(\beta_i+b)$ does not depend on $b$. Writing $\ln(\beta_i+b) = \ln b + \ln(1+\beta_i/b)$ gives $F(b) = \left(\sum_{i=1}^n\alpha_i\right)\ln b + o(1)$ as $b \to \infty$; since $F$ is bounded, $\sum_{i=1}^n\alpha_i = 0$, and then $F(b) = \sum_{i=1}^n\alpha_i\ln(1+\beta_i/b) \to 0$, so $F \equiv 0$.

Expanding $\ln(1+\beta_i/b)$ in powers of $1/b$ (valid for $b > \max_i\beta_i$) yields

$$0 = F(b) = \sum_{k\geq 1}\frac{(-1)^{k+1}}{k\,b^{k}}\sum_{i=1}^n\alpha_i\beta_i^{k},$$

so $\sum_{i=1}^n\alpha_i\beta_i^{k} = 0$ for every $k \geq 0$. The first $n$ of these equations form a Vandermonde system in the distinct nodes $\beta_1,...,\beta_n$, whose only solution is $\alpha_1 = \cdots = \alpha_n = 0$. Effectively, all coefficients must be zero.
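Independently of the details of this argument, the overall claim admits a quick numerical sanity check: sampling a few of the functions on a grid gives a matrix of full column rank, so no non-trivial combination vanishes even on the grid (a sketch assuming numpy; the grid and the choice of five functions are arbitrary):

```python
import numpy as np

# Sample phi_1, ..., phi_5 on a grid in R_+ and check the sample matrix has
# full column rank: then no non-trivial combination vanishes on the grid,
# let alone on all of R_+. (Numerical evidence, not a proof.)
xs = np.linspace(0.1, 10.0, 200)
M = np.column_stack([1.0 / (i + xs) for i in range(1, 6)])

smin = np.linalg.svd(M, compute_uv=False)[-1]
print(smin > 0)   # smallest singular value is positive
```

Note that `smin` is small, since these functions are numerically close to dependent (the sample matrix is Cauchy-like and ill-conditioned), but it is comfortably above machine precision.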

Answer

This can be done very conceptually. I'll switch notation a bit: we can show more generally that the functions $f_a(x) = \frac{1}{x - a}$, $a \in \mathbb{R}$, are linearly independent (and the same argument works for complex $a \in \mathbb{C}$). The idea is simple: each one has a pole at a different location. That is, if

$$g(x) = \sum \frac{c_a}{x - a} = 0$$

is a purported linear dependence, then we just consider what happens in the limit as $x \to a$. Then the term $\frac{c_a}{x - a}$ goes to $\pm \infty$ while the other terms don't, unless $c_a = 0$. But since this linear dependence is zero by hypothesis, it can't go to $\pm \infty$. And since $a$ is arbitrary in this argument, $c_a = 0$ for all $a$.
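The limit step can be checked symbolically for a small example (a sketch assuming sympy; the two-term function and the symbols `c1`, `c2` are illustrative, not from the argument above):

```python
import sympy as sp

# Two-term instance of the pole argument: multiplying by (x - a) and letting
# x -> a isolates the coefficient of 1/(x - a). Here a = 1 and a = 2.
x, c1, c2 = sp.symbols('x c1 c2')
g = c1 / (x - 1) + c2 / (x - 2)

res1 = sp.limit(sp.cancel((x - 1) * g), x, 1)   # isolates c1
res2 = sp.limit(sp.cancel((x - 2) * g), x, 2)   # isolates c2
print(res1, res2)   # c1 c2
```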

A similar argument about going to infinity can be used to show that the functions $h_a(x) = e^{ax}$ are linearly independent for all real $a \in \mathbb{R}$, but taking the limit as $x \to \infty$ instead. Both of these can also be done using a clever argument with Taylor series but I think the pole argument is good to see first.