Linear independence of functions $\phi_{n}(\zeta) := 1/(n+\zeta)$
From a textbook on linear algebra:
The functions $\phi_{n}: \mathbb{R}_{+} \to \mathbb{R}, \ \phi_{n}(\zeta) := 1/(n+\zeta),\ n = 1,2,...,$ are elements of the $\mathbb{R}$-vector space of functions from $\mathbb{R}_{+}$ to $\mathbb{R}$. Show that the set $A := \{\phi_{n}: n = 1,2,...\}$ is linearly independent.
So to show that the countably infinite set of functions
$$\phi_{1}(\zeta) := \frac{1}{1+\zeta}, \qquad \phi_{2}(\zeta) := \frac{1}{2+\zeta}, \qquad \dots$$
is linearly independent, I have to show that no non-trivial finite linear combination of them equals the zero function.
I tried to show this by reasoning that any such finite subset contains a function with the largest index $n$ within the subset, and if the functions in that subset were linearly dependent, then the functions in the larger subset $\{\phi_{1},\dots,\phi_{n}\}$ would be as well. It should therefore be enough to show that every set $\{\phi_{1},\dots,\phi_{n}\}$ $(n \in \mathbb{N})$ is linearly independent.
I was then trying to show this by induction over $n$, i.e. to show that if $\phi_{1},\dots,\phi_{n}$ are linearly independent, then $\phi_{n+1}$ cannot be a linear combination of them. It seems that for any functions $\phi_{1},\dots,\phi_{n}$ I can find coefficients such that the linear combination agrees with $\phi_{n+1}$ at $n$ points of the domain (for example $1,2,3,\dots,n$), but at any $(n+1)$-th point (for example at $n+1$) the value of that same linear combination differs from the value of $\phi_{n+1}$. (This would make sense: if the functions are linearly independent, then the $n$ degrees of freedom in choosing the coefficients should yield exactly one solution for $n$ fixed points, but no solution for $n+1$ fixed points.) But I was unable to prove this step. Expanding the equation between the linear combination and $\phi_{n+1}$ leads to large polynomials, and I do not see how to derive a contradiction.
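This observation can at least be checked numerically. The sketch below (an illustration using NumPy, with $n = 3$ chosen arbitrarily) solves for coefficients that make $\sum_i \lambda_i\phi_i$ agree with $\phi_{n+1}$ at the points $1,\dots,n$, and then evaluates the mismatch at the extra point $n+1$:

```python
import numpy as np

def phi(n, x):
    # phi_n(x) = 1 / (n + x)
    return 1.0 / (n + x)

n = 3
pts = np.arange(1, n + 1, dtype=float)  # sample points 1, ..., n

# Solve sum_i lam[i] * phi_i(x_j) = phi_{n+1}(x_j) for j = 1, ..., n.
A = np.array([[phi(i, x) for i in range(1, n + 1)] for x in pts])
b = np.array([phi(n + 1, x) for x in pts])
lam = np.linalg.solve(A, b)

# The combination matches phi_{n+1} exactly at the n fitted points ...
fit = A @ lam
print(np.allclose(fit, b))  # True

# ... but misses at the extra point x = n + 1.
x_new = float(n + 1)
combo = sum(lam[i - 1] * phi(i, x_new) for i in range(1, n + 1))
residual = abs(combo - phi(n + 1, x_new))
print(residual > 1e-6)  # True: the mismatch is genuinely nonzero
```

In this run the residual at $x = n+1$ comes out around $10^{-4}$, far above floating-point noise, which supports the intuition that matching at $n$ points exhausts the $n$ degrees of freedom.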
Am I on the right track here? How would one prove this?
Let $\lambda_1,\dots,\lambda_n \in \mathbb{R}$ with $$\sum_{i=1}^n \lambda_i\phi_i = 0.$$

Define $\psi_i(x) := (1+x)\cdots(i-1+x)(i+1+x)\cdots(n+x)$ for all $x \in \mathbb{R}$, and observe that $\phi_i(x)(1+x)\cdots(n+x) = \psi_i(x)$ for $x \geq 0$. Hence $$\sum_{i=1}^n \lambda_i\psi_i = 0,$$ first on $\mathbb{R}_+$, but then also on all of $\mathbb{R}$, since a nonzero polynomial has only finitely many zeros.

Now, for all $k=1,\dots,n$ we have $\psi_i(-k) = \delta_{ik}c_i$ with $c_i = (1-i)\cdots(i-1-i)(i+1-i)\cdots(n-i) \neq 0$, and therefore $$0 = \sum_{i=1}^n \lambda_i\psi_i(-k) = \sum_{i=1}^n \lambda_i\delta_{ik}c_i = \lambda_kc_k,$$ which implies $\lambda_k = 0$ for all $k=1,\dots,n$. Thus $\phi_1,\dots,\phi_n$ are linearly independent.
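The pointwise identity $\psi_i(-k) = \delta_{ik}c_i$ at the heart of this argument involves only integers, so it is easy to verify by direct computation. The following sketch (an illustrative check for $n = 4$, not part of the proof) does exactly that:

```python
from math import prod

n = 4

def psi(i, x):
    # psi_i(x) = product over k = 1..n, k != i, of (k + x)
    return prod(k + x for k in range(1, n + 1) if k != i)

for i in range(1, n + 1):
    # c_i = psi_i(-i) = product over k != i of (k - i), never zero
    c_i = prod(k - i for k in range(1, n + 1) if k != i)
    assert c_i != 0
    for k in range(1, n + 1):
        # psi_i(-k) = c_i if k == i, and 0 otherwise (i.e. delta_ik * c_i)
        assert psi(i, -k) == (c_i if k == i else 0)

print("verified psi_i(-k) = delta_ik * c_i for n =", n)
```

For $k \neq i$ the factor $(k + x)$ survives in $\psi_i$ and vanishes at $x = -k$, which is exactly why evaluating at the negative integers $-1,\dots,-n$ isolates one coefficient at a time.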