Alternative proof for linear independence of n solutions of a nth-order ordinary differential equation


I am a beginner with differential equations, and I came across the existence and uniqueness theorem for an $n$th-order differential equation in the book I am following. To make this question precise, I first state that theorem.

Theorem: Let $L(y)(x)=y^{(n)}(x)+p_{1}(x)y^{(n-1)}(x)+\dots+p_{n}(x)y(x)=0$, $x\in I$, be an $n$th-order ODE, where $p_{1},p_{2},\dots,p_{n}$ are continuous on the interval $I$, let $x_{0}$ be a point of $I$, and let $a_{0},a_{1},\dots,a_{n-1}$ be $n$ constants. Then there exists a unique solution $\phi$ on $I$ of the $n$th-order ODE given above satisfying the initial conditions $\phi(x_{0})=a_{0}$, $\phi'(x_{0})=a_{1}$, $\dots$, $\phi^{(n-1)}(x_{0})=a_{n-1}$.

A note is also given as

Note: Suppose that $\phi_{1}(x),\dots,\phi_{n}(x)$ are $n$ solutions of $L(y)(x)=0$ given above, and that $c_{1},c_{2},\dots,c_{n}$ are $n$ arbitrary constants. Since $L(\phi_{1})=L(\phi_{2})=\dots=L(\phi_{n})=0$ and $L$ is a linear operator, we have $$L(c_{1}\phi_{1}+c_{2}\phi_{2}+\dots+c_{n}\phi_{n})=c_{1}L(\phi_{1})+\dots+c_{n}L(\phi_{n})=0.$$

The $n$ solutions are linearly independent if $$c_{1}\phi_{1}+\dots+c_{n}\phi_{n}=0 \text{ for all } x\in I \implies c_{1}=c_{2}=\dots=c_{n}=0.$$
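Both parts of the note can be checked concretely. Below is a quick SymPy sketch; the second-order example $y''+y=0$ (so $p_1=0$, $p_2=1$) is chosen only for illustration:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Second-order illustration: L(y) = y'' + y (so p1 = 0, p2 = 1).
def L(f):
    return sp.diff(f, x, 2) + f

phi1, phi2 = sp.sin(x), sp.cos(x)   # two solutions of L(y) = 0
assert sp.simplify(L(phi1)) == 0 and sp.simplify(L(phi2)) == 0

# Superposition: L of any combination c1*phi1 + c2*phi2 is again 0.
combo = c1*phi1 + c2*phi2
assert sp.simplify(L(combo)) == 0

# Independence: c1*sin(x) + c2*cos(x) = 0 for all x forces c1 = c2 = 0
# (evaluating the identity at the sample points x = 0 and x = pi/2).
sol = sp.solve([combo.subs(x, 0), combo.subs(x, sp.pi/2)], [c1, c2])
print(sol)  # → {c1: 0, c2: 0}
```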

Next comes the question. We have to prove that the third-order ODE $y'''+p_{1}(x)y''+p_{2}(x)y'+p_{3}(x)y=0$, where $p_{1},p_{2},p_{3}$ are continuous on the interval $I$, has three linearly independent solutions on $I$.

To prove this, the author says:

Using the existence and uniqueness theorem for $n$th-order ODEs stated above, we conclude that there exist solutions $\phi_{1}(x),\phi_{2}(x),\phi_{3}(x)$ of the given ODE such that, for $x_{0}\in I$,

$\phi_{1}(x_{0})=0, \phi_{1}^{'}(x_{0})=0,\phi_{1}^{''}(x_{0})=0 $

$\phi_{2}(x_{0})=0, \phi_{2}^{'}(x_{0})=1,\phi_{2}^{''}(x_{0})=0 $

$\phi_{3}(x_{0})=0, \phi_{3}^{'}(x_{0})=0,\phi_{3}^{''}(x_{0})=1 $

and then the author proceeds with the proof. There are two things I did not understand here. The first is the linear operator $L$ in the definition [how is $L(\phi_{1})=\dots=L(\phi_{n})=0$? What kind of linear operator is this? Any example?], and the second is how the author reached this conclusion from the existence and uniqueness theorem.

I also checked the Wronskian, and it is identically $0$.

On BEST ANSWER

The first one is the linear operator $L$ part in the definition [ how $L\phi_1=L\phi_2=\cdots=L\phi_n=0$? what kind of linear operator is this? any example? ]

We are working on a space of functions - say, all real-valued functions that are infinitely differentiable on a certain interval. This is, of course, a vector space over $\mathbb{R}$. The derivative $D$ (with $D(f)=f'$) and the higher derivatives $D^k(f)=f^{(k)}$ are linear operators from this space to itself. So are the multiplication operators $M_{p_i}$ given by $M_{p_i}(y)(x) = p_i(x)y(x)$. Composing these linear operators, as in $y(x)\mapsto p_i(x)y^{(n-i)}(x)$, also gives linear operators. Then we add them up and get $L$, the linear operator that takes $y(x)$ to $y^{(n)}(x)+p_1(x)y^{(n-1)}(x)+\cdots + p_{n-1}(x)y'(x)+p_n(x)y(x)$.
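Linearity of such an operator can be verified symbolically. Here is a sketch for a second-order $L$ with arbitrarily chosen coefficient functions $p_1=e^x$, $p_2=x^2$ (these are illustrative, not from the original question):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f, g = sp.Function('f')(x), sp.Function('g')(x)

# Illustrative coefficients for a second-order L(y) = y'' + p1*y' + p2*y.
p1, p2 = sp.exp(x), x**2

def L(y):
    # Built from derivative operators and multiplication by functions of x.
    return sp.diff(y, x, 2) + p1*sp.diff(y, x) + p2*y

# Linearity: L(a*f + b*g) == a*L(f) + b*L(g) for symbolic f, g.
lhs = L(a*f + b*g)
rhs = a*L(f) + b*L(g)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```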

That's the point: every linear differential equation is of the form $Ly=0$ for some linear operator $L$, built from a combination of derivative operators, multiplication by functions, and addition.

...and second how did the author get to this conclusion from existence and uniqueness theorem?

We choose our base point $x_0$ for the initial conditions. There is then a linear map $X$ from our space of functions to $\mathbb{R}^n$: we take $f$ to the vector $(f(x_0),f'(x_0),\dots,f^{(n-1)}(x_0))$. If the functions $f_1,f_2,\dots,f_m$ are linearly dependent, with relation $c_1f_1+c_2f_2+\cdots+c_mf_m=0$, then their images $X(f_j)$ satisfy the same dependence relation $c_1X(f_1)+c_2X(f_2)+\cdots+c_mX(f_m)=0$. By the contrapositive, if the $X(f_j)$ are linearly independent, so are the $f_j$. (This is a standard fact of linear algebra.)
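As a concrete sketch of this map $X$ (with the illustrative choices $n=3$, $x_0=0$, and the sample functions $1$, $x$, $x^2$):

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
x0 = 0  # base point, chosen for illustration

def X(f, n=3):
    """Map f to the vector (f(x0), f'(x0), ..., f^{(n-1)}(x0))."""
    return np.array([float(sp.diff(f, x, k).subs(x, x0)) for k in range(n)])

# Three sample functions and their images under X, as matrix columns:
fs = [sp.Integer(1), x, x**2]
M = np.column_stack([X(f) for f in fs])

# The images have full rank, so the functions are linearly independent.
print(np.linalg.matrix_rank(M))  # → 3
```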

But then, the existence-uniqueness theorem (existence half) says we can find functions $\phi_i$ satisfying the differential equation with $\phi_i^{(i-1)}(x_0)=1$ and $\phi_i^{(k)}(x_0)=0$ for every other $k$ in $\{0,1,\dots,n-1\}$. Under our linear map $X$, these $\phi_i$ map to a linearly independent set - the standard basis of $\mathbb{R}^n$. Pulling back, the $\phi_i$ must also be linearly independent.
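For the simplest third-order case $y'''=0$ (i.e. $p_1=p_2=p_3=0$, chosen only so the solutions are explicit), the functions picked out by the existence half at $x_0=0$ are $1$, $x$, $x^2/2$:

```python
import sympy as sp

x = sp.symbols('x')
x0 = 0  # base point

# Explicit solutions of y''' = 0 with standard-basis initial data at x0:
phi = [sp.Integer(1), x, x**2/2]

# Each phi_i satisfies the ODE...
for f in phi:
    assert sp.diff(f, x, 3) == 0

# ...and its initial data (f(x0), f'(x0), f''(x0)) is a standard basis vector.
for i, f in enumerate(phi):
    data = [sp.diff(f, x, k).subs(x, x0) for k in range(3)]
    print(data)  # rows: [1, 0, 0], [0, 1, 0], [0, 0, 1]
```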

Where does the uniqueness half of the existence-uniqueness theorem come in? That tells us that we can't have more than $n$ linearly independent solutions; if we did, we would have a nonzero solution $\phi_{n+1}$ with $\phi_{n+1}(x_0)=\phi_{n+1}'(x_0)=\cdots=\phi_{n+1}^{(n-1)}(x_0)=0$, and uniqueness rules that out.
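Uniqueness can also be seen in the $y'''=0$ example (again an illustrative choice): solving with all-zero initial data at $x_0=0$ yields only the zero function.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Uniqueness in action: the only solution of y''' = 0 with all-zero
# initial data at x0 = 0 is the zero function.
sol = sp.dsolve(
    y(x).diff(x, 3),                       # the ODE y''' = 0
    y(x),
    ics={y(0): 0,
         y(x).diff(x).subs(x, 0): 0,
         y(x).diff(x, 2).subs(x, 0): 0},
)
print(sol)  # → Eq(y(x), 0)
```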

I also checked the Wronskian, and it is identically $0$.

It shouldn't be. It would help to fix the typo there: it should be $\phi_1(x_0)=1$. Also, the leading term in the definition of $L$ should be $y^{(n)}$.
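With the typo fixed, the Wronskian is nonzero. Taking the $y'''=0$ example again (an illustrative special case), the three solutions $1$, $x$, $x^2/2$ with standard-basis initial data give:

```python
import sympy as sp

x = sp.symbols('x')

# Solutions of y''' = 0 with phi1(x0) = 1 (typo fixed), phi2'(x0) = 1, phi3''(x0) = 1:
phi1, phi2, phi3 = sp.Integer(1), x, x**2/2

W = sp.Matrix([
    [phi1, phi2, phi3],
    [sp.diff(phi1, x), sp.diff(phi2, x), sp.diff(phi3, x)],
    [sp.diff(phi1, x, 2), sp.diff(phi2, x, 2), sp.diff(phi3, x, 2)],
]).det()
print(W)  # → 1, nonzero everywhere, as expected for independent solutions
```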