Solving $y''=\lambda y$


"Show that all solutions of $y''(x)=\lambda y(x)$ on $0\leqslant x \leqslant L$ with $y(0)=y(L)=0$ are of the form $c\sin\left(\frac{k\pi}{L}x\right).$ (Hint: write down all solutions of the o.d.e. and impose boundary conditions.)"

The problem above is quoted from the textbook I'm using. I'm not sure what the hint means: what does "writing down all solutions" entail? I'm also not sure what method I should use to begin approaching this problem.


There are 3 answers below.


Use the auxiliary equation $m^2=\lambda$. If $\lambda>0$, the general solution is a combination of real exponentials, and if $\lambda=0$ it is $c_1+c_2x$; in both cases the boundary conditions force $y\equiv 0$. So take $\lambda<0$ and write $\lambda=-\alpha^2$ with $\alpha>0$. You have $m^2=-\alpha^2$, so the roots are $m_1=\alpha i$ and $m_2=-\alpha i$, and the general solution is $y(x)=c_1\cos(\alpha x)+c_2\sin(\alpha x)$. Now $$ y(0)= c_1=0$$ $$y(L)=c_2\sin (\alpha L)= 0$$ For a non-trivial solution we need $\sin(\alpha L)=0$. So $\alpha L=\pi k$ for an integer $k$, and then $\alpha = \pi k / L$. The solution is $$y(x)=c_2 \sin \left (\dfrac{\pi k}{L} x \right )$$
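As a quick numerical sanity check (not part of the answer above), the boundary conditions and the ODE residual of $y(x)=\sin(k\pi x/L)$ with $\lambda=-(k\pi/L)^2$ can be verified with a finite-difference approximation of $y''$; the function name `check_eigenfunction` and the chosen values of $k$ and $L$ are illustrative only:

```python
import math

def check_eigenfunction(k: int, L: float, n: int = 1000) -> float:
    """Max residual of y'' - lam*y for y(x) = sin(k*pi*x/L),
    lam = -(k*pi/L)**2, using a central second difference."""
    lam = -(k * math.pi / L) ** 2
    y = lambda t: math.sin(k * math.pi * t / L)
    h = L / n
    worst = 0.0
    for i in range(1, n):
        x = i * h
        ypp = (y(x - h) - 2 * y(x) + y(x + h)) / h**2  # approximates y''(x)
        worst = max(worst, abs(ypp - lam * y(x)))
    return worst

L = 2.0  # arbitrary interval length for illustration
for k in (1, 2, 3):
    # boundary conditions y(0) = y(L) = 0 hold exactly
    assert abs(math.sin(k * math.pi * L / L)) < 1e-12
    # ODE residual is small (the difference scheme is O(h^2) accurate)
    assert check_eigenfunction(k, L) < 1e-3
```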


The unique solution of $y''=\lambda y$ with $y(0)=0,\;y'(0)=1$ is $$ y_{\lambda}(x)=\frac{\sin(\sqrt{-\lambda}\,x)}{\sqrt{-\lambda}}, $$ which for $\lambda>0$ reads $\sinh(\sqrt{\lambda}\,x)/\sqrt{\lambda}$ and in the limiting case $\lambda\rightarrow 0$ becomes $y_0(x)=x$. So the only $\lambda$ for which there is a non-trivial solution of $y''=\lambda y$ with $y(0)=0,\;y(L)=0$ are the values of $\lambda$ for which $y_{\lambda}(L)=0$. $\lambda\geqslant 0$ does not work, since both $\sinh$ and $x$ are positive at $x=L>0$, but $\sqrt{-\lambda}\,L=n\pi$, i.e. $\lambda=-n^2\pi^2/L^2$, does work for $n=1,2,3,\cdots$. So the solutions of the eigenvalue problem are multiples of $$ \sin(n\pi x/L),\;\;\; n=1,2,3,\cdots . $$
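This argument can be sketched numerically: the initial-value solution of $y''=\lambda y$ with $y(0)=0$, $y'(0)=1$ vanishes at $x=L$ exactly at the eigenvalues $\lambda=-(n\pi/L)^2$ and never for $\lambda\geqslant 0$. The function name `y_lam` and the test values below are my own:

```python
import math

def y_lam(lam: float, x: float) -> float:
    """Solution of y'' = lam*y with y(0) = 0, y'(0) = 1,
    split into the three sign cases of lam."""
    if lam < 0:
        mu = math.sqrt(-lam)
        return math.sin(mu * x) / mu   # oscillatory case
    if lam > 0:
        mu = math.sqrt(lam)
        return math.sinh(mu * x) / mu  # exponential case
    return x                           # limiting case lam = 0

L = 1.5  # arbitrary interval length for illustration
# y_lam(., L) vanishes at the eigenvalues lam = -(n*pi/L)**2 ...
for n in (1, 2, 3):
    assert abs(y_lam(-(n * math.pi / L) ** 2, L)) < 1e-12
# ... and at no lam >= 0, since sinh and x are positive at x = L > 0
for lam in (0.0, 0.5, 4.0):
    assert y_lam(lam, L) > 0
```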


As the others have already provided the solutions, I will only elaborate on the completeness proof of the solution set, which seems to be of particular interest to you.

I am not sure whether you are already supposed to know the math (linear algebra, complex analysis) needed for this exercise, but let's see. Also, there are probably other ways to prove it.

As you have already proposed in the comments, you can prove completeness in this special case by transforming the second-order ODE into a first-order ODE system and finding the solutions by straightforward integration, which ensures that you do not miss a solution.

You can transform $y^{\prime\prime}=-\alpha^2 y$ to first order by the substitution $$z := \frac{y^\prime}{\alpha}$$ This equation and the substituted one can now be written in matrix form as $$\frac{d}{dx}\left( \begin{array}{c} y\\z\end{array}\right)=\left( \begin{array}{cc} 0 & \alpha\\-\alpha & 0\end{array}\right)\left( \begin{array}{c} y\\z\end{array}\right)$$ or more simply $$q^\prime=Aq$$

You can diagonalize the matrix $A$ (if you multiplied it by the imaginary unit, it would become Hermitian and hence diagonalizable) by finding its eigenvalues and eigenvectors, $$Aq_\kappa=\kappa q_\kappa\qquad \text{or} \qquad AQ=QK$$ The eigenvalues can be found by setting $$\det(A-\kappa I)=\kappa^2+\alpha^2=0$$ which results in $$\kappa_{1,2}=\pm i\alpha$$

The corresponding eigenvectors solve the equations $$\left( \begin{array}{cc} -i\alpha & \alpha\\-\alpha & -i\alpha\end{array}\right)\left( \begin{array}{c} y_1\\z_1\end{array}\right)=0\qquad \text{and} \qquad \left( \begin{array}{cc} i\alpha & \alpha\\-\alpha & i\alpha\end{array}\right)\left( \begin{array}{c} y_2\\z_2\end{array}\right)=0$$ which leads to $z_1=iy_1$ and $z_2=-iy_2$ and hence (after normalizing the eigenvectors so that $Q$ becomes unitary, with the first column belonging to the eigenvalue $+i\alpha$, the first entry of $K$), $$Q=\left( \begin{array}{cc} -i\sqrt{1/2} & i\sqrt{1/2}\\ \sqrt{1/2} & \sqrt{1/2}\end{array}\right) \qquad , \qquad K=\left( \begin{array}{cc} i\alpha & 0\\0 & -i\alpha\end{array}\right)$$

Now you can transform the first-order ODE to the eigenbasis of $A$, which results in a diagonal ODE system: $$q^\prime=Aq=QKQ^+q \qquad \text{i.e.} \qquad p^\prime := Q^+q^\prime = KQ^+q=:Kp$$ In components of $p=(p_1,p_2)^T$, this equation reads $$p_1^\prime = i\alpha p_1 \qquad \text{and} \qquad p_2^\prime = -i\alpha p_2$$ Each of these can be integrated elementarily, which makes it impossible to miss a solution (and so eventually proves completeness).
For example, the first one: $$i\alpha =\frac{p_1^\prime}{p_1}=\frac{d}{dx}\ln (p_1)$$ $$\ln (p_1)=i\alpha x + c_1$$ $$p_1=a_1\, {\rm e}^{i\alpha x}=a_1 \cos(\alpha x)+i a_1\sin(\alpha x)$$ Similarly for $p_2$: $$p_2=a_2\, {\rm e}^{-i\alpha x}=a_2 \cos(\alpha x)-i a_2\sin(\alpha x)$$

Then you only have to transform the general solution for $p$ back to $q$, i.e. $$q=Qp$$ and impose the reality condition on $q$ (to find the allowed coefficients $a_1$ and $a_2$), which results in the general sine/cosine solutions others have already pointed out. After imposing the boundary conditions you obtain the well-known solutions, which you have now shown to be complete.
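To make the linear algebra concrete, here is a short sanity check in plain Python (the helper names `matmul` and `dagger` and the sample values of $\alpha$ and $x$ are my own). It verifies that $QKQ^+$ reproduces $A$ when the eigenvector columns of $Q$ are ordered to match $K=\mathrm{diag}(i\alpha,-i\alpha)$, and that recombining the decoupled solutions $p$ via $q=Qp$ yields a real sine:

```python
import cmath
import math

alpha = 1.3                 # arbitrary sample value
s = 1 / math.sqrt(2)

# Columns of Q: normalized eigenvectors (-i, 1)/sqrt(2) for +i*alpha
# and (i, 1)/sqrt(2) for -i*alpha, matching K = diag(i*alpha, -i*alpha)
Q = [[-1j * s, 1j * s],
     [s, s]]
K = [[1j * alpha, 0], [0, -1j * alpha]]

def matmul(X, Y):
    """2x2 complex matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(X):
    """Conjugate transpose."""
    return [[X[j][i].conjugate() for j in range(2)] for i in range(2)]

# Q K Q^+ should reproduce the system matrix A = [[0, alpha], [-alpha, 0]]
A = matmul(matmul(Q, K), dagger(Q))
assert abs(A[0][0]) < 1e-12 and abs(A[0][1] - alpha) < 1e-12
assert abs(A[1][0] + alpha) < 1e-12 and abs(A[1][1]) < 1e-12

# Recombine decoupled solutions p = (e^{i*alpha*x}, e^{-i*alpha*x}):
# with a1 = a2 = 1, q = Q p is real and q[0] = sqrt(2)*sin(alpha*x)
x = 0.7
p = [cmath.exp(1j * alpha * x), cmath.exp(-1j * alpha * x)]
q = [Q[0][0] * p[0] + Q[0][1] * p[1],
     Q[1][0] * p[0] + Q[1][1] * p[1]]
assert abs(q[0].imag) < 1e-12
assert abs(q[0].real - math.sqrt(2) * math.sin(alpha * x)) < 1e-12
```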