Bessel function: why do we ignore the odd terms?


When solving the Bessel equation $$y''+\frac{1}{x}y'+\left(1-\frac{\nu^2}{x^2}\right)y=0$$ with the series ansatz $$y=x^\alpha \sum^\infty_{n=0} a_n x^n,$$ you get the following conditions: $$(\alpha^2-\nu^2)a_0=0,$$ $$((\alpha+1)^2-\nu^2)a_1=0,$$ and $$a_{n+2}=-\frac{a_n}{(n+2+\alpha)^2-\nu^2}.$$ In every source I can find, people take $\alpha=\nu$ and $a_1=0$. But aren't we equally justified in taking $a_0=0$ and $\alpha+1=\nu$? If so, why does no one do it, and if not, why not?
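As a concrete illustration of the standard choice, here is a short numerical sketch (the function name and the normalisation $a_0=1/(2^\nu\,\Gamma(\nu+1))$ are illustrative extras; that normalisation is the conventional one that makes the series equal $J_\nu$, but it is not forced by the recurrence) which iterates $a_{n+2}=-a_n/\big((n+2+\alpha)^2-\nu^2\big)$ with $\alpha=\nu$, $a_1=0$ and compares the partial sum with scipy's $J_\nu$:

```python
from scipy.special import jv, gamma

def bessel_series(x, nu, terms=30):
    """Partial sum of the Frobenius series x^alpha * sum_n a_n x^n with
    alpha = nu, a_1 = 0, the recurrence
        a_{n+2} = -a_n / ((n + 2 + alpha)^2 - nu^2),
    and the (assumed, conventional) normalisation
        a_0 = 1 / (2^nu * Gamma(nu + 1))."""
    alpha = nu
    a = 1.0 / (2.0**nu * gamma(nu + 1))
    total = 0.0
    for n in range(0, 2 * terms, 2):  # only even n survive since a_1 = 0
        total += a * x**(n + alpha)
        a = -a / ((n + 2 + alpha)**2 - nu**2)
    return total

# The recurrence with this a_0 reproduces scipy's J_nu to high accuracy:
for nu in (0.0, 1.0, 2.5):
    for x in (0.5, 1.0, 3.0):
        assert abs(bessel_series(x, nu) - jv(nu, x)) < 1e-10
```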



Let us suppose that we do take the case you propose, with $\alpha=\nu-1$ and $a_0=0$. Our series expansion then becomes $$y=x^{\nu-1}\sum^\infty_{n=1,3,\dots}a_nx^n,$$ where $$a_{n+2}=-\frac{a_n}{(n+1+\nu)^2-\nu^2}.$$ If we now change indices via $n=m+1$, we get $$y=x^{\nu-1}\sum^\infty_{m=0,2,\dots}a_{m+1}x^{m+1}=x^\nu \sum^\infty_{m=0,2,\dots}a_{m+1}x^{m},$$ where $$a_{m+3}=-\frac{a_{m+1}}{(m+2+\nu)^2-\nu^2}.$$ Relabelling $b_m=a_{m+1}$, this is exactly the same expansion as if you were to take $\alpha=\nu$: $$y=x^\nu\sum^\infty_{m=0,2,\dots}b_mx^m,\qquad b_{m+2}=-\frac{b_m}{(m+2+\nu)^2-\nu^2}.$$ In other words, it doesn't matter which choice you make; both give you the same answer. In either series we can define $a_0$ (or $b_0$) to be the first non-zero coefficient.
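The index-shift argument can be checked numerically; the sketch below (function name and normalisation are illustrative) iterates the recurrence $a_{n+2}=-a_n/\big((n+2+\alpha)^2-\nu^2\big)$ for both choices of $\alpha$ and confirms the partial sums coincide:

```python
def frobenius(x, alpha, nu, start, a_start, terms=25):
    """Partial sum of x^alpha * sum a_n x^n where only n = start, start+2, ...
    contribute, with a_{n+2} = -a_n / ((n + 2 + alpha)^2 - nu^2).
    The seed a_start is an arbitrary illustrative choice."""
    a, total, n = a_start, 0.0, start
    for _ in range(terms):
        total += a * x**(n + alpha)
        a = -a / ((n + 2 + alpha)**2 - nu**2)
        n += 2
    return total

nu, x = 1.5, 2.0
# alpha = nu with even terms, versus alpha = nu - 1 with odd terms:
y_even = frobenius(x, alpha=nu, nu=nu, start=0, a_start=1.0)
y_odd = frobenius(x, alpha=nu - 1, nu=nu, start=1, a_start=1.0)
assert abs(y_even - y_odd) < 1e-12  # term-by-term identical after the shift
```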


You have actually found that one can choose either $(a_0,a_1)=(1,0)$ or $(0,1)$.

1st case: $a_0=1$ and $a_1=0$.

The recurrence relation you have tells you that $a_k=0$ for all odd $k$ and that $a_k\neq0$ for all even $k$. This is the solution one can find everywhere; it is called the Bessel function $J_\nu$. The immediate properties of $J_\nu$ that you have found are that $x^{-\nu}J_\nu(x)$ is an even function and that $J_\nu(x)\sim x^\nu$ as $x\to0$.
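That small-$x$ behaviour is easy to confirm with scipy. The leading term $J_\nu(x)\approx(x/2)^\nu/\Gamma(\nu+1)$ used below is the conventionally normalised version of $J_\nu(x)\sim x^\nu$ (the constant is an assumption here, since the argument above fixes $J_\nu$ only up to normalisation):

```python
from scipy.special import jv, gamma

# Known leading behaviour: J_nu(x) ~ (x/2)^nu / Gamma(nu + 1) as x -> 0.
nu = 2.0
for x in (1e-3, 1e-4):
    leading = (x / 2)**nu / gamma(nu + 1)
    assert abs(jv(nu, x) / leading - 1) < 1e-6
```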

2nd case: $a_0=0$ and $a_1=1$.

Let us start from the beginning. We have $[(\alpha+1)^2-\nu^2]a_1=0$, so we can choose $\alpha=\nu-1$ (the case $\alpha<0$ will appear later). Writing out the expansion, we get $$y(x)=x^{\nu-1}\big(a_1x+a_3x^3+\cdots\big)=x^\nu\big(a_1+a_3x^2+\cdots\big),$$ which is exactly the same as the 1st case... This method yields $J_\nu$ as well, as you have discovered yourself.

But the Bessel equation is of second order, so it must have a second solution independent of $J_\nu$. But where is it?

Actually, in writing $y(x)=x^\nu\big(a_0+a_1x+\cdots\big)$ you have made a very strong assumption: you have assumed that a solution admits such an expansion. This is not obvious, because the Bessel equation is singular at $x=0$. So you can guess that an independent solution may also be singular at $x=0$.

Let $u(x)$ be a solution of the Bessel equation; we do not assume any form of expansion. We have $$xu''+u'+xu=\nu^2 \frac{u}{x}\tag a$$ and of course $$xJ''_\nu+J'_\nu+xJ_\nu=\nu^2\frac{J_\nu}{x},\tag b$$ so if we compute $J_\nu\times(\text a)-u\times(\text b)$ we get $$x(J_\nu u''-uJ''_\nu)+J_\nu u'-uJ'_\nu=0,$$ which we can rewrite as $$\frac{\mathrm d}{\mathrm dx}\Big[x\big(J_\nu u'-uJ'_\nu\big)\Big]=0.$$ This is easily integrated into $$x\big(J_\nu u'-uJ'_\nu\big)=A$$ ($A$ is a constant), which is a first-order differential equation for $u$. Dividing by $xJ_\nu^2$, you get $$\frac{u'}{J_\nu}-u\frac{J'_\nu}{J_\nu^2}=\frac{A}{xJ_\nu^2}.$$ The last step consists in integrating with respect to $x$, which gives $$\frac{u}{J_\nu}=A\int\frac{\mathrm dx}{xJ_\nu(x)^2}+B,$$ i.e. $u(x)=BJ_\nu(x)+AY_\nu(x)$ with $$Y_\nu(x)=J_\nu(x)\int\frac{\mathrm dx}{xJ_\nu(x)^2}\underset{x\to0}{\sim} x^\nu\int\frac{\mathrm dx}{x\,x^{2\nu}}=\left\{\begin{array}{cl}-\frac{1}{2\nu}x^{-\nu}&\text{if }\nu>0,\\ \ln x&\text{if }\nu=0.\end{array}\right.$$
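The conserved quantity $x\big(J_\nu u'-uJ'_\nu\big)$ can be verified numerically. For scipy's standard second solution $Y_\nu$ (which is normalised differently from the unnormalised $Y_\nu$ constructed above, so the constant $A$ takes a specific value) the known Wronskian identity gives the constant $2/\pi$:

```python
from math import pi
from scipy.special import jv, yv, jvp, yvp

# x (J_nu u' - u J_nu') is constant for any solution u of Bessel's equation.
# For scipy's standard Y_nu this constant equals 2/pi (known Wronskian identity);
# the unnormalised Y_nu built above would yield a different constant A.
nu = 1.5
for x in (0.5, 2.0, 7.0):
    w = x * (jv(nu, x) * yvp(nu, x) - yv(nu, x) * jvp(nu, x))
    assert abs(w - 2 / pi) < 1e-10
```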

So you now understand that the solution $Y_\nu$ could not be obtained in the form you had been using. It corresponds to the choice $\alpha<0$ in the indicial equation, and it is not surprising to observe that $Y_\nu$ is singular at $x=0$.
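The singular behaviour at the origin can also be confirmed with scipy's $Y_\nu$; its standard small-$x$ form $Y_\nu(x)\approx-\frac{\Gamma(\nu)}{\pi}\left(\frac{2}{x}\right)^\nu$ for $\nu>0$ (a known limiting form) differs from the $-\frac{1}{2\nu}x^{-\nu}$ obtained above only by the normalisation constant:

```python
from math import pi
from scipy.special import yv, gamma

# Scipy's Y_nu diverges like -(Gamma(nu)/pi) * (2/x)^nu as x -> 0 for nu > 0,
# matching the -x^{-nu}/(2 nu) behaviour above up to a normalisation constant.
nu = 0.5
for x in (1e-4, 1e-6):
    leading = -(gamma(nu) / pi) * (2.0 / x)**nu
    assert abs(yv(nu, x) / leading - 1) < 1e-3
```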

More relations involving Bessel functions can be found in Abramowitz and Stegun, chapter 9.