Confusion about ODE


So I am in a class for ODEs, and for me it is moving a bit quickly. I am a year behind most of the class, but that's not anything rare. I am feeling very stumped on something now. Usually I am able to follow along and understand, but there is something in this class (series solutions) that I am having difficulty with.

I hope you guys can bear with the extra bit of text, as I want to be able to adequately explain what I am asking/looking for.

So, to get the first part out of the way, my issue is not with any of the basics of series; that is, I am fine with what series are, index shifts, convergence/divergence, etc.

Also, just for reference, these are the equations I am often referring to:

(1) $$P(x)y''+Q(x)y'+R(x)y=0$$ in the neighbourhood of a singular point $x_0$,

and the general Euler equation $$x^2y''+\alpha x y'+\beta y=0.$$
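(To keep things straight for myself: substituting $y = x^r$ into the Euler equation shows where a polynomial in $r$ comes from, since

$$x^2\big(r(r-1)x^{r-2}\big) + \alpha x\big(r x^{r-1}\big) + \beta x^r = \big[r(r-1)+\alpha r+\beta\big]x^r = 0,$$

so for $x > 0$ the admissible exponents are the roots of $r(r-1)+\alpha r+\beta=0$.)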

I was following along well, and I started feeling a bit more confused when we started to talk about Euler equations and solutions near a singular point.

For example, whenever there is a second-order linear ODE with non-constant coefficients, are we supposed to use series methods?

For a second order ODE with solutions near ordinary points, I think I am okay. I just use the same method of supposing $$y=\sum_{n=0}^{\infty}a_nx^{n},$$ then compute $y'$ and $y''$, substitute back into my equation, and try to find some recursion relation.
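To convince myself the ordinary-point recipe works, I checked a toy example (my own example, not from the book): for $y''+y=0$ the substitution gives the recursion $a_{n+2} = -a_n/\big((n+2)(n+1)\big)$, and with $a_0=1$, $a_1=0$ the series rebuilds $\cos x$. A quick sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Toy check: y'' + y = 0 has an ordinary point at x = 0.
# Substituting y = sum a_n x^n gives a_{n+2} = -a_n / ((n+2)(n+1)).
a = [sp.Integer(1), sp.Integer(0)]   # pick a_0 = 1, a_1 = 0 (the cos branch)
for n in range(7):                   # generate a_2 .. a_8
    a.append(-a[n] / ((n + 2) * (n + 1)))

y = sum(a[n] * x**n for n in range(len(a)))

# The truncated series should agree with the Taylor expansion of cos(x):
print(sp.expand(y - sp.series(sp.cos(x), x, 0, 9).removeO()))  # -> 0
```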

But here is where I am lost and looking for any help: when we began to talk about series solutions near singular points. What I believe, though I do not know if it is a correct belief, is that this case occurs when we have an equation of a form similar to Euler's, but with something other than just $x^2$ in front of the $y''$. So in this case we divide everything by the coefficient of $y''$ and multiply by $x^2$.

I also understand how to check whether a singular point is regular or not by taking limits, etc.

But I am just completely lost on the topic of the 'indicial equation'. That is, in regard to $$L[y]=x^2y''+x[xp(x)]y'+[x^2q(x)]y=0$$ where $$xp(x)=\sum_{n=0}^{\infty}p_nx^{n}$$ and $$x^2q(x)=\sum_{n=0}^{\infty}q_nx^{n},$$ the book says we now seek a solution of the form $$y=\phi (r,x)=x^r\sum_{n=0}^{\infty}a_nx^{n}$$ where $a_0 \neq 0$. Then there is some more substituting, which I don't really understand, and then it says,

or in another form $$L[\phi](r,x)=a_0F(r)x^{r}+\sum_{n=1}^{\infty}\left(F(r+n)a_n+\sum_{k=0}^{n-1}a_k[(r+k)p_{n-k}+q_{n-k}]\right)x^{r+n}=0$$ where $$F(r)=r(r-1)+p_0r+q_0.$$
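(As far as I can tell, and please correct me if this is wrong, the point of this form is that the lowest power $x^r$ appears only in the first term, so since $a_0 \neq 0$ its coefficient can only vanish if

$$F(r) = r(r-1) + p_0 r + q_0 = 0,$$

which is the indicial equation; its roots $r_1, r_2$ are the only exponents allowed in the ansatz.)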

and eventually reaches the conclusion that $$y_1(x)=x^{r_1}\left[1+\sum_{n=1}^{\infty}a_n(r_1)x^n\right], \quad x \gt 0,$$ and the same for $y_2(x)$ but with $r_2$.

I am just having so much difficulty understanding what this means. What is different now that we have introduced a new type of solution? The thing is, my book has no examples of this or solutions. I really wish I could see at least one worked example going through this process.

I hope what I am saying makes some sense; again, I apologize for the long text, and I understand how trivial these things are to many here. It is probably something I should have been able to understand, and maybe it is very simple but over my head.

Edit: I have been working on some of it, and I am wondering if my understanding from the book is correct.

If we were to consider say $$2x^{2}y''-xy'+(1+x)y=0$$

and take note that $x=0$ is a regular singular point, with limits giving $xp(x) \to -1/2$ and $x^{2}q(x) \to 1/2$, then how from this does the book conclude that the corresponding Euler equation is $2x^{2}y''-xy'+y=0$? Did it just take $x^{2}y''-\frac{1}{2}xy'+\frac{1}{2}y=0$ and multiply by 2, which changes nothing?
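To check my own arithmetic here, I wrote a small sympy sketch (my own check, not from the book) that computes the two limits and solves the indicial equation for this example:

```python
import sympy as sp

x, r = sp.symbols('x r')

# 2*x**2*y'' - x*y' + (1 + x)*y = 0, divided through by 2*x**2:
p = -x / (2 * x**2)        # coefficient of y'
q = (1 + x) / (2 * x**2)   # coefficient of y

p0 = sp.limit(x * p, x, 0)       # -> -1/2
q0 = sp.limit(x**2 * q, x, 0)    # -> 1/2
print(p0, q0)  # both finite, so x = 0 is a regular singular point

# Indicial equation F(r) = r*(r-1) + p0*r + q0 = 0
roots = sp.solve(r * (r - 1) + p0 * r + q0, r)
print(roots)   # roots 1/2 and 1
```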

Anyway, if anyone can provide insight, comments, suggestions, or help in regard to it all, that would be very nice. Thank you for your time.

On BEST ANSWER

Starting with your Euler equation

\begin{align} x^{2} y'' + \alpha x y' + \beta y &= 0 \\ \implies y'' + \frac{\alpha}{x} y' + \frac{\beta}{x^{2}} y &= 0, \end{align}

we can see that our ODE will be undefined at $x = 0$ unless

$$\frac{\alpha}{x} \tag{1}$$

and

$$\frac{\beta}{x^{2}} \tag{2}$$

are analytic at $x = 0$. For example, if $\alpha$ and $\beta$ are just nonzero constants, then $(1)$ and $(2)$ are undefined at $x = 0$.
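As a concrete sanity check (a sympy sketch with $\alpha = 1$, $\beta = -1$ as an arbitrary choice of constants): the Euler equation $x^2 y'' + x y' - y = 0$ has characteristic roots $r = \pm 1$, so $x$ and $1/x$ should both be solutions.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Euler equation with alpha = 1, beta = -1 (arbitrary choice):
# x^2 y'' + x y' - y = 0; here r(r-1) + r - 1 = r^2 - 1 = 0, so r = 1 or r = -1.
ode = sp.Eq(x**2 * y(x).diff(x, 2) + x * y(x).diff(x) - y(x), 0)
print(sp.dsolve(ode, y(x)))  # a combination of x and 1/x

# Verify both candidate solutions directly:
for f in (x, 1/x):
    residual = x**2 * sp.diff(f, x, 2) + x * sp.diff(f, x) - f
    print(sp.simplify(residual))  # -> 0
```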

So the Frobenius method is a way to find a power series solution to those differential equations that have a singularity at some point.

The Frobenius method is almost exactly the same as finding a normal power series solution, except that you need to solve one extra equation. If you look at the example on the Wikipedia page (the ODE there is $z^{2} f'' - z f' + (1 - z) f = 0$), you can see that there is a singularity/pole/undefined point at $z = 0$. So we take a power series ansatz as normal, but instead of assuming $\sum_{k} A_{k} z^{k}$, we take an ansatz of the form $\sum_{k} A_{k} z^{r + k}$. Substituting the series in and cleaning up (as you would for a 'normal' power series solution) gives

$$\underbrace{ (r - 1)^{2} A_{0} z^{r - 2} }_{\text{(1)}} + \underbrace{ \sum_{k = 1}^{\infty} \left[ (k + r - 1)^{2} A_{k} - A_{k - 1} \right] z^{r + k - 2} }_{\text{(2)}} = 0$$

Now, we can see that there are two equations we are required to solve. As the coefficient of each power of $z$ must equal zero, we must have that $(1)$ and $(2)$ both vanish individually. Solving the indicial equation $(1)$ (this is the only part that differs from solving with a 'normal' power series)

$$F(r) = (r - 1)^{2} = 0$$

gives us a value for $r$: $r = 1$. We then substitute that value into $(2)$ to get

\begin{align} (k + 1 - 1)^{2} A_{k} - A_{k - 1} &= k^{2} A_{k} - A_{k - 1} = 0 \\ \implies A_{k} &= \frac{A_{k - 1}}{k^{2}}, \end{align}

which is exactly what you would do for a 'normal' power series solution. You then solve for your coefficients as you would usually and get your power series.
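You can sanity-check the whole procedure with sympy (a sketch, assuming the Wikipedia example ODE $z^2 f'' - z f' + (1-z) f = 0$ above and the normalization $A_0 = 1$): with $r = 1$ the recurrence gives $A_k = 1/(k!)^2$, and substituting the truncated series back into the ODE leaves only the tail term caused by truncation.

```python
import sympy as sp

z = sp.symbols('z')
N = 6

# Recurrence from the indicial root r = 1: A_k = A_{k-1} / k^2, with A_0 = 1
A = [sp.Integer(1)]
for k in range(1, N + 1):
    A.append(A[k - 1] / k**2)   # gives A_k = 1 / (k!)^2

# Truncated Frobenius series f = z^r * sum A_k z^k, with r = 1
f = sum(A[k] * z**(k + 1) for k in range(N + 1))

# Substitute into z^2 f'' - z f' + (1 - z) f; every term cancels except
# the tail left over from truncating the series at k = N:
residual = sp.expand(z**2 * sp.diff(f, z, 2) - z * sp.diff(f, z) + (1 - z) * f)
print(residual)  # -> -z**8/518400, i.e. -A_N * z**(N+2)
```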

If you want some practice, try these (maybe the Bessel ODE first, then the Chebyshev, though the Chebyshev ODE has its power series solution given on the site). Alternatively, search through the Math Stack Exchange pages for some worked examples to practice with.