Second order linear ODEs and analytic coefficients


A second order, linear, homogeneous ODE can be expressed as:

$$a(x)y''(x) + b(x)y'(x) + c(x)y(x) = 0$$

I have only a basic knowledge of differential equations. However, all the texts state that, in order to find a power-series solution around a specific point $x = x_0$, the functions

$\displaystyle \frac{b(x)}{a(x)}$ and $\displaystyle \frac{c(x)}{a(x)}$

must be analytic at $x_0$.

For example, from this document:

A linear, analytic equation is one for which the coefficient functions are analytic and therefore possess convergent power-series expansions [...]. The simple conclusion will be that the solutions also possess convergent power-series expansions.

Why, if the coefficient functions are analytic, is there a guarantee that the solution is analytic too, and vice versa?

They are only the coefficients of the unknown function $y(x)$ in that equation, not the function $y(x)$ itself, so how can they affect the function so heavily? If, for example, $a(x_0) = 0$, how can this also determine the form of $y(x)$ at $x_0$? I can't find the link between these two facts.


Best answer

Given a second order ODE

$$ a(x)y''(x)+b(x)y'(x)+c(x)y(x)=0 \tag{1} \label{eq1} $$

We can introduce a new variable $z(x)=y'(x)$ and reduce the equation to a first order system (the same idea works for higher order equations):

$$ \begin{align} y'(x)-z(x)&=0 \\ a(x)z'(x)+b(x)z(x)+c(x)y(x)&=0 \\ \end{align}\tag{2} \label{eq2} $$

We can write the system in a matrix-vector form

$$ \mathbf{A}(x)\cdot\vec{Y}'(x)+\mathbf{B}(x)\cdot\vec{Y}(x)=0 \tag{3} \label{eq3} $$

where

$$ \mathbf{A}(x)=\left(\begin{matrix} 1 & 0 \\ 0 & a(x) \\ \end{matrix}\right) \tag{4} \label{eq4} $$

$$ \mathbf{B}(x)=\left(\begin{matrix} 0 & -1 \\ c(x) & b(x) \\ \end{matrix}\right)\tag{5} \label{eq5} $$

$$ \vec{Y}(x)=\left(\begin{matrix} y(x) \\ z(x) \\ \end{matrix}\right) \tag{6} \label{eq6} $$
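As a concrete sanity check of this reduction, here is a minimal numerical sketch, assuming the sample equation $y'' - xy = 0$ (Airy's equation, so $a = 1$, $b = 0$, $c = -x$, all analytic with $a$ nonvanishing), integrated with scipy's `solve_ivp`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the reduction (2), assuming the sample equation
# y'' - x*y = 0 (Airy), i.e. a(x) = 1, b(x) = 0, c(x) = -x.
def rhs(x, Y):
    y, z = Y              # Y = (y, z) = (y, y') as in (6)
    return [z, x * y]     # y' = z,  z' = -(b(x)*z + c(x)*y)/a(x) = x*y

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])       # y(2) for the initial data y(0) = 1, y'(0) = 0
```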

If $a(x)\neq 0$ on the integration domain, then $\mathbf{A}$ is invertible, with $\mathbf{A}^{-1}$ given by

$$ \mathbf{A}^{-1}(x)=\left(\begin{matrix} 1 & 0 \\ 0 & \frac{1}{a(x)} \\ \end{matrix}\right) \tag{7}\label{eq7} $$

Using $\eqref{eq7}$ we can rewrite $\eqref{eq3}$ in the following way: $$ \vec{Y}'(x)=-\mathbf{A}^{-1}(x)\cdot\mathbf{B}(x)\cdot\vec{Y}=-\mathbf{C}(x)\cdot\vec{Y} \tag{8} \label{eq8} $$ where

$$ \mathbf{C}(x)=\left(\begin{matrix} 0 & -1 \\ \frac{c(x)}{a(x)} & \frac{b(x)}{a(x)} \\ \end{matrix}\right). \tag{9}\label{eq9} $$ Notice that $\mathbf{C}$ contains the ratios of the coefficients. If $\mathbf{C}(x_1)\cdot\mathbf{C}(x_2)=\mathbf{C}(x_2)\cdot\mathbf{C}(x_1)$ for all $x_1,x_2\in D(\mathbf{C})$, the solution to $\eqref{eq8}$ is similar to the first order scalar case:

$$ \vec{Y}=\mathrm{e}^{-\int \mathbf{C}(x)\,\mathrm{d}x}\,\vec{K} \tag{10} \label{eq10} $$ where we use the matrix exponential and $\vec{K}$ is a constant vector fixed by the initial conditions. The commutativity condition holds, for example, when $\frac{c(x)}{a(x)}=k_1$ and $\frac{b(x)}{a(x)}=k_2$ are constants. A general solution is given by the Magnus expansion (see ref. 1 and ref. 2).
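As an illustration of $\eqref{eq10}$, here is a small numerical sketch, assuming the constant-coefficient equation $y'' + 3y' + 2y = 0$ (so $b/a = 3$ and $c/a = 2$, and $\mathbf{C}$ trivially commutes with itself); it compares the matrix-exponential solution with the exact solution $\mathrm{e}^{-x}$ for the initial data $y(0)=1$, $y'(0)=-1$:

```python
import numpy as np
from scipy.linalg import expm

# Constant-coefficient sketch of (10), assuming y'' + 3y' + 2y = 0,
# so C = [[0, -1], [2, 3]] is constant.
C = np.array([[0.0, -1.0],
              [2.0,  3.0]])
Y0 = np.array([1.0, -1.0])    # y(0) = 1, y'(0) = -1  =>  y(x) = e^{-x}

for xv in np.linspace(0.0, 2.0, 5):
    Y = expm(-C * xv) @ Y0    # matrix-exponential solution from (10)
    print(f"x = {xv:.2f}   y = {Y[0]:.6f}   exact = {np.exp(-xv):.6f}")
```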

Perhaps it is easier to analyze the effect of the coefficients for a first order equation: $$ y'=-\alpha(x)y $$

the solution in this case is

$$ y=\kappa\mathrm{e}^{-\int \alpha(x)\mathrm{d}x} $$

Therefore, the solution is the composition of $\int \alpha(x)\mathrm{d}x$ and the exponential function. Given a $C^{n}$ function at $x_0$, its integral is $C^{n+1}$. For example, consider the following step-like function: $$ \alpha(x)=\left\{\begin{matrix} 0 & x<0 \\ 1 & x\geq 0 \\ \end{matrix}\right. $$

It is discontinuous at $x = 0$; its integral over $(-\infty,x]$ is nevertheless a $C^0$ function (i.e. a continuous function):

$$ \int_{-\infty}^x\alpha(s)\,\mathrm{d}s=\left\{\begin{matrix} 0 & x<0 \\ x & x\geq 0 \\ \end{matrix}\right. $$ which passes through $x=0$ continuously, although it is not differentiable there.
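The smoothing effect is easy to see numerically; below is a minimal sketch (plain NumPy with a simple trapezoidal rule) comparing the jump in $\alpha$ with the continuity of its integral:

```python
import numpy as np

# Sketch: integration smooths the step coefficient alpha by one degree of
# regularity; alpha jumps at x = 0 but its integral is continuous there.
alpha = lambda s: np.where(s >= 0.0, 1.0, 0.0)

def integral(x, lo=-5.0, n=200001):
    # trapezoidal approximation of the integral of alpha over [lo, x]
    s = np.linspace(lo, x, n)
    v = alpha(s)
    return float(np.sum((v[1:] + v[:-1]) * np.diff(s)) / 2.0)

for x in (-1.0, -0.001, 0.0, 0.001, 1.0):
    print(f"x = {x:+.3f}   alpha = {float(alpha(x)):.0f}   integral = {integral(x):.6f}")
```

Regarding the questions: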

  1. Why, if the coefficient functions are analytic, is there a guarantee that the solution is analytic too, and vice versa? Given that the solution $\eqref{eq10}$ is built by integrating the matrix $\mathbf{C}$, you need the ratios of the coefficients to be analytic in order to get an analytic solution (note that analyticity is stronger than being $C^{\infty}$: the Taylor series must also converge to the function).

  2. They are only the coefficients of the unknown function $y(x)$ in that equation, not the function $y(x)$ itself, so how can they affect the function so heavily? Given the solution $\eqref{eq10}$, we see that the coefficients fully determine the solution, up to the constants fixed by the initial conditions. Therefore, they strongly affect the resulting solution: they are, quite literally, part of it.

  3. If for example $a(x_0)=0$, how can this determine also the form of $y(x)$ at $x_0$? In that case you find a singularity at $x_0$, and in general the solution will not be analytic there (it may be $C^n$ at best, or even blow up), as the sketch after this list illustrates.
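To see the singular case concretely, here is a sympy sketch, assuming the Euler equation $x^2 y'' + x y' - y = 0$ (so $a(x) = x^2$ vanishes at $x_0 = 0$, and $b/a = 1/x$, $c/a = -1/x^2$ fail to be analytic there):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Singular-point sketch, assuming the Euler equation x**2*y'' + x*y' - y = 0:
# a(x) = x**2 vanishes at x0 = 0, so b/a = 1/x and c/a = -1/x**2 are singular.
ode = sp.Eq(x**2 * y(x).diff(x, 2) + x * y(x).diff(x) - y(x), 0)
print(sp.dsolve(ode, y(x)))   # y(x) = C1/x + C2*x
```

The $C_1/x$ branch is not even continuous at $x_0 = 0$, so generic initial data do not lead to a solution that is analytic there.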

Another answer

It's easier to see if we take it down to first order. Let's look at $$ a(x)y'(x) + b(x)y(x) = 0$$

To solve this equation, we can simply separate the variables and integrate, so $$ \int\frac{y'(x)}{y(x)}\text dx = c - \int \frac{b(x)}{a(x)}\text dx \\ \Rightarrow y(x) = C\exp\left( \int f(x)\text dx \right) = C\exp(F(x)) $$ where $f(x) = -\frac{b(x)}{a(x)}$ and $F'(x) = f(x)$. The question here is: when is the solution $y(x)$ an analytic function at some given point $x_0$? Well, a function analytic at $x_0$ is one whose Taylor series at $x_0$ converges to it on a neighbourhood; in particular, all of its derivatives must exist at $x_0$, so let's find the derivatives of $y$. The expression for the $n$-th derivative of $\exp(F(x))$ is

$$\frac{\text d^n \exp(F(x))}{\text dx^n}\biggr|_{x=x_0} = \exp(F(x_0)) \sum_{k=0}^n \frac{1}{k!} \sum_{j=0}^k (-1)^j \binom{k}{j}F(x_0)^j \frac{\text d^n F(x)^{k-j}}{\text dx^n}\biggr|_{x=x_0} $$
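As a quick sanity check of this formula, here is a sympy sketch, assuming the sample choices $F(x)=\sin x$, $x_0 = 7/10$ and $n = 3$ (the printed difference should be $0$):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)                  # sample analytic F, an assumption for the check
x0 = sp.Rational(7, 10)
n = 3

# Left-hand side: the n-th derivative of exp(F(x)) evaluated at x0.
lhs = sp.diff(sp.exp(F), x, n).subs(x, x0)

# Right-hand side: the double sum from the formula above.
rhs = sp.exp(F.subs(x, x0)) * sum(
    (-1)**j * sp.binomial(k, j) / sp.factorial(k)
    * F.subs(x, x0)**j * sp.diff(F**(k - j), x, n).subs(x, x0)
    for k in range(n + 1) for j in range(k + 1))

print(sp.simplify(lhs - rhs))  # expect 0
```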

The only time the derivative formula above fails to exist for some $n$ is if the $p$-th derivative of $F$ doesn't exist for some $0\leq p\leq n$. In other words, $\exp(F(x))$ is analytic iff $F(x)$ is (for the converse direction, note that $\exp(F)$ never vanishes, so locally $F = \log(\exp(F))$ is a composition of analytic functions). Since $F$ is an antiderivative of $f$, $F$ is analytic iff $f$ is analytic (power series can be integrated and differentiated term by term), and recall that $f = -\frac{b(x)}{a(x)}$. Thus,

$$ y(x) \text{ is analytic } \Leftrightarrow \frac{b(x)}{a(x)} \text{ is analytic. } $$
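To make the conclusion concrete, here is a short sympy sketch, assuming the sample coefficients $a(x) = 1 + x^2$ (analytic and nonvanishing) and $b(x) = x$, so that $b/a$ is analytic everywhere:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# First-order sketch, assuming a(x) = 1 + x**2 (analytic, never zero), b(x) = x.
a, b = 1 + x**2, x
sol = sp.dsolve(sp.Eq(a * y(x).diff(x) + b * y(x), 0), y(x))
print(sol)   # y(x) = C1/sqrt(x**2 + 1), analytic on the whole real line
```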

The same idea is true for higher order equations; the only issue is that "integrating" higher order equations isn't as simple as actually integrating a function as it was in the first order case. However, making use of the Picard iteration, we can show the exact same thing for higher order equations (albeit with a lot more work).
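To give a flavour of this, here is a symbolic Picard-iteration sketch, assuming the model problem $y' = -xy$, $y(0) = 1$ (an assumption; its exact solution is $\mathrm{e}^{-x^2/2}$); the iterates reproduce the Taylor series of the solution term by term:

```python
import sympy as sp

x, s = sp.symbols('x s')

# Picard iteration sketch, assuming the model problem y' = -x*y, y(0) = 1.
y = sp.Integer(1)                          # y_0(x) = y(0) = 1
for _ in range(4):
    # y_{n+1}(x) = y(0) + \int_0^x (-s) * y_n(s) ds
    y = 1 - sp.integrate(s * y.subs(x, s), (s, 0, x))

print(sp.expand(y))                        # 1 - x**2/2 + x**4/8 - x**6/48 + ...
print(sp.exp(-x**2 / 2).series(x, 0, 9))   # the same Taylor coefficients
```

Each iteration adds one correct term of the power series, which hints at why analytic coefficients produce analytic solutions.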