Method of Frobenius: Why Is There a Logarithmic Solution?

When solving a problem with the method of Frobenius, if the roots of the indicial equation differ by a natural number $N$, the smaller root does not produce a solution, because there is no value of the $N$th coefficient that satisfies the recurrence. However, there is another solution of the form $$y_2(x) = u(x) - b_N y_1(x)\log x,$$ where $y_1$ is the first solution, $u(x)$ is a Frobenius series obtained with the smaller root, and $b_N$ is the $N$th coefficient of $u(x)$ (Ordinary Differential Equations by Morris Tenenbaum and Harry Pollard). I tried to use reduction of order, but couldn't figure out how that would work.

Best Answer

A good way to understand this (and indeed the actual method that Frobenius employed, rather than the watered-down modern version) is to consider the expansion with a more general power in it than the one that satisfies the indicial equation.

The simplest example is the Euler–Cauchy equation $$ x^2 y'' + pxy'+qy = 0, $$ where $p,q$ are constants. This has a regular singular point at $x=0$. If we try a solution of the form $y_{\alpha} = x^{\alpha}$, we find $$ x^2 y_{\alpha}'' + pxy_{\alpha}'+q y_{\alpha} = f(\alpha)x^{\alpha}, \tag{1} $$ where $f(\alpha) = \alpha(\alpha-1)+p\alpha+q$. So a solution occurs when $f(\alpha)=0$. If $f$ has two distinct roots $\sigma_{\pm}$, then $x^{\sigma_{\pm}}$ are linearly independent solutions. On the other hand, if $f$ has only one root $\sigma$, it can be written in the form $f(\alpha) = (\alpha-\sigma)^2$, so $f'(\sigma)=0$ as well; this suggests that we can find a second solution by looking at $\partial y_{\alpha}/\partial \alpha$. Indeed, since the differential operator does not depend on $\alpha$, we can differentiate $(1)$ with respect to $\alpha$ and change the order of differentiation, which gives $$ \left(x^2 \frac{d^2}{dx^2} + px \frac{d}{dx} +q \right) \left.\frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma} = (f(\sigma)\log{x}+f'(\sigma))x^{\sigma} = 0. $$ Of course, $$ \left.\frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma} = x^{\sigma}\log{x}, $$ so $x^{\sigma}\log{x}$ is the second solution.
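This is easy to check by machine. Here is a minimal sympy sketch (the values $p=-1$, $q=1$ are my own choice, picked to force a double root) verifying that both $x^{\sigma}$ and $x^{\sigma}\log x$ are annihilated by the Euler–Cauchy operator:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p, q = -1, 1  # chosen so f(alpha) = alpha**2 + (p - 1)*alpha + q = (alpha - 1)**2: double root sigma = 1

# the Euler-Cauchy operator x^2 y'' + p x y' + q y
L = lambda y: x**2*sp.diff(y, x, 2) + p*x*sp.diff(y, x) + q*y

y1 = x            # x**sigma
y2 = x*sp.log(x)  # (d/d alpha) x**alpha, evaluated at alpha = sigma

print(sp.simplify(L(y1)), sp.simplify(L(y2)))  # prints: 0 0
```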


The same idea can be used for equations where $p$ and $q$ are not constant. Given the differential equation $$ Ly = x^2 y'' + p(x)xy'+q(x)y = 0, $$ where $p,q$ have power series expansions $\sum_{k=0}^{\infty} p_k x^k$ and $\sum_{k=0}^{\infty} q_k x^k$ respectively, the action on a general power is $$ L x^{\alpha} = f(\alpha,x)x^{\alpha}, $$ where $f$ is analytic in $x$: it has an expansion $$ f(\alpha,x) = \sum_{k=0}^{\infty} f_k(\alpha)x^{k}. $$ In particular, $f_0(\alpha) = \alpha(\alpha-1)+p_0\alpha+q_0$ and $f_k(\alpha) = p_k\alpha+q_k$ for $k>0$. We now insert the Frobenius series $y_{\alpha}(x) = \sum_{k=0}^{\infty} a_k x^{k+\alpha}$, and find $$ Ly_{\alpha}(x) = \sum_{k=0}^{\infty} f(k+\alpha,x)a_k x^{k+\alpha} = \sum_{k=0}^{\infty} \Big( f_0(k+\alpha)a_k + f_1(k+\alpha-1)a_{k-1} + \dotsb + f_k(\alpha)a_0 \Big)x^{k+\alpha}. $$ For this to hold throughout a region, the coefficient of every power of $x$ must vanish, which gives the system $$ f_0(\alpha)a_0 = 0 \\ f_0(\alpha+1)a_1 + f_1(\alpha)a_0 = 0 \\ f_0(\alpha+2)a_2 + f_1(\alpha+1)a_1 + f_2(\alpha)a_0 = 0 \\ \vdots $$ The first of these is of course the indicial equation. Satisfying it means picking a root, and hence an exponent $\sigma$. But we can see that if $f_0(\sigma+s)=0$ for a positive integer $s$, the recurrence relations may break down.
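As an illustration, here is a short sympy sketch that solves this triangular system for the $a_k(\alpha)$ as rational functions of $\alpha$ (the function name `frobenius_coeffs` and the normalization $a_0=1$ are my own choices, not anything standard):

```python
import sympy as sp

alpha = sp.symbols('alpha')

def frobenius_coeffs(p_series, q_series, n):
    """Return [a_0, ..., a_n] as rational functions of alpha, from the recurrence
    f_0(alpha+k) a_k + f_1(alpha+k-1) a_{k-1} + ... + f_k(alpha) a_0 = 0, with a_0 = 1.
    p_series, q_series list the Taylor coefficients of p(x), q(x)."""
    P = list(p_series) + [0]*(n + 1 - len(p_series))
    Q = list(q_series) + [0]*(n + 1 - len(q_series))
    f = lambda j, t: t*(t - 1) + P[0]*t + Q[0] if j == 0 else P[j]*t + Q[j]
    a = [sp.Integer(1)]
    for k in range(1, n + 1):
        a.append(sp.cancel(-sum(f(j, alpha + k - j)*a[k - j]
                                for j in range(1, k + 1)) / f(0, alpha + k)))
    return a

# e.g. Bessel's equation of order nu: p(x) = 1, q(x) = -nu^2 + x^2
nu = sp.symbols('nu')
for k, ak in enumerate(frobenius_coeffs([1], [-nu**2, 0, 1], 4)):
    print(k, ak)
```

For Bessel's equation this prints $a_2(\alpha) = -1/((\alpha+2)^2-\nu^2)$ and so on; the poles of the $a_k$ at values of $\alpha$ where $f_0(\alpha+k)=0$ are exactly the breakdowns mentioned above.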

Suppose that all but the first equation are satisfied, while $\alpha$ is still general. The coefficients are then rational functions of $\alpha$, $a_k = a_k(\alpha)$, and $$ Ly_{\alpha} = f_0(\alpha)a_0 x^{\alpha}. $$

We now have the well-known three cases:

  • $f_0(\alpha)$ has two roots that do not differ by an integer. Both ordinary Frobenius solutions work fine.

  • $f_0(\alpha)$ has a double root $\sigma$. One solution is $y_{\sigma}(x)$. The same trick as for the Euler–Cauchy equation finds the other: differentiating with respect to $\alpha$ gives $$ L \left. \frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma} = \big(f_0(\sigma)(a_0'(\sigma)+a_0(\sigma)\log{x})+f_0'(\sigma)a_0(\sigma)\big)x^{\sigma} = 0, $$ since $f_0(\sigma)=f_0'(\sigma)=0$. So a second solution is given by $$ \left. \frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma} = \sum_{k=0}^{\infty} (a_k(\sigma)\log{x}+a_k'(\sigma))x^{k+\sigma} = y_{\sigma}(x)\log{x} + \sum_{k=0}^{\infty} a_k'(\sigma) x^{k+\sigma}. $$

  • $f_0(\alpha)$ has two roots $\sigma$ and $\sigma-s$, $s \in \{1,2,3,\dotsc\}$. $y_{\sigma}(x)$ is one solution. The other is found in a similar manner, but we have to arrange for the $k=s$ recurrence relation $$f_0(\alpha+s)a_s + f_1(\alpha+s-1)a_{s-1}+\dotsb+f_s(\alpha)a_0 = 0 \tag{2} $$ to be satisfied at $\alpha=\sigma-s$, where the coefficient $f_0(\sigma)$ of $a_s$ vanishes. A way to do this is to choose $a_0 = f_0(\alpha+1)f_0(\alpha+2)\dotsm f_0(\alpha+s)a$, where $a$ is constant: the factor $f_0(\alpha+s)$ makes $a_0,\dotsc,a_{s-1}$ all vanish at $\alpha=\sigma-s$, so $(2)$ is satisfied, and it also cancels the pole that the $a_k(\alpha)$ with $k\geq s$ would otherwise have there. A side effect, which one can prove, is that with this $a_0$, $y_{\sigma-s}$ is proportional to $y_{\sigma}$, so it is not an independent solution. But we have $$ Ly_{\alpha} = f_0(\alpha)a_0(\alpha)x^{\alpha} = (\alpha-\sigma)(\alpha-\sigma+s)^2 g(\alpha)x^{\alpha}, $$ where $g(\sigma-s) \neq 0$. Then differentiating with respect to $\alpha$ gives $$ L\left. \frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma-s} = \left. (\alpha-\sigma+s)(\dots)x^{\alpha} \right|_{\alpha=\sigma-s}, $$ where the second bracket is analytic in $\alpha$, and hence the right-hand side vanishes and $$\left. \frac{\partial y_{\alpha}}{\partial \alpha} \right|_{\alpha=\sigma-s} = \dotsb = Ay_{\sigma}(x)\log{x} + \sum_{k=0}^{\infty} a_k'(\sigma-s) x^{k+\sigma-s}$$ is a second, linearly independent solution; $A$ is the constant of proportionality between $y_{\sigma-s}$ and $y_{\sigma}$. We stress again that in this case, $a_0(\alpha)$ has been chosen to have a particular form so that $a_0(\sigma-s)=0$; the form chosen normally makes the calculations simplest. Choosing a different function of $\alpha$ that vanishes at $\sigma-s$ will give a series proportional to this one. (A concrete verification of this construction is sketched just after this list.)
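To make the third case concrete, here is a hedged sympy sketch using Bessel's equation of order $1$ (my choice of example, not from the text above), where $f_0(\alpha)=\alpha^2-1$, $\sigma=1$, $s=2$. It builds the $a_k(\alpha)$ with the special $a_0(\alpha)=f_0(\alpha+1)f_0(\alpha+2)$ (taking the free constant $a=1$), differentiates term by term at $\alpha=\sigma-s=-1$, and checks that the resulting logarithmic series satisfies the equation up to the truncation order:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a = sp.symbols('alpha')

# Bessel's equation of order 1: x^2 y'' + x y' + (x^2 - 1) y = 0,
# so f_0(alpha) = alpha^2 - 1, with roots sigma = 1 and sigma - s = -1 (s = 2).
f0 = lambda t: t**2 - 1
N = 6
c = [f0(a + 1)*f0(a + 2)] + [sp.Integer(0)]*N   # a_0(alpha) = f_0(alpha+1) f_0(alpha+2)
for k in range(2, N + 1, 2):                    # odd coefficients stay zero here
    c[k] = sp.cancel(-c[k - 2]/f0(a + k))       # recurrence f_0(alpha+k) a_k + a_{k-2} = 0

# second solution: (d/d alpha) sum_k a_k(alpha) x^(k+alpha), evaluated at alpha = -1
y2 = sum((sp.diff(ck, a) + ck*sp.log(x)).subs(a, -1)*x**(k - 1)
         for k, ck in enumerate(c))

L = lambda y: x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + (x**2 - 1)*y
print(sp.collect(sp.expand(L(y2)), sp.log(x)))  # only the x**7 truncation tail survives
```

Note that the lowest term of this $y_2$ is $a_0'(-1)x^{-1} = -2/x$, so the second solution genuinely starts at the smaller exponent, even though $y_{\sigma-s}$ itself collapsed onto a multiple of $y_{\sigma}$.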

Once one knows what form the series can take, it is normally easier to substitute the appropriate form in and derive conditions on the coefficients, rather than carrying out the differentiation, but I think this is a good theoretical reason for the forms that occur. (In the last case, if we look for a series of the form $\sum_{k\geq 0} (A_k\log{x}+B_k)x^{k+\sigma-s}$, an ambiguity occurs in the recurrence relation for $B_s$, since the coefficient of $B_s$ vanishes identically; this amounts to the freedom to add on a multiple of $y_{\sigma}$. A sketch of this substitution approach is given below.) This exposition can be found, with examples, in Forsyth's Theory of Differential Equations, pp. 243–258.
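For completeness, here is a sketch of that substitution approach, again on Bessel's equation of order $1$ (so $\sigma-s=-1$, $s=2$; the example and the symbol names are mine). Substituting the ansatz and collecting powers of $x$ with and without $\log x$ shows $A_0$ being forced to zero and $B_2$ dropping out of its own recurrence relation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = 5
A = sp.symbols('A0:5')  # coefficients of the log part
B = sp.symbols('B0:5')  # coefficients of the non-log part

# ansatz sum_k (A_k log x + B_k) x^(k + sigma - s), with sigma - s = -1
y = sum((A[k]*sp.log(x) + B[k])*x**(k - 1) for k in range(n))
Ly = sp.expand(x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + (x**2 - 1)*y)

logpart = Ly.coeff(sp.log(x))            # Ly is linear in log x
rest = sp.expand(Ly - logpart*sp.log(x))
for m in (-1, 0, 1):
    print('x^%2d log x:' % m, logpart.coeff(x, m), '= 0')
    print('x^%2d       :' % m, rest.coeff(x, m), '= 0')
```

The $x^{-1}$ equation reads $-2A_0=0$, and the $x^{1}$ equation reads $B_0 + 2A_2 = 0$: $B_2$ has vanished from its own relation, which is exactly the freedom to add a multiple of $y_{\sigma}$.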