Division in differential equations when the dividing function is equal to $0$


When dividing two functions: $$h(x)=\frac{f(x)}{g(x)},$$

how do we account for the points at which $g(x)=0$?

An example is when solving a PDE by separation of variables: let $\phi(x,y,z)=X(x)Y(y)Z(z)$; then $$\nabla^2\phi=0\iff YZX''+ZXY''+XYZ''=0.$$ All $\bf{math}$ textbooks, at this step, divide both sides by $XYZ$, leading to $$\frac{X''}{X}+\frac{Y''}{Y}+\frac{Z''}{Z}=0.$$ But none of them explains why this operation is valid when there is no requirement that $X,Y,Z$ be nonzero.
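(To make the worry concrete: a typical separated solution of Laplace's equation, for instance $\phi=\sin(kx)\sin(ly)\,e^{-\sqrt{k^2+l^2}\,z}$, vanishes on the entire planes $x=n\pi/k$ and $y=m\pi/l$, so the set where $XYZ=0$ is generally not empty.)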

Another example is the Sturm–Liouville equation. In some textbooks it has the form $$[p(x)y']'+[q(x)+\lambda r(x)]y=0.$$ However, other textbooks divide both sides by $r(x)$ and rearrange the terms to obtain another form (without any justification for the division): $$\frac{1}{r(x)}\left[(p(x)y')'+q(x)y\right]+\lambda y=0.$$ But we all know that a Sturm–Liouville problem can be singular, which means $r(x)$ may vanish at an endpoint. Obviously, the two forms of the Sturm–Liouville equation above are not algebraically equivalent.
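A standard concrete instance is Bessel's equation on $(0,1]$, written in Sturm–Liouville form as $$(xy')'+\left(\lambda x-\frac{n^2}{x}\right)y=0,$$ with $p(x)=r(x)=x$ and $q(x)=-n^2/x$: here $r$ vanishes at the endpoint $x=0$, so the divided form is simply undefined there.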

There are 5 answers below.

On BEST ANSWER

I totally agree with you. But I think the reason the textbooks do not address the case of a zero denominator is that they want to give you some kind of "conjecture" for the exact formula of the solution. In particular, such methods make sense for nonzero denominators in the following sense: suppose you are looking for differentiable solutions. Then by continuity of the solution (which comes from your assumption), a denominator that is nonzero at a point is also nonzero in a neighborhood of that point, which means that the differential calculations involving that denominator are valid there. Therefore, at the points where the denominator is nonzero, such methods make sense. Of course, I don't think these methods can fully explain the distribution of the zeros.

For completeness, consider for example the ODE $\dot{x}=x^2,\ x(0)=x_0$. Separating variables, you obtain the solution $x=\frac{1}{\frac{1}{x_0}-t}$. But this method is only available for $x\neq 0$. In fact, by the Picard–Lindelöf theorem, whether or not $x_0$ is zero, we know that a unique solution $x$ exists and is defined only on a maximal open interval $(a,b)$. For $x_0\neq 0$, we deduce that $a>-\infty$ or $b<\infty$. For $x_0=0$, we obtain only the trivial solution $x=0$. From this we draw the following conclusion: normally, we first show that a differentiable solution exists, hopefully a unique one. Second, we reconstruct such solutions using the method given in the OP. This is also the general approach in mathematical research: first existence and uniqueness, then the exact expression of the solution.
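To spell out the computation that is being taken on faith: on any interval where $x\neq 0$, separation of variables gives $$\int_{x_0}^{x(t)}\frac{d\xi}{\xi^{2}}=\int_{0}^{t}ds \;\Longrightarrow\; \frac{1}{x_0}-\frac{1}{x(t)}=t \;\Longrightarrow\; x(t)=\frac{1}{\frac{1}{x_0}-t},$$ and the division by $x^2$ is harmless here precisely because, for $x_0\neq 0$, uniqueness prevents the solution from ever touching the equilibrium $x=0$; instead the formula blows up as $t\to 1/x_0$, which is exactly the finite endpoint of the maximal interval.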

Looking back at the ODE $\dot{x}=x^2$, one can also equivalently write the differential equation as $\frac{\dot{x}}{x^2}=1$. But this expression does not make sense at $x=0$, and this too has to be clarified. In some cases, one can work in Lebesgue spaces, which allows singular points on a set of measure zero. But in most cases, mathematicians prefer a product formulation rather than a division. This is in fact the work you should do at the beginning: what kind of solutions you are looking for, what kind of spaces you are working in, etc. With all of these definitions in place, you can pose a well-defined question and look for a well-defined solution.
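As a purely illustrative sketch (not part of the original argument), one can watch a computer algebra system make the same silent division: SymPy's solver returns the one-parameter family obtained by separating variables, and the equilibrium solution $x\equiv 0$ corresponds to no finite value of the integration constant.

```python
# Illustrative sketch: SymPy carries out the same separation of variables,
# i.e. it implicitly divides by x^2, and the resulting family of solutions
# misses the equilibrium x = 0.
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

general = sp.dsolve(x(t).diff(t) - x(t)**2, x(t))
print(general)  # typically Eq(x(t), -1/(C1 + t))

# No finite choice of C1 makes -1/(C1 + t) identically zero, so the trivial
# solution x = 0 must be added by hand -- exactly the case lost by dividing by x^2.
```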

I hope this can help you to understand this problem better.

On

You are right to be concerned about it. It might be that $\phi$ is never zero, in which case none of $X,Y,Z$ can be. An example would be the electrostatic potential of a point charge, where $\phi$ goes to $0$ at infinity but is not zero anywhere. It might be that $\phi$ is zero only at isolated points. In those cases you can often ignore them at the start, find your solution, and verify that it works at those points as well. Sometimes you have to deal with them in a limiting sense.
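In fact, with one standard choice of separation constants the verification at the zeros is immediate: once $X,Y,Z$ solve the separated equations $X''=-k^2X$, $Y''=-l^2Y$, $Z''=(k^2+l^2)Z$, substituting back gives $$YZX''+ZXY''+XYZ''=\left(-k^2-l^2+k^2+l^2\right)XYZ=0$$ at every point, including the points where one of the factors vanishes.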

On

Most of the time, this is left implicit in physics. We just solve the equation without taking those issues into account, and then check whether the result works at the vanishing points.

However, in mathematics, you would need to cut your $x$ interval at the vanishing points, solve on these sub-intervals separately, and then study the behavior of the solution at the cuts to see whether the problem admits a solution valid for all possible values of $x$.
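For instance, in the singular Sturm–Liouville example mentioned in the question, where $r$ vanishes at an endpoint, one works on the open interval where $r>0$ and replaces the usual boundary condition at the singular endpoint by a limiting requirement, typically boundedness of $y$ there.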

On

Your concern is valid, but consider the case in which at least one of the factors in the general form of the solution is zero: this leads to the all-zero trivial solution, which does satisfy the equation. When dealing with the Sturm–Liouville equation or similar problems, together with the conditions the solutions must satisfy, we can postpone the vanishing condition and consider the limiting behavior around the point where the solution becomes zero. To be more precise, this kind of division, carried out without checking whether each factor is zero, is justified when the set of points where at least one factor vanishes is a set of measure zero.

On

Of course, dividing by $XYZ$ assumes that $X(x)\ne 0$, $Y(y)\ne 0$ and $Z(z)\ne 0$, because if any of them vanishes identically then $\phi(x,y,z)\equiv 0$, i.e. the zero solution of the given PDE $\nabla^2\phi=0$.

Zero solutions are not of practical importance and are often discarded. An example to show this is as follows:

Suppose you are required to produce simple harmonic motion with a given pendulum. One way to do this is to just keep watching the pendulum without giving it any initial displacement, which is nothing but the $y=0$ solution of the ODE $y''=-ky$.