When dividing two functions: $$h(x)=\frac{f(x)}{g(x)},$$
how do we account for the points at which $g(x)=0$?
An example arises when solving a PDE by separation of variables: let $\phi(x,y,z)=X(x)Y(y)Z(z)$; then $$\nabla^2\phi=0\iff YZX''+ZXY''+XYZ''=0.$$ All **math** textbooks, at this step, divide both sides by $XYZ$, leading to $$\frac{X''}{X}+\frac{Y''}{Y}+\frac{Z''}{Z}=0,$$ but none of them explains why this operation is valid when nothing requires $X,Y,Z$ to be nonzero.
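To separate the two steps involved, here is a small SymPy sketch (purely illustrative, not taken from any textbook): the substitution of the product ansatz into the Laplacian is an identity, while the division by $XYZ$ is only a pointwise operation, valid where $XYZ\neq 0$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = sp.Function('X')(x)
Y = sp.Function('Y')(y)
Z = sp.Function('Z')(z)

phi = X * Y * Z
laplacian = phi.diff(x, 2) + phi.diff(y, 2) + phi.diff(z, 2)

# The substitution itself is unproblematic:
# Laplacian(XYZ) = YZ X'' + ZX Y'' + XY Z'' holds identically.
separated_lhs = Y*Z*X.diff(x, 2) + Z*X*Y.diff(y, 2) + X*Y*Z.diff(z, 2)
print(sp.simplify(laplacian - separated_lhs))   # 0

# The division is the delicate step: the identity below only holds
# pointwise at points where X*Y*Z != 0.
print(sp.expand(laplacian / phi))               # X''/X + Y''/Y + Z''/Z
```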
Another example is the Sturm–Liouville equation. In some textbooks it has the form $$[p(x)y']'+[q(x)+\lambda r(x)]y=0.$$ Other textbooks, however, divide both sides by $r(x)$ and rearrange the terms to obtain another form (without any justification for the division): $$\frac{1}{r(x)}\left[(p(x)y')'+q(x)y\right]+\lambda y=0.$$ But we all know that a Sturm–Liouville problem can be singular, which means $r(x)$ may vanish at an endpoint; in that case the two forms above are clearly not algebraically equivalent.
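For instance, a standard singular example (not from the question, just for illustration) is Bessel's equation of order $\nu$ in Sturm–Liouville form, $$(xy')'+\Big(\lambda x-\frac{\nu^2}{x}\Big)y=0,\qquad 0<x<1,$$ with $p(x)=r(x)=x$ and $q(x)=-\nu^2/x$; the weight $r$ vanishes at the endpoint $x=0$, so dividing by $r(x)$ there is exactly the kind of step being questioned.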
I totally agree with you. But I think the reason the textbooks do not address the case of a zero denominator is that the computation is meant to give you a kind of "conjecture" for the exact formula of the solution. In particular, such methods do make sense wherever the denominator is nonzero, in the following sense: suppose you are looking for differentiable solutions. Then, by the continuity of the solution (which follows from your assumption), if the denominator is nonzero at a point, it is also nonzero in a neighborhood of that point, so the differential calculations involving the denominator are valid there. Therefore the method makes sense at all points where the denominator does not vanish. Of course, I don't think these methods can fully account for the distribution of the zeros.
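To state the continuity argument precisely (using the $g$ from the question as the generic denominator): $$g \text{ continuous and } g(x_0)\neq 0 \;\Longrightarrow\; \exists\,\delta>0 \text{ such that } g(x)\neq 0 \text{ whenever } |x-x_0|<\delta,$$ so on that whole neighborhood the quotient is well defined and the usual differential manipulations go through.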
For completeness, consider for example the ODE $\dot{x}=x^2$, $x(0)=x_0$. Separating variables, you obtain the solution $x(t)=\frac{1}{\frac{1}{x_0}-t}$, but this method is only available where $x\neq 0$. In fact, by the Picard–Lindelöf theorem, whether $x_0$ is zero or not, a unique solution $x$ exists and is defined on a maximal open interval $(a,b)$. For $x_0\neq 0$ we deduce that $a>-\infty$ or $b<\infty$ (the solution blows up at $t=1/x_0$), while for $x_0=0$ we obtain only the trivial solution $x\equiv 0$. From this we draw the following conclusion: normally, one first shows that a differentiable solution exists, hopefully uniquely; only then does one reconstruct the solution by the method given in the OP. This is also the general approach in mathematical research: first existence and uniqueness, then the exact expression of the solution.
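As a quick sanity check of the formula above (again only a SymPy sketch, for the case $x_0\neq 0$):

```python
import sympy as sp

t = sp.symbols('t')
x0 = sp.symbols('x0', nonzero=True)

# Candidate produced by separation of variables (only meaningful for x0 != 0)
x = 1 / (1/x0 - t)

print(sp.simplify(x.diff(t) - x**2))   # 0  -> satisfies x' = x**2
print(x.subs(t, 0))                    # x0 -> satisfies the initial condition

# The expression blows up at t = 1/x0, matching the finite maximal
# interval of existence; for x0 = 0 the formula is meaningless and the
# unique solution is x(t) = 0.
```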
Looking back at the ODE $\dot{x}=x^2$, one could also try to define the differential equation equivalently as $\frac{\dot{x}}{x^2}=1$. But this expression does not make sense at $x=0$, and this too needs to be clarified. In some cases one can work in Lebesgue spaces, which tolerate singular points up to a set of measure zero, but in most cases mathematicians prefer a product expression rather than a division. This is in fact the work you should actually do at the beginning: decide what kind of solutions you are looking for, what kind of spaces you are working in, and so on. With all of these definitions in place, you can pose a well-defined question and look for a well-defined solution.
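A minimal illustration of why the two formulations differ: $$x(t)\equiv 0:\qquad \dot{x}(t)=0=x(t)^2 \ \text{ for all } t,\qquad\text{but}\qquad \frac{\dot{x}(t)}{x(t)^2}\ \text{ is undefined},$$ so the product form and the quotient form agree only on solutions that never vanish.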
I hope this helps you understand the problem better.