Linear differential equations (LDEs) are usually quite easy to solve, thanks to all the results we have from linear algebra.
Yet there is something I don't understand. Say we have the following LDE:
$$a_n(t)y^{(n)} +...+ a_0(t)y = b(t)$$
where the $a_i$ are continuous functions. My book then always puts this equation in the following "resolved form":
$$y^{(n)} + \frac{a_{n-1}(t)}{a_n(t)} y^{(n-1)} +...+ \frac{a_0(t)}{a_n(t)} y = \frac{b(t)}{a_n(t)}$$
I am wondering why it is useful to put the equation in this form.
I mean, are there theorems in linear algebra that don't apply, or techniques that don't work, if we leave the equation in the form $$a_n(t)y^{(n)} +...+ a_0(t)y = b(t)~?$$
Thank you!
The slight difference is that the first form is, in a sense, an implicit equation, while the second is, also in a certain sense, explicit.
The second form also emphasizes that the domain of definition of the LDE excludes the points where $a_n(t)=0$, as at these locations the order of the ODE collapses. This is what is called a singularity. In general ODE theory the order is equal to the dimension of the solution space, and it has to be constant on the domain of the ODE; hence the zeros of $a_n$ must be excluded from the domain in order to apply the general theory, Picard–Lindelöf etc.
The conditions of regularity for these singularities are also easier to formulate in the explicit form.
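To illustrate the collapse of the order at a singularity, here is a standard textbook example (not taken from the question above): take $n=1$ with $a_1(t)=t$, which vanishes at $t=0$.

```latex
t\,y'(t) - y(t) = 0
\quad\Longrightarrow\quad
y'(t) = \frac{y(t)}{t} \quad (t \neq 0),
\qquad
y(t) = C\,t .
```

On each of the intervals $t>0$ and $t<0$ the resolved form satisfies the hypotheses of Picard–Lindelöf and the solution space is one-dimensional, as expected for a first-order LDE. But every solution passes through the origin, so at the singularity $t=0$ the initial value problem $y(0)=y_0$ has no solution for $y_0 \neq 0$ and infinitely many for $y_0 = 0$.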