So in this episode we described a way to invert linear operators with constant coefficients, that is, operators of the form $O[f] = c_0 f + c_1 f' + c_2 f'' + \cdots = \sum_{n=0}^{\infty} c_n f^{(n)}$.
Now I would like to ask how to invert operators whose coefficients are functions, that is, operators of the form $O[f] = c_0(x) f + c_1(x) f' + c_2(x) f'' + \cdots$. This seems considerably more difficult, for the simple reason that $(c_1(x) f)' \ne c_1(x) f'$ but rather $(c_1(x) f)' = c_1'(x) f + c_1(x) f'$ (and this only gets messier as one considers higher derivatives).
This Answer is an Active Work In Progress:
This is actually more tractable than many would believe: in principle this problem reduces to merely inverting a matrix... a countably infinite-dimensional square matrix, that is. Moreover, this problem can be seen as the 1-dimensional part of a higher-dimensional theory of inverting general nonlinear operators (I won't include that part in this first writing).
For the matrix inversion to be algorithmically easy, the coefficients must all be polynomials, which I believe is the "special context" @Giuseppe Negro hinted at. Still, linear operators with arbitrary polynomial coefficients on each derivative are a good start on the general problem.
So begin by considering our operator $O[f] = c_0(x) f + c_1(x) f' + c_2(x) f'' + \cdots = \sum_{n=0}^{\infty} c_n(x) f^{(n)}$.
We can translate this operator to a vector as $V(O) = \begin{bmatrix} c_0(x) & c_1(x) & c_2(x) & \dots \end{bmatrix}$.
Now we consider the infinite vector of operators
$$\mathcal{O} = \begin{bmatrix} O \\ O' \\ O'' \\ \vdots \end{bmatrix} $$
So that we can consider the matrix
$$ V(\mathcal{O}) = \begin{bmatrix} V(O) \\ V(O') \\ V(O'') \\ \vdots \end{bmatrix} = \begin{bmatrix} c_0(x) & c_1(x) & c_2(x) & \dots \\ c_0'(x) & c_0(x) + c_1'(x) & c_1(x) + c_2'(x) & \dots \\ c_0''(x) & 2c_0'(x) + c_1''(x) & c_0(x)+2c_1'(x)+c_2''(x) & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
For convenience we will write $V(\mathcal{O})_{i,j}$ for the entries, indexed from $0$, so that $V(\mathcal{O})_{1,0}$ is just $c_0'(x)$.
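The pattern in that matrix is mechanical: differentiating $O$ turns the coefficient of $f^{(j)}$ from $c_j$ into $c_j' + c_{j-1}$ (the $c_{j-1}$ term shifts up one derivative slot), so each row follows from the previous one. A minimal sympy sketch of this bookkeeping (the helper names are mine, not standard) builds a finite truncation of $V(\mathcal{O})$:

```python
import sympy as sp

x = sp.symbols('x')

def next_row(row):
    """Row of V(O') from a row of V(O): entry j becomes c_j' + c_{j-1}."""
    return [sp.diff(row[j], x) + (row[j - 1] if j > 0 else 0)
            for j in range(len(row))]

def V(coeffs, rows, cols):
    """Top-left rows x cols truncation of V(O) for O[f] = sum_n c_n f^(n)."""
    row = list(coeffs) + [sp.Integer(0)] * (cols - len(coeffs))
    out = []
    for _ in range(rows):
        out.append([sp.expand(e) for e in row])
        row = next_row(row)
    return sp.Matrix(out)
```

For example, with generic coefficients `c0 = sp.Function('c0')(x)` and so on, `V([c0, c1, c2], 3, 3)` reproduces the $3 \times 3$ corner of the matrix above.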
So if we can just compute $B = V(\mathcal{O})^{-1}$, then we read the topmost row off of $B$, and that gives us the coefficients of the inverse operator.
This leads to a family of linear equations in the entries of $B$, of the form (to save space we write $c_0$ for $c_0(x)$, and likewise for all $c_n(x)$):
$$ B_{0,0} c_0 + B_{0,1} c_0' + B_{0,2} c_0'' + \dots = 1 $$
$$ B_{0,0} c_1 + B_{0,1} \left(c_0 + c_1' \right) + B_{0,2} \left(2 c_0' + c_1'' \right) + \dots = 0 $$
Now, in the event that all the coefficients $c_i$ are polynomials, each of these equations involves only finitely many terms, since every column of $V(\mathcal{O})$ eventually vanishes (repeated differentiation kills a polynomial), so this system becomes quite tractable.
Now what happens if $V(\mathcal{O})$ is not invertible? Simple: we prepend a row (or several) corresponding to the antiderivatives $\int O$, $\iint O, \dots$, and give our inverse an extra column (or several) on its left, so that these $(\infty, k + \infty)$- and $(k + \infty, \infty)$-dimensional matrices multiply out to form an $(\infty, \infty)$ identity matrix. Those extra rows and columns correspond to "integral" terms as opposed to the usual derivative terms.
First Example (still just constant coefficients):
If we let $O[f] = f - f'$ then:
$$\mathcal{O} = \begin{bmatrix} f - f' \\ f' - f'' \\ f'' - f''' \\ \vdots \end{bmatrix} $$
$$ V(\mathcal{O}) = \begin{bmatrix} 1 & -1 & 0 & \dots \\ 0 & 1 & -1 & \dots \\ 0 & 0 & 1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
It's easy to see then that
$$ \begin{bmatrix} 1 & 1 & 1 & \dots \\ 0 & 1 & 1 & \dots \\ 0 & 0 & 1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} 1 & -1 & 0 & \dots \\ 0 & 1 & -1 & \dots \\ 0 & 0 & 1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = I_{\infty} $$
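For skeptics of manipulating infinite matrices, this particular cancellation survives truncation: a quick sympy check (the truncation size `N` is arbitrary) multiplies finite top-left blocks of the two matrices above:

```python
import sympy as sp

N = 6

# Candidate inverse: all-ones upper-triangular matrix
U = sp.Matrix(N, N, lambda i, j: 1 if j >= i else 0)

# Truncation of V(O) for O[f] = f - f': 1's on the diagonal, -1's just above
A = sp.Matrix(N, N, lambda i, j: 1 if i == j else (-1 if j == i + 1 else 0))

# U * A is exactly the N x N identity (the telescoping kills every off-diagonal)
```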
So then we read the top row $1, 1, 1 ... $ to get that
$$ O^{-1}[f] = f + f' + f'' + f''' + ... $$
which passes our reality check. Next comes an example with simple polynomial coefficients.
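As a further sanity check (restricting to polynomial $f$, so the series of derivatives terminates), one can verify in sympy that applying $O$ and then this candidate $O^{-1}$ returns the original function:

```python
import sympy as sp

x = sp.symbols('x')

def O(f):
    """O[f] = f - f'."""
    return sp.expand(f - sp.diff(f, x))

def O_inv(f):
    """O^{-1}[f] = f + f' + f'' + ...  (a finite sum when f is a polynomial)."""
    total, term = sp.Integer(0), sp.expand(f)
    while term != 0:
        total += term
        term = sp.diff(term, x)
    return sp.expand(total)
```

For instance `O_inv(O(x**3))` gives back `x**3`, and the composition works in the other order as well.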
Simple Polynomial Coefficients:
Let's take the operator $O[f] = f + xf'$. From here we see that
$$ V(\mathcal{O}) = \begin{bmatrix} 1 & x & 0 & \dots \\ 0 & 2 & x & \dots \\ 0 & 0 & 3 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
Now we can solve a bunch of linear equations, which arise from multiplying the top row of our inverse matrix $(a_0, a_1, \dots)$ with $V(\mathcal{O})$ and getting the top row of an identity matrix $(1, 0, 0, 0, \dots)$. These equations are as follows: $$a_0 = 1 \\ xa_0 + 2a_1 = 0 \rightarrow a_1 = - \frac{x}{2} \\ xa_1 + 3a_2 = 0 \rightarrow a_2 = \frac{x^2}{3!} \\ \vdots \\ a_n = (-1)^n \frac{x^n}{(n+1)!} $$
So then we can see that:
$$ \begin{bmatrix} 1 & -\frac{x}{2} & \frac{x^2}{3!} & -\frac{x^3}{4!} & ... \\ ? & ? & ? & ? & ... \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} 1 & x & 0 & \dots \\ 0 & 2 & x & \dots \\ 0 & 0 & 3 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & ... \\ ? & ? & ? & ? & ... \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
Now at this point we don't even have to *finish* finding that inverse matrix, since *all* we wanted was the top row, and we have it. So we read off that top row as follows:
$$ O^{-1} [f] = f - \frac{x}{2}f' + \frac{x^2}{3!}f'' - \frac{x^3}{4!}f''' + \cdots $$ is the inverse operator we are looking for.
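The forward substitution above is easy to automate. A small sympy sketch (truncation size `N` chosen arbitrarily) recomputes the top row column by column and lets us confirm the pattern $a_n = (-1)^n x^n/(n+1)!$ against the displayed terms:

```python
import sympy as sp

x = sp.symbols('x')
N = 8

# Truncation of V(O) for O[f] = f + x f': diagonal entries j+1, superdiagonal x
V = sp.Matrix(N, N, lambda i, j: (i + 1) if i == j else (x if j == i + 1 else 0))

# Forward substitution for the top row (a_0, a_1, ...) of the inverse:
# for each column j, sum_i a_i V[i, j] must equal delta_{0, j}
a = []
for j in range(N):
    rhs = 1 if j == 0 else 0
    partial = sum(a[i] * V[i, j] for i in range(j))
    a.append(sp.simplify((rhs - partial) / V[j, j]))
```

The entries of `a` come out as `1, -x/2, x**2/6, -x**3/24, ...`, matching $(-1)^n x^n/(n+1)!$, and the truncated operator built from them inverts $O$ exactly on low-degree polynomials.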
An Example With Integration:
Suppose $O[f] = f'$; then
$$ V(\mathcal{O}) = \begin{bmatrix} 0 & 1 & 0 & \dots \\ 0 & 0 & 1 & \dots \\ 0 & 0 & 0 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
Inverting this is hopeless, since the leftmost column is all $0$'s (nothing you dot that column with can produce a nonzero entry). So we go ahead and add an integral row:
$$ V(\mathcal{O})_{\text{with integral addition}} = \begin{bmatrix} 1 & 0 & 0 & \dots \\ \hline 0 & 1 & 0 & \dots \\ 0 & 0 & 1 & \dots \\ 0 & 0 & 0 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
And we find its "inverse" (really a generalized inverse, since these aren't square matrices anymore):
$$ \left[\begin{array}{r|rrr} 1 & 0 & 0 & ... \\ 0 & 1 & 0 & ... \\ 0 & 0 & 1 & ... \\ \vdots & \vdots & \vdots & \ddots \end{array}\right] $$
So that:
$$ \left[\begin{array}{r|rrr} 1 & 0 & 0 & ... \\ 0 & 1 & 0 & ... \\ 0 & 0 & 1 & ... \\ \vdots & \vdots & \vdots & \ddots \end{array}\right] \begin{bmatrix} 1 & 0 & 0 & \dots \\ \hline 0 & 1 & 0 & \dots \\ 0 & 0 & 1 & \dots \\ 0 & 0 & 0 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = \left[\begin{array}{r|rrr} 1 & 0 & 0 & ... \\ \hline 0 & 1 & 0 & ... \\ 0 & 0 & 1 & ... \\ \vdots & \vdots & \vdots & \ddots \end{array}\right] $$
And the right-hand side is now the identity matrix, but with an extra row and column (for the integral term).
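This bookkeeping also survives truncation. A sympy sketch (size `N` is arbitrary; the extra first slot plays the role of the integral term) builds the three matrices above and checks the product:

```python
import sympy as sp

N = 5

# Truncation of V(O) for O[f] = f': a shifted identity whose leftmost
# column is all zeros, hence singular
V = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)

# Prepend the integral row [1, 0, 0, ...] (integrating f' gives back f),
# giving an (N+1) x N matrix
V_aug = sp.Matrix(N + 1, N, lambda i, j: 1 if i == j else 0)

# Generalized inverse with an extra (integral) column on the left; its top
# row [1 | 0, 0, ...] says O^{-1}[f] is the antiderivative of f
B = sp.Matrix(N, N + 1, lambda i, j: 1 if i == j else 0)

# B * V_aug multiplies out to the N x N identity
```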