Alternative proofs for the higher product rule (differentiation)


Let $f,g$ be smooth functions.

I'm looking for proofs of the formula $$ \left(\frac d {dx}\right)^n \bigl(f(x)\cdot g(x)\bigr) = \sum_{i=0}^n \binom{n}{ i} f^{(i)}(x)\cdot g^{(n-i)}(x), $$ where $f^{(i)}$ denotes the $i$-th derivative of $f$.
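Before turning to proofs, here is a quick numerical sanity check of the formula (my own example, not part of any of the proofs below): take $f(x)=e^x$ and $g(x)=e^{2x}$, so that $f^{(i)}(x)=e^x$, $g^{(j)}(x)=2^j e^{2x}$, and the product $e^{3x}$ has $n$-th derivative $3^n e^{3x}$.

```python
from math import comb, exp, isclose

def leibniz_nth_derivative(n, x):
    """n-th derivative of e^x * e^(2x) computed via the Leibniz formula.
    Here f^(i)(x) = e^x and g^(j)(x) = 2^j * e^(2x) for all i, j."""
    return sum(comb(n, i) * exp(x) * 2 ** (n - i) * exp(2 * x)
               for i in range(n + 1))

def direct_nth_derivative(n, x):
    """The product is e^(3x), whose n-th derivative is 3^n * e^(3x)."""
    return 3 ** n * exp(3 * x)

# Both computations agree for several values of n.
for n in range(8):
    assert isclose(leibniz_nth_derivative(n, 0.7), direct_nth_derivative(n, 0.7))
```

Of course this only checks one pair of functions; it is a plausibility check, not a proof.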

The classical proof is by induction and runs analogously to the proof of the binomial theorem. It can be found here: Proof 1


The second proof, which is also straightforward, constructs a bijection between $$ \binom{\{1,..,n\}}{k} = \{M\subseteq \{1,..,n\}\mid |M| =k\} $$ and the ways a single term $f^{(k)}(x)\cdot g^{(n-k)}(x)$ can arise.

The argument is roughly this:
We can view each set $M\in \binom{\{1,..,n\}}{k} $ as a list of the steps at which we differentiated $f$ instead of $g$:

Say e.g. $M =\{1,2,4\}\in \binom{\{1,..,4\}}{3}$, so in particular $n=4, k=3$.
Then that 'traces' the path

$\begin{align*} &f^{(0)}(x)\cdot g^{(0)}(x)\\ \to &f^{(1)}(x)\cdot g^{(0)}(x)\\ \to &f^{(2)}(x)\cdot g^{(0)}(x)\\ \to &f^{(2)}(x)\cdot g^{(1)}(x)\\ \to &f^{(3)}(x)\cdot g^{(1)}(x) \end{align*}$

Each such path tracks one summand in the expansion of $\left(\frac d {dx}\right)^n \bigl(f(x)\cdot g(x)\bigr)$, provided we don't simplify the terms (i.e. collect terms of the same type).
It then only remains to show that this correspondence between subsets and paths is injective and surjective.
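The counting step of this proof can be mimicked in a few lines of Python (an illustration, not a replacement for the bijection argument): each path is a word in $\{f,g\}^n$ recording which factor is differentiated at each step, and a path with $k$ letters $f$ produces the term $f^{(k)}g^{(n-k)}$.

```python
from collections import Counter
from itertools import product
from math import comb

def path_term_counts(n):
    """Enumerate all 2^n paths as words over {'f', 'g'} and count,
    for each k, how many paths produce the term f^(k) * g^(n-k)."""
    return Counter(word.count('f') for word in product('fg', repeat=n))

n = 6
counts = path_term_counts(n)
# The number of paths yielding f^(k) * g^(n-k) is exactly C(n, k).
assert all(counts[k] == comb(n, k) for k in range(n + 1))
```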


Proof 3:

We define $M:=\{f:\mathbb R \to \mathbb R \mid f \text{ smooth}\}$.

Using this set, we define the polynomial ring $\mathbb R[M]$. In other words, we now regard the functions as nothing more than formal objects, equipped with a commutative and associative addition.

The idea is now this: instead of looking at the product $f\cdot g$, we look at formal pairs $(f,g)$, which we interpret as the product of their entries. On these pairs we then define a differentiation, set up a recursion, and solve it.

First, we formalize the differentiation-operator to a function $\partial:\mathbb R[M]^2 \to \mathbb R[M]^2$ (we'll only formalize the parts of it that we need).

For this, we define: $$ \partial_l:\mathbb R[M]^2\to \mathbb R[M]^2,\qquad (f^{(i)},g^{(j)})\mapsto (f^{(i+1)},g^{(j)}) \\ \partial_r: \mathbb R[M]^2\to \mathbb R[M]^2 ,\qquad (f^{(i)},g^{(j)})\mapsto (f^{(i)},g^{(j+1)}) $$

That is, $\partial_l$ differentiates the left entry of the tuple, and $\partial_r$ the right entry.

On the pairs in $\mathbb R[M]^2$ we likewise define a commutative and associative addition, in which a sum can only be simplified if both summands are identical: $$ (f^{(i)},g^{(j)}) +(f^{(i)},g^{(j)}) = 2 (f^{(i)},g^{(j)}), $$ while e.g. $(f^{(i)},g^{(j+1)}) + (f^{(i)},g^{(j)})$ cannot be simplified any further.

Now $\partial$ is simply: $$ \partial = \partial_l + \partial_r $$

One can show that $\partial$ is linear, and that $\partial_l,\partial_r,\partial$ all commute with one another.

If we define $\partial^n$ as applying $\partial$ $n$ times, the original claim now reads: $$ \partial^n (f,g) = \sum_{i=0}^n \binom{n}{ i} \partial_l^i \partial_r^{n-i} (f,g) $$

We can use the definition of $\partial$ to set up the recursion (using linearity and the fact that the operators commute): $$ \begin{align*} \partial^n (f,g) &= \partial^{n-1} (\partial_l(f,g)+\partial_r(f,g)) \\ &= \partial_l\partial^{n-1}(f,g) + \partial_r\partial^{n-1}(f,g) \\ &=(\partial_l + \partial_r)\partial^{n-1}(f,g) \end{align*} $$

This is equivalent to $$ \partial^n (f,g) -(\partial_l + \partial_r)\partial^{n-1}(f,g) = 0 $$

Setting $a_n := \partial^n(f,g)$, this leads to the closed form of the generating function $\sum_{n\ge 0} a_n x^n$: $$ \sum_{n\ge 0} a_n x^n = \frac{(f,g)}{1-(\partial_l + \partial_r)x} $$

From this it is easy to read off $a_n = (\partial_l + \partial_r)^n (f,g)$; since $\partial_l$ and $\partial_r$ commute, the binomial theorem gives $(\partial_l + \partial_r)^n = \sum_{i=0}^n \binom{n}{i}\partial_l^i\partial_r^{n-i}$, which is what we wanted to show, albeit in a totally different notation.
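The formal setup of Proof 3 can be modelled directly on a computer (a sketch under my own encoding, not part of the proof): a linear combination of pairs $(f^{(i)},g^{(j)})$ becomes a `Counter` mapping $(i,j)$ to its integer coefficient, and $\partial_l,\partial_r,\partial$ act on it as defined above.

```python
from collections import Counter
from math import comb

# A formal linear combination of pairs (f^(i), g^(j)) is modelled as a
# Counter mapping (i, j) -> integer coefficient.

def d_l(v):
    """Differentiate the left entry: (i, j) -> (i+1, j)."""
    return Counter({(i + 1, j): c for (i, j), c in v.items()})

def d_r(v):
    """Differentiate the right entry: (i, j) -> (i, j+1)."""
    return Counter({(i, j + 1): c for (i, j), c in v.items()})

def d(v):
    """The full operator: d = d_l + d_r (Counter '+' adds coefficients)."""
    return d_l(v) + d_r(v)

def iterate(op, v, n):
    """Apply the operator op to v a total of n times."""
    for _ in range(n):
        v = op(v)
    return v

n = 7
start = Counter({(0, 0): 1})          # the pair (f, g)
lhs = iterate(d, start, n)            # d^n (f, g)

# Right-hand side: sum of C(n, i) * d_l^i d_r^(n-i) (f, g).
rhs = Counter()
for i in range(n + 1):
    term = iterate(d_r, iterate(d_l, start, i), n - i)
    for key, c in term.items():
        rhs[key] += comb(n, i) * c

assert lhs == rhs
```

This checks the operator identity $\partial^n(f,g) = \sum_{i}\binom{n}{i}\partial_l^i\partial_r^{n-i}(f,g)$ for one value of $n$; the coefficient of $(i, n-i)$ in `lhs` is precisely $\binom{n}{i}$.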

(This proof is still far from perfect, and any improvements/suggestions are welcome)


What are other proofs to show the higher product rule?