Duhamel's principle for solving constant-coefficient linear ODE


From the Wikipedia article:

Suppose we have a constant-coefficient, $m^{\text{th}}$-order inhomogeneous ordinary differential equation

$$P(\partial_t)u(t) = F(t) ,$$

$$ \partial_t^j u(0) = 0, \; 0 \leq j \leq m-1 $$

where

$$ P(\partial_t) := a_m \partial_t^m + \cdots + a_1 \partial_t + a_0,\; a_m \neq 0. $$

We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.

First let $G$ solve

$$P(\partial_t)G = 0, \; \partial^j_t G(0) = 0, \quad 0\leq j \leq m-2, \; \partial_t^{m-1} G(0) = 1/a_m. $$

Define $H = G \chi_{[0,\infty)} $, with $\chi_{[0,\infty)}$ being the characteristic function of the interval $[0,\infty)$. Then we have

$$P(\partial_t) H = \delta$$

in the sense of distributions.

I don't understand this last line. Where does the $\delta$ function come from? $P(\partial_t)G$ is $0$ everywhere, so in particular, shouldn't $P(\partial_t) H = P(\partial_t)\big(G \chi_{[0,\infty)}\big)$ also be $0$ everywhere? Why does all of its mass get concentrated at $0$?

On BEST ANSWER

Let $P(\partial_t)$ be our differential operator, and let $H$ be a solution, in the sense of distributions, of $P(\partial_t)H = \delta$. Consider now a "good" function $f(t)$ and define $F = H\ast f$, the convolution in the sense of distributions.
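As for the point of the original question, a sketch of where the $\delta$ comes from: differentiating $H = G\chi_{[0,\infty)}$ in the sense of distributions gives $\partial_t\big(G\chi_{[0,\infty)}\big) = (\partial_t G)\chi_{[0,\infty)} + G(0)\delta$. Since $\partial_t^j G(0) = 0$ for $0 \leq j \leq m-2$, the boundary terms vanish up to order $m-1$:

$$ \partial_t^j H = (\partial_t^j G)\,\chi_{[0,\infty)}, \quad 0 \leq j \leq m-1, $$

while the $m^{\text{th}}$ derivative picks up the jump of $\partial_t^{m-1}G$ at $0$:

$$ \partial_t^m H = (\partial_t^m G)\,\chi_{[0,\infty)} + \partial_t^{m-1}G(0)\,\delta = (\partial_t^m G)\,\chi_{[0,\infty)} + \frac{1}{a_m}\,\delta. $$

Hence

$$ P(\partial_t)H = \big(P(\partial_t)G\big)\,\chi_{[0,\infty)} + a_m \cdot \frac{1}{a_m}\,\delta = 0 + \delta = \delta. $$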

The differential equation satisfied by $F$ is easy to guess: by the properties of distributional convolution, $$P(\partial_t) F = P(\partial_t) (H\ast f )= \big(P(\partial_t) H\big)\ast f = \delta\ast f = f,$$ which lets you solve the original differential equation with an arbitrary "good" function $f$ on the right-hand side (we omit the exact nature of those "good" functions).
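As a concrete sanity check (an assumed example, not part of the original answer): for the first-order case $P(\partial_t) = \partial_t + 1$ (so $m = 1$, $a_1 = a_0 = 1$), $G$ solves $G' + G = 0$ with $G(0) = 1/a_1 = 1$, giving $G(t) = e^{-t}$, and the convolution solution of $u' + u = f$, $u(0) = 0$, is $u(t) = \int_0^t e^{-(t-s)} f(s)\,ds$. A minimal numerical sketch comparing this against the known exact solution for $f(t) = \sin t$:

```python
import numpy as np

# Sketch (assumed example): Duhamel's formula for P(d/dt) = d/dt + 1,
# i.e. m = 1, a_1 = a_0 = 1.  Here G(t) = exp(-t), and the solution of
# u' + u = f, u(0) = 0, is u(t) = (H * f)(t) = int_0^t exp(-(t-s)) f(s) ds.

def duhamel_solution(f, t, n=4001):
    """Approximate (H * f)(t) with the composite trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    vals = np.exp(-(t - s)) * f(s)
    ds = s[1] - s[0]
    return ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# Exact solution of u' + u = sin t, u(0) = 0: (sin t - cos t + exp(-t)) / 2.
for t in (0.5, 1.0, 2.0):
    u_exact = (np.sin(t) - np.cos(t) + np.exp(-t)) / 2.0
    assert abs(duhamel_solution(np.sin, t) - u_exact) < 1e-6
```

The quadrature error of the trapezoidal rule here is well below the $10^{-6}$ tolerance, so the assertions pass, confirming that convolving $f$ against $H$ reproduces the zero-initial-data solution.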