Just use the expected value for the random coefficient in a differential equation


We often encounter differential equations in which some coefficients are random variables. One way to handle such problems is to replace the random coefficient with its expected value (EV), solve the resulting deterministic problem, and hope that this solution tells us enough about the 'mean solution' over all realizations of the random coefficient. Of course this does not work for every kind of differential equation and random variable.

Is there an easy counterexample to this procedure?
That is, we need

  • a differential equation, e.g. $\frac{\partial}{\partial t} \left(\lambda(t)x(t)\right)=0$,
  • with a random coefficient $\lambda(t)$ (which may depend on the integration variable) with some distribution,
  • the expected value (EV) of the coefficient $E[\lambda](t)$,
  • the solution $x_E(t)$ with the EV plugged in i.e. $\frac{\partial}{\partial t} \left(\left(E[\lambda](t)\right)x_E(t)\right)=0$,
  • the solution of the stochastic differential equation $x(t,\lambda)$,
  • the expected value of the solution $E[x](t)$ and finally
  • some differences $E[x](t)\neq x_E(t)$.

And it would be nice if it only uses math that is common to engineers.

Alternatively, a Matlab/Octave code snippet of a simulation could be convincing.


In the example above with $\frac{\partial}{\partial t} \left(\lambda(t)x(t)\right)=0$, initial condition $x(0)=1$ and $\lambda(t)=t+\mu(1-t)=\mu+t(1-\mu)$ with a uniformly distributed random variable $\mu\sim U[0,2]$ (uniform between $0$ and $2$), we get $E[\lambda](t)=\int_0^2\frac{1}{2}(t+\mu(1-t))\operatorname{d}\mu=1$. If we plug in the EV, we get $\frac{\partial}{\partial t} x_E(t)=0$ with the constant solution $x_E(t)=a$; the initial condition gives $a=1$, so $x_E(t)=1$ for all $t$.
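The claim $E[\lambda](t)=1$ is easy to check by Monte Carlo before doing any analysis. Here is a minimal sketch in Python (the question suggests Matlab/Octave; the translation is direct); the seed and sample size are arbitrary choices.

```python
import random
import statistics

random.seed(0)
N = 200_000
mus = [random.uniform(0.0, 2.0) for _ in range(N)]  # samples of mu ~ U[0, 2]

for t in (0.25, 0.5, 1.5):
    # lambda(t) = t + mu*(1 - t), averaged over the samples of mu
    mean_lambda = statistics.fmean(t + m * (1.0 - t) for m in mus)
    print(t, mean_lambda)  # close to 1 for every t, matching E[lambda](t) = 1
```

So the EV-substituted problem really is $\frac{\partial}{\partial t} x_E(t)=0$, independent of $t$.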

On the other hand, the product rule gives $\frac{\partial}{\partial t} \left(\lambda(t)x(t)\right)=\left(\frac{\partial}{\partial t} \lambda(t)\right)x(t)+\lambda(t)\left(\frac{\partial}{\partial t}x(t) \right)=0$, which we can reformulate as the linear ordinary differential equation (ODE) $\frac{\partial}{\partial t}x(t)=\frac{-(1-\mu)}{\mu+t(1-\mu)}x(t)$.

We define $f(t)=\frac{-(1-\mu)}{\mu+t(1-\mu)}$ and its antiderivative $$F(t)=\int_0^t f(\tau)\operatorname{d} \tau=\int_0^t\frac{-(1-\mu)}{\mu+\tau(1-\mu)}\operatorname{d} \tau=-\int_{\mu}^{\mu+t(1-\mu)}\frac{1}{s}\operatorname{d}s=\ln\lvert\mu\rvert -\ln \lvert\mu+t(1-\mu)\rvert, $$ and obtain the solution of this ODE as $$x(t,\lambda)=b\exp\left(F(t)\right)=b\frac{\lvert\mu\rvert}{\lvert\mu+t(1-\mu)\rvert}.$$ Using the initial condition, we get $b=1$ for all $\lambda$.

The expectation of this solution is $$E[x](t)=\int_0^2 \frac{1}{2}x(t,\lambda)\operatorname{d} \mu= \frac{1}{2}\int_0^2 \frac{\lvert\mu\rvert}{\lvert\mu+t(1-\mu)\rvert}\operatorname{d} \mu,$$ where we can drop both absolute values: in the numerator because $\mu\geq 0$, and in the denominator because $\mu+t(1-\mu)=t+\mu(1-t)\geq 0$ for $\mu\in[0,2]$ once we restrict $t\in [0,1)\cup(1,2]$. Thus, $$E[x](t)= \frac{1}{2}\int_0^2 \frac{\mu}{t+\mu(1-t)}\operatorname{d} \mu = \frac{1}{2(1-t)}\int_0^2 \frac{\mu}{\mu+\frac{t}{1-t}}\operatorname{d} \mu= \frac{1}{2(1-t)}\int_0^2 1-\frac{\frac{t}{1-t}}{\mu+\frac{t}{1-t}}\operatorname{d} \mu.$$ We get the expectation as $$E[x](t)= \frac{1}{2(1-t)}\left[\mu - \frac{t}{1-t}\ln\left\lvert\mu+\frac{t}{1-t}\right\rvert\right]_0^2= \frac{1}{2(1-t)}\left(2 - \frac{t}{1-t}\left(\ln\left\lvert2+\frac{t}{1-t}\right\rvert-\ln\left\lvert\frac{t}{1-t}\right\rvert\right)\right) = \frac{1}{(1-t)^2}\left((1-t) - \frac{1}{2}t\ln\left\lvert2\frac{1-t}{t}+1\right\rvert\right).$$ Or, with the help of WolframAlpha, $$E[x](t)=-\frac{t+t\tanh^{-1}(1-t)-1}{(t-1)^2} \text{ for } t\in (0,1)\cup (1,2). $$

This is clearly not the constant $x_E(t)=1$; for example, $E[x](1/2)=2-\ln 3\approx 0.90$. At this point it is hard to say that this example is easy to understand.
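The mismatch can also be seen without trusting the closed form. Below is a sketch of the simulation the question asks for, written in Python rather than Matlab/Octave: it samples $\mu\sim U[0,2]$, evaluates the exact per-realization solution $x(t,\mu)=\mu/(t+\mu(1-t))$, and compares the sample mean with both $x_E(t)=1$ and the closed-form $E[x](t)$ derived above. The seed and sample size are arbitrary choices.

```python
import math
import random
import statistics

def x_closed(t):
    # closed-form E[x](t) = ((1-t) - (t/2) ln((2-t)/t)) / (1-t)^2, valid on (0,1) u (1,2)
    return ((1.0 - t) - 0.5 * t * math.log((2.0 - t) / t)) / (1.0 - t) ** 2

random.seed(1)
N = 200_000
mus = [random.uniform(0.0, 2.0) for _ in range(N)]  # samples of mu ~ U[0, 2]

for t in (0.25, 0.5, 0.75, 1.5):
    # exact solution x(t, mu) = mu / (t + mu*(1 - t)) for each realization of mu
    mc_mean = statistics.fmean(m / (t + m * (1.0 - t)) for m in mus)
    print(t, mc_mean, x_closed(t))
```

The printed Monte Carlo means agree with $E[x](t)$ and visibly deviate from $x_E(t)=1$ (e.g. by about $0.1$ at $t=1/2$), which is exactly the counterexample requested.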