How to fit ordinary differential equations to empirical data?


For some biological systems, there exist ordinary or partial differential equations that allow one to simulate their activity or behavior over time. Some of these models even produce output that is very difficult to tell apart from real data.

What I haven't been able to figure out is how those equations were found. Suppose I have some empirical time-series data with very little noise in it. How could I "fit" or find ODEs or PDEs that mimic it?

Are there any pen-and-paper methods for this? Or is this something you would do numerically, say by measuring the difference between the output of a given ODE and the empirical data and optimizing the parameters?

Thanks for any help!

Best Answer

Considering a basic introductory example involving ODEs, we can state the problem as follows.

Given the dynamic system

$$ \cases{ \dot x = f(x,t,\theta)\\ y = h(x,t,\theta) } $$

with initial conditions $x(0)=g(\theta)$, where $x = (x_1,\cdots,x_n)$, $y = (y_1,\cdots,y_m)$, and $\theta=(\theta_1,\cdots,\theta_p)$. Here $h(\cdot)$ is the observation function and $\theta$ are the unknown parameters. The measured data are the points $(t_k, \bar y_k),\ k = 1,\cdots, N$.

Find

$$ \theta^* = \arg\min_{\theta}\,\mathcal{E}(\theta) $$

with

$$ \cal{E}(\theta) = \frac{1}{2}\sum_{j=1}^{N}\sum_{i=1}^{m}(\bar y_{i,j}-y_i(t_j,\theta))^2 $$
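The question's numerical idea is exactly this: simulate the model and score it against the data. A minimal sketch in Python, assuming SciPy is available (the function and argument names `error`, `f`, `h`, `g` are illustrative, not prescribed by the answer):

```python
import numpy as np
from scipy.integrate import solve_ivp

def error(theta, f, h, g, t_obs, y_obs):
    """E(theta): half the sum of squared differences between the
    observed data and the model output at the measurement times.

    f(t, x, theta) is the right-hand side dx/dt, h(x, theta) the
    observation function, g(theta) the initial state."""
    sol = solve_ivp(lambda t, x: f(t, x, theta),
                    (t_obs[0], t_obs[-1]), g(theta),
                    t_eval=t_obs, rtol=1e-10, atol=1e-12)
    y_model = np.array([h(x, theta) for x in sol.y.T])
    return 0.5 * np.sum((y_obs - y_model) ** 2)
```

Any optimizer can then drive this scalar function; the rest of the answer is about computing its gradient efficiently.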

Methods based on the steepest-descent direction can be used to find $\theta^*$. They require the error gradient

$$ \nabla_{\theta} \mathcal{E}(\theta) = -\sum_{j=1}^{N}\sum_{i=1}^{m}\left(\bar y_{i,j}-y_i(t_j,\theta)\right)\frac{\partial y_i(t_j,\theta)}{\partial\theta} $$

or, in terms of the observation function,

$$ \nabla_{\theta} \mathcal{E}(\theta) = -\sum_{j=1}^{N}\sum_{i=1}^{m}\left(\bar y_{i,j}-h_i(x(t_j),t_j,\theta)\right)\frac{d y_i(t_j,\theta)}{d\theta} $$

Here the total derivatives (sensitivities)

$$ \frac{d y_i(t_j,\theta)}{d\theta} $$

are calculated as follows.

$$ \begin{array}{ccl} \frac{\partial\dot x}{\partial\theta} & = & \frac{\partial f}{\partial x}\frac{\partial x}{\partial \theta}+\frac{\partial f}{\partial\theta}\\ \frac{\partial y}{\partial \theta} & = & \frac{\partial h}{\partial x}\frac{\partial x}{\partial \theta}+\frac{\partial h}{\partial \theta} \end{array} $$

now calling

$$ s^x_{\theta}=\frac{\partial x}{\partial \theta},\ \ s^y_{\theta}=\frac{\partial y}{\partial \theta} $$

we have

$$ \begin{array}{ccl} \dot s^x_{\theta} & = & \frac{\partial f}{\partial x}s^x_{\theta}+\frac{\partial f}{\partial \theta}\\ s^y_{\theta} & = & \frac{\partial h}{\partial x}s^x_{\theta}+\frac{\partial h}{\partial \theta} \end{array} $$

Since the initial conditions depend on the unknown parameters, we also have

$$ s^x_{\theta}(0)=\frac{\partial g}{\partial \theta} $$
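In practice one augments the state with the sensitivities and integrates everything together. A minimal scalar sketch, assuming SciPy, for the hypothetical model $\dot x=-\theta x$, $y=x$, $x(0)=1$ (so the sensitivity equation is $\dot s=-\theta s - x$ with $s(0)=0$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_error(theta, t_obs, y_obs):
    """nabla_theta E for dx/dt = -theta*x, y = x, x(0) = 1, using the
    sensitivity ODE ds/dt = (df/dx) s + df/dtheta = -theta*s - x."""
    def rhs(t, z):
        x, s = z
        return [-theta * x, -theta * s - x]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [1.0, 0.0],
                    t_eval=t_obs, rtol=1e-11, atol=1e-12)
    x, s = sol.y
    # dE/dtheta = -sum_j (ybar_j - y_j) * dy/dtheta evaluated at t_j
    return -np.sum((y_obs - x) * s)
```

Comparing such a gradient against a finite-difference approximation of $\mathcal{E}$ is a cheap way to validate a hand-derived sensitivity system.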

  • Case study. Consider the dynamic system (this is the FitzHugh–Nagumo model of nerve-membrane excitation)

$$ \begin{array}{rcl} \dot v & = & c(v-\frac{1}{3}v^3+r) \\ \dot r & = & -\frac{1}{c}(v-a+b r) \\ y_1 & = & v \\ y_2 & = & r \end{array} $$

with $v(0)=v_0,\ r(0)=r_0$

Here $x=\{x_1,x_2\}=\{v,r\}$, $y=\{y_1,y_2\}$, and $\theta=\{\theta_1,\dots,\theta_5\}=\{a,b,c,v_0,r_0\}$, with observation functions $h_1 = x_1$, $h_2 = x_2$ and initial conditions $x_1(0)=\theta_4$, $x_2(0)=\theta_5$.

then

$$ \frac{\partial f}{\partial x} = \left( \begin{array}{cc} \theta _3 \left(1-x_1^2\right) & \theta_3 \\ -\frac{1}{\theta_3} & -\frac{\theta_2}{\theta_3} \\ \end{array} \right) $$

$$ \frac{\partial f}{\partial \theta} = \left( \begin{array}{ccccc} 0 & 0 & -\frac{1}{3} x_1^3+x_1+x_2 & 0 & 0 \\ \frac{1}{\theta_3} & -\frac{x_2}{\theta _3} & \frac{-\theta_1+x_1+\theta _2 x_2}{\theta_3^2} & 0 & 0 \\ \end{array} \right) $$

$$ \frac{\partial y}{\partial x} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) $$

$$ \frac{\partial y}{\partial \theta} = \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) $$

The procedure to obtain the error gradient is as follows:

  1. Given a vector of parameters $\theta_k$, integrate $x^k=x(t,\theta_k)$, $y^k=y(t,\theta_k)$, $s^x_{\theta}(t,\theta_k)$ and $s^y_{\theta}(t,\theta_k)$
  2. Calculate $\Delta_{\theta} \cal{E}(\theta_k)$
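Steps (1) and (2) can then be wrapped in a plain steepest-descent loop. A generic sketch (the callable `grad_E`, the step size, and the stopping rule are illustrative choices, not prescribed by the answer):

```python
import numpy as np

def steepest_descent(theta0, grad_E, lr=0.1, iters=1000, tol=1e-10):
    """Iterate theta <- theta - lr * grad_E(theta) until the gradient
    is small or the iteration budget runs out."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        g = grad_E(theta)
        if np.linalg.norm(g) < tol:
            break
        theta = theta - lr * g
    return theta
```

On least-squares objectives, Gauss–Newton or Levenberg–Marquardt steps usually converge much faster than plain gradient descent, but the structure of the loop is the same.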

The following set of DEs solves item (1); the gradient in item (2) is then obtained by summing over the measurement times. Writing $s^i_j = \partial x_i/\partial\theta_j$ for the sensitivities,

$$ \begin{array}{rcl} x_1' & = & \theta_3\left(x_1-\frac{1}{3}x_1^3+x_2\right) \\ x_2' & = & -\dfrac{x_1-\theta_1+\theta_2 x_2}{\theta_3} \\ (s^1_1)' & = & \theta_3\left(1-x_1^2\right)s^1_1+\theta_3 s^2_1 \\ (s^1_2)' & = & \theta_3\left(1-x_1^2\right)s^1_2+\theta_3 s^2_2 \\ (s^1_3)' & = & \theta_3\left(1-x_1^2\right)s^1_3+\theta_3 s^2_3+x_1-\frac{1}{3}x_1^3+x_2 \\ (s^1_4)' & = & \theta_3\left(1-x_1^2\right)s^1_4+\theta_3 s^2_4 \\ (s^1_5)' & = & \theta_3\left(1-x_1^2\right)s^1_5+\theta_3 s^2_5 \\ (s^2_1)' & = & -\dfrac{s^1_1}{\theta_3}-\dfrac{\theta_2 s^2_1}{\theta_3}+\dfrac{1}{\theta_3} \\ (s^2_2)' & = & -\dfrac{s^1_2}{\theta_3}-\dfrac{\theta_2 s^2_2}{\theta_3}-\dfrac{x_2}{\theta_3} \\ (s^2_3)' & = & -\dfrac{s^1_3}{\theta_3}-\dfrac{\theta_2 s^2_3}{\theta_3}+\dfrac{x_1-\theta_1+\theta_2 x_2}{\theta_3^2} \\ (s^2_4)' & = & -\dfrac{s^1_4}{\theta_3}-\dfrac{\theta_2 s^2_4}{\theta_3} \\ (s^2_5)' & = & -\dfrac{s^1_5}{\theta_3}-\dfrac{\theta_2 s^2_5}{\theta_3} \end{array} $$

with initial conditions

$$ x_1(0)=\theta_4,\quad x_2(0)=\theta_5,\quad s^1_4(0)=1,\quad s^2_5(0)=1,\quad \text{all other } s^i_j(0)=0. $$
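To make the case study concrete, here is a sketch of the whole fit in Python, assuming SciPy, with synthetic data generated from known parameters. For brevity, `least_squares` approximates the Jacobian by finite differences instead of integrating the hand-derived sensitivity system; the parameter values and the initial guess are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fhn(t, x, a, b, c):
    """Right-hand side of the case-study system; x = (v, r)."""
    v, r = x
    return [c * (v - v**3 / 3 + r), -(v - a + b * r) / c]

def residuals(theta, t_obs, y_obs):
    """Stacked residuals ybar - y(t, theta); theta = (a, b, c, v0, r0)."""
    a, b, c, v0, r0 = theta
    sol = solve_ivp(fhn, (t_obs[0], t_obs[-1]), [v0, r0], args=(a, b, c),
                    t_eval=t_obs, rtol=1e-10, atol=1e-10)
    return (y_obs - sol.y).ravel()

# synthetic noise-free "measurements" from known parameters
theta_true = [0.7, 0.8, 3.0, -1.0, 1.0]
t = np.linspace(0.0, 10.0, 100)
y_obs = solve_ivp(fhn, (0.0, 10.0), theta_true[3:],
                  args=tuple(theta_true[:3]),
                  t_eval=t, rtol=1e-10, atol=1e-10).y

# fit from a perturbed initial guess
fit = least_squares(residuals, x0=[0.65, 0.75, 2.9, -0.95, 0.95],
                    args=(t, y_obs))
```

With noisy data or a poor initial guess, the least-squares landscape of an oscillatory system has many local minima, so multiple starting points or a multiple-shooting formulation are often needed in practice.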

There are many variants of this smoothing process. Instead of the least-squares error, other statistical criteria can be used, such as maximum-likelihood estimation.

Smoothing problems of this kind, in which the parameters of differential equations must be determined from data, are also known as inverse problems, and the inverse-problems literature is a good next step.