Is there a method for predicting/recovering/extracting the parameters in data generated from a stochastic differential equation?


Is there a method for predicting the parameters in data generated from a stochastic differential equation?

For example, let's say we have a damped spring driven by a stochastic force.

The SDE describing this is as follows:

$$ \ddot{x}(t) + \Gamma_0 \dot{x}(t) + \omega_0^2 x(t) - C\dfrac{dW(t)}{dt} = 0 $$

where $W(t)$ is a Wiener process.

I can turn this into a system of two first-order equations, where $v = \dfrac{dx}{dt}$:

$$ dx = v(t) dt \\ dv = [-\Gamma_0 v(t) -\omega_0^2 x(t)]dt + C dW(t)$$

and apply Euler–Maruyama or a stochastic Runge–Kutta scheme to solve it, giving arrays of values of $x$ and $v$ over an array of times.
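For concreteness, here is a minimal Euler–Maruyama sketch of the system above in Python (NumPy); the default parameter values and function name are illustrative, not canonical:

```python
import numpy as np

def euler_maruyama(gamma0, omega0, C, x0=1.0, v0=0.0,
                   dt=1e-3, n_steps=100_000, seed=0):
    """Simulate dx = v dt, dv = (-gamma0*v - omega0**2*x) dt + C dW
    with the Euler-Maruyama scheme; returns times and sample paths."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    x[0], v[0] = x0, v0
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)  # Wiener increments
    for i in range(n_steps):
        x[i + 1] = x[i] + v[i] * dt
        v[i + 1] = v[i] + (-gamma0 * v[i] - omega0**2 * x[i]) * dt + C * dW[i]
    t = np.arange(n_steps + 1) * dt
    return t, x, v
```

Note that $dt$ must be small relative to both $1/\omega_0$ and $1/\Gamma_0$ for the explicit scheme to remain stable.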

Is there a way of recovering / extracting the values of $\Gamma_0$, $\omega_0$ and $C$ (which are all constants) from the arrays of $x$ and $v$ values over time?

$\omega_0$ can be accurately estimated simply by locating the peak in the FFT of the signal, or by a number of other frequency-domain techniques, if it is useful to reduce the problem to estimating just $\Gamma_0$ and $C$.
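The FFT-peak approach can be sketched as follows; the true parameter values here are arbitrary illustrations, and the recovered $\omega_0$ is accurate only when the damping is weak enough that the resonance peak is sharp:

```python
import numpy as np

# Simulate the SDE (Euler-Maruyama), then estimate omega_0 from the
# peak of the periodogram of x(t). Parameter values are illustrative.
rng = np.random.default_rng(1)
gamma0, omega0, C = 0.5, 2 * np.pi * 5.0, 2.0   # true values
dt, n = 1e-3, 200_000
x = np.zeros(n)
v = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] + v[i] * dt
    v[i + 1] = v[i] + (-gamma0 * v[i] - omega0**2 * x[i]) * dt \
        + C * rng.normal(0.0, np.sqrt(dt))

freqs = np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(x))**2
f_peak = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
omega0_hat = 2 * np.pi * f_peak          # close to omega0 for weak damping
```

A smoothed spectral estimate (e.g. Welch averaging) would reduce the variance of the peak location compared with the raw periodogram used here.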

Accepted answer:

As you have already noted, this is equivalent to the first-order system
\begin{align*}
\begin{bmatrix} \dot{x} \\ \dot{v} \end{bmatrix}
&= \underbrace{\begin{bmatrix} 0 & 1 \\ -\omega_0^2 & -\Gamma_0 \end{bmatrix}}_{A}
\begin{bmatrix} x \\ v \end{bmatrix}
+ \begin{bmatrix} 0 \\ u(t) \end{bmatrix},
\end{align*}
where $u(t)$ is our $\delta$-correlated white-noise process; your choice of normalisation may differ, but let's say
\begin{align*}
\langle u(t) \rangle = 0, \qquad \langle u(t) u(t') \rangle = C\,\delta(t-t').
\end{align*}

Since the SDE is linear, the solution is an Ornstein-Uhlenbeck process and we can solve for the transition probabilities explicitly. Let $\mathbf{x}_i = [x_i, v_i]$ and collect the parameters into $\theta = [\omega_0^2, \Gamma_0, C]$. By the Markov property the joint probability factorises as
\begin{align*}
p(\mathbf{x}_1, \ldots, \mathbf{x}_n \mid \mathbf{x}_0, \theta) = \prod_{i=1}^n p(\mathbf{x}_i \mid \mathbf{x}_{i-1}, \theta),
\end{align*}
where each transition density is Gaussian,
\begin{align*}
p(\mathbf{x}_i \mid \mathbf{x}_{i-1}, \theta) = \mathcal{N}\left( \boldsymbol{\mu}_i(\theta), \Sigma_i(\theta) \right).
\end{align*}

The mean $\boldsymbol{\mu}_i$ is given by
\begin{align*}
\begin{bmatrix} \langle x \rangle_i \\ \langle v \rangle_i \end{bmatrix} = e^{A(t_i - t_{i-1})}\, \mathbf{x}_{i-1},
\end{align*}
and each of the covariance matrices $\Sigma_i$ by
\begin{align*}
\Sigma_i = \int_{t_{i-1}}^{t_i}\int_{t_{i-1}}^{t_i} e^{A(t_i - s)} \begin{bmatrix} 0 & 0 \\ 0 & C\,\delta(s - s')\end{bmatrix} e^{A^{T}(t_i - s')} \, ds \, ds'.
\end{align*}
The $\delta$-function collapses one of the integrals, and using the spectral decomposition of $A$, or otherwise, you can write out explicit closed forms for each of the components of the conditional mean and covariance.
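In practice the transition mean and covariance can also be computed numerically from a single matrix exponential via Van Loan's block-matrix identity, rather than from the spectral decomposition; a minimal Python sketch (assuming NumPy/SciPy, with $A$ the drift matrix above and $C$ the noise intensity in the answer's normalisation):

```python
import numpy as np
from scipy.linalg import expm

def discretize(gamma0, omega0_sq, C, dt):
    """Exact one-step transition of the linear SDE (an OU process):
    x_i | x_{i-1} ~ N(Phi @ x_{i-1}, Sigma), with Phi = exp(A*dt) and
    Sigma = integral of exp(A s) Q exp(A^T s) ds over [0, dt],
    both read off from one block-matrix exponential (Van Loan)."""
    A = np.array([[0.0, 1.0], [-omega0_sq, -gamma0]])
    Q = np.array([[0.0, 0.0], [0.0, C]])    # diffusion matrix
    M = np.zeros((4, 4))
    M[:2, :2] = A
    M[:2, 2:] = Q
    M[2:, 2:] = -A.T
    E = expm(M * dt)
    Phi = E[:2, :2]               # = exp(A dt)
    Sigma = E[:2, 2:] @ Phi.T     # = the covariance integral above
    return Phi, Sigma
```

For small $\Delta t$ this reproduces the leading-order behaviour $\Phi \approx I + A\,\Delta t$ and $\Sigma \approx Q\,\Delta t$, which is a useful sanity check.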

You can now apply maximum likelihood, Bayesian inference, etc. to this likelihood to obtain point estimates of the parameters, or estimates of their full distribution, and so on.