A random state vector, $\mathbf{x}\left(t\right)$, evolves over time according to the following equation: \begin{equation} \dot{\mathbf{x}}\left(t\right) = \mathbf{f}\left(\mathbf{x}\left(t\right)\right) \end{equation} The state vector was initialized at time $t = t_0 = 0$ with value $\mathbf{x}\left(t_0\right)$. However, this initial value is not known. The state vector was first observed at time $t = t_1$, and its value was measured as $\boldsymbol\mu\left(t_1\right)$. Let us ignore the measurement sensor's noise characteristics and assume that the only uncertainty in $\mathbf{x}\left(t_1\right)$ at time $t_1$ is due to its uncertainty at time $t = t_0$.
I want to get an estimate of the uncertainty (i.e., the covariance matrix) at time $t_1$. Since we do not have any information about the values of $\mathbf{x}\left(t\right)$ for $t < t_1$, we need to compute the covariance at $t = t_1$ by using information from time $t_1$ onward.
Let us assume that the evolution of the state vector is measured in a fixed frame, called the world frame or w-frame. Here is how I am trying to estimate the uncertainty at time $t = t_1$:
- Initialize a covariance $\boldsymbol\Sigma\left(t_1\right)$ at time $t = t_1$, aligned with the w-frame.
- Draw $N$ Monte Carlo (MC) samples with mean $\boldsymbol\mu\left(t_1\right)$ and covariance $\boldsymbol\Sigma\left(t_1\right)$.
- Propagate these MC points from time $t_1$ to $t_1+\Delta t_1$, where $\Delta t_1 \rightarrow 0$.
- Compute the covariance, $\hat{\boldsymbol\Sigma}$, of the MC points at time $t_1+\Delta t_1$.
- $\hat{\boldsymbol\Sigma}$ is then taken as an approximation of the uncertainty in the state vector $\mathbf{x}\left(t_1\right)$.
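For concreteness, here is a minimal sketch of the steps above in NumPy. The dynamics $\mathbf{f}$, the measured mean $\boldsymbol\mu\left(t_1\right)$, and the initial covariance $\boldsymbol\Sigma\left(t_1\right)$ are all placeholders I made up for illustration (a 2D linear field, not the actual system), and the propagation is a single explicit Euler step standing in for $\Delta t_1 \rightarrow 0$:

```python
import numpy as np

# Hypothetical dynamics f(x) standing in for the unknown \dot{x} = f(x);
# here a simple 2D linear field. Accepts a (2,) vector or a (2, N) batch.
def f(x):
    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    return A @ x

rng = np.random.default_rng(0)

# Assumed measured mean mu(t1) and an initial covariance guess Sigma(t1)
# (both hypothetical values, chosen only to make the sketch runnable).
mu_t1 = np.array([1.0, 0.0])
sigma_t1 = 0.01 * np.eye(2)

N = 10_000    # number of MC samples
dt = 1e-4     # small step standing in for Delta t1 -> 0

# Step 2: draw N MC samples with mean mu(t1) and covariance Sigma(t1)
samples = rng.multivariate_normal(mu_t1, sigma_t1, size=N)   # shape (N, 2)

# Step 3: propagate each sample one Euler step from t1 to t1 + dt
propagated = samples + dt * f(samples.T).T

# Step 4: sample covariance of the propagated points
sigma_hat = np.cov(propagated, rowvar=False)
```

Note that to first order the one-step map is $\mathbf{x} \mapsto \mathbf{x} + \Delta t_1\,\mathbf{f}\left(\mathbf{x}\right)$, so as $\Delta t_1 \rightarrow 0$ the computed $\hat{\boldsymbol\Sigma}$ converges (up to sampling error) to whatever $\boldsymbol\Sigma\left(t_1\right)$ was used to initialize the samples.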
Is this the correct way to compute covariance at time $t_1$? Is there any better way to do it?