I have a trajectory made of several state vectors $\mathbf{x}_n$ (position and velocity). One step forward in time is given by $$\mathbf{x}_{n+1} = M_n\mathbf{x}_n + q_n$$ where $M_n$ is a matrix and $q_n$ is the model uncertainty, a zero-mean noise ($E(q_n) = 0$).
In addition, the trajectory is tracked, so that one gets an observation $\mathbf{y}_n$ at each time, related to the true vector by $$\mathbf{y}_n = H\mathbf{x}_n + r_n = \mathbf{x}_n + r_n$$ where I assume $H$ is the identity matrix, that is to say the measurement gives the position and velocity directly, with some zero-mean noise $r_n$ ($E(r_n) = 0$).
In the sequel, $\mathbf{x}_n^b$ and $\mathbf{x}_n^a$ denote the background and analysis estimates of the true vector $\mathbf{x}_n$: $$\mathbf{x}_n^b = M_n \mathbf{x}_{n-1}^a$$
The background error covariance is given by $$P_{n}^b = E[(\mathbf{x}_n^b - \mathbf{x}_n)(\mathbf{x}_n^b-\mathbf{x}_n)^T] = M_nP_{n-1}^aM_n^T + Q_n$$ where $P_{n-1}^a$ is the analysis error covariance (defined in the same way) at the previous time and $Q_n$ is the covariance of the vector $q_n$.
According to optimal interpolation (the Kalman filter), the new analyzed state is given by $$\mathbf{x}_n^a = \mathbf{x}_n^b + K_n (\mathbf{y}_n - \mathbf{x}_n^b)$$ where $K_n$ is the gain matrix $K_n = P_n^b(P_n^b + R_n)^{-1}$, in which I used $H=I$.
$R_n$ is the observation error covariance ($R_n = E[r_n r_n^T]$).
Finally, the new analysis error covariance is $P_n^a = P_n^b - K_nP_n^b$ (again with $H=I$).
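For concreteness, the predict/analysis cycle above can be sketched in NumPy (a minimal sketch; the constant-velocity matrices in the example are purely illustrative, not part of the question):

```python
import numpy as np

def kalman_step(x_a, P_a, M, Q, y, R):
    """One background/analysis cycle with H = I."""
    # Background step: x^b_n = M_n x^a_{n-1}, P^b_n = M_n P^a_{n-1} M_n^T + Q_n
    x_b = M @ x_a
    P_b = M @ P_a @ M.T + Q
    # Gain with H = I: K_n = P^b_n (P^b_n + R_n)^{-1}
    K = P_b @ np.linalg.inv(P_b + R)
    # Analysis step: x^a_n = x^b_n + K_n (y_n - x^b_n), P^a_n = P^b_n - K_n P^b_n
    x_new = x_b + K @ (y - x_b)
    P_new = P_b - K @ P_b
    return x_new, P_new

# Illustrative 1-D position/velocity model with time step dt = 0.1
dt = 0.1
M = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
R = 0.25 * np.eye(2)
x_a, P_a = np.array([0.0, 1.0]), np.eye(2)
x_a, P_a = kalman_step(x_a, P_a, M, Q, y=np.array([0.12, 0.95]), R=R)
```

As expected, the analysis covariance is smaller (in trace) than the background covariance, since the observation adds information.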
My issue is that I have to interpolate the trajectory between two times.
Say the trajectory is observed at times $n-1$ and $n+1$: how can I reconstruct an analysis at time $n$? I would like to compute $\mathbf{x}_n^a$ and $P_n^a$ from the information $\mathbf{y}_{n-1}$ and $\mathbf{y}_{n+1}$.
My first attempt is to consider the following background at $n+1$: $$\mathbf{x}_{n+1}^b = M_{n+1}M_n\mathbf{x}_{n-1}^a\triangleq M_n'\mathbf{x}_{n-1}^a$$ and then to replace $M_n$ by $M_n'$ in the previous formulas, so as to get an analysis at $n+1$ and perform a simple linear interpolation between $n-1$ and $n+1$.
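In code, this first attempt might look as follows (a hypothetical helper; note that propagating the covariance over the double step picks up the model noise of both steps, which simply substituting $M_n'$ glosses over):

```python
import numpy as np

def first_attempt(x_a_prev, P_a_prev, M_n, M_np1, Q_n, Q_np1, y_np1, R):
    """Skip time n: analyze at n+1 with M' = M_{n+1} M_n, then
    interpolate linearly between the analyses at n-1 and n+1."""
    M_prime = M_np1 @ M_n
    x_b = M_prime @ x_a_prev
    # Two-step covariance propagation (both noise terms contribute)
    P_b = M_prime @ P_a_prev @ M_prime.T + M_np1 @ Q_n @ M_np1.T + Q_np1
    K = P_b @ np.linalg.inv(P_b + R)          # H = I
    x_a_np1 = x_b + K @ (y_np1 - x_b)
    x_n_interp = 0.5 * (x_a_prev + x_a_np1)   # midpoint interpolation
    return x_n_interp, x_a_np1

# Illustrative constant-velocity example
M = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
R = 0.25 * np.eye(2)
x_prev, P_prev = np.array([0.0, 1.0]), np.eye(2)
x_mid, x_next = first_attempt(x_prev, P_prev, M, M, Q, Q,
                              np.array([0.2, 1.0]), R)
```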
My second attempt is to consider the observation $\mathbf{y}_{n+1}$ as if it were obtained at time $n$, and then to compare it with $H\mathbf{x}_n^b$, where the observation operator $H$ is in fact the model operator, $H = M_{n+1}$.
Based on this, one can derive an expression for the analysis at time $n$:
$$\mathbf{x}_n^a = \mathbf{x}_n^b +K_n' (\mathbf{y}_{n+1} -M_{n+1}\mathbf{x}_n^b)$$
with the gain $K_n' = P_n^bM_{n+1}^T(M_{n+1}P_n^bM_{n+1}^T + R_{n+1})^{-1}$, which expands to $$K_n' = (M_nP_{n-1}^aM_n^T + Q_n)M_{n+1}^T(M_n'P_{n-1}^a(M_n')^T + M_{n+1}Q_nM_{n+1}^T + R_{n+1})^{-1}$$
and the analysis error covariance $P_n^a = P_n^b - K_n'M_{n+1}P_n^b$.
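This second attempt translates directly into a sketch like the following (hypothetical names; the formulas above written out in NumPy):

```python
import numpy as np

def second_attempt(x_a_prev, P_a_prev, M_n, M_np1, Q_n, y_np1, R):
    """Analysis at time n using y_{n+1}, with the model operator
    M_{n+1} playing the role of the observation operator H."""
    x_b = M_n @ x_a_prev                        # background at n
    P_b = M_n @ P_a_prev @ M_n.T + Q_n          # P^b_n
    S = M_np1 @ P_b @ M_np1.T + R               # innovation covariance
    K = P_b @ M_np1.T @ np.linalg.inv(S)        # gain K'_n
    x_a = x_b + K @ (y_np1 - M_np1 @ x_b)       # analysis at n
    P_a = P_b - K @ M_np1 @ P_b                 # P^a_n
    return x_a, P_a

# Illustrative constant-velocity example
M = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
R = 0.25 * np.eye(2)
x_prev, P_prev = np.array([0.0, 1.0]), np.eye(2)
x_a, P_a = second_attempt(x_prev, P_prev, M, M, Q, np.array([0.2, 1.0]), R)
```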
Which approach seems best to you? If neither is right, what would you suggest?
Thanks for your attention.
Regards.
To keep my answer valid for the more general case as well, I will use
\begin{align} x_{n+1} &= M_n\,x_n + q_n \\ y_n &= H_n\,x_n + r_n \end{align}
with $E(q_n) = 0$, $E(r_n) = 0$, $E(q_n\,q_n^\top) = Q_n$, $E(r_n\,r_n^\top) = R_n$, $E(q_n\,r_m^\top) = 0$, $E(r_n\,r_m^\top) = 0\ \forall\ m \neq n$ and $E(q_n\,q_m^\top) = 0\ \forall\ m \neq n$. For the estimate of the state I will use the notation $\hat{x}_{n|m}$ which means the estimate of $x_n$ given the information from $y_m$. Since $y_{n-1}$ is known I will assume that $\hat{x}_{n-1|n-1}$ has been calculated together with its related covariance $P_{n-1|n-1} = E(e_{n-1|n-1}\,e_{n-1|n-1}^\top)$, with $e_{n-1|n-1} = x_{n-1} - \hat{x}_{n-1|n-1}$. Now since $y_n$ is not known one can initially only perform prediction steps
\begin{align} \hat{x}_{n|n-1} &= M_{n-1}\,\hat{x}_{n-1|n-1} \\ \hat{x}_{n+1|n-1} &= M_{n}\,\hat{x}_{n|n-1} \\ &= M_{n}\,M_{n-1}\,\hat{x}_{n-1|n-1} \end{align}
I think the correct way to improve the predicted value of $x_n$, once $y_{n+1}$ is obtained, is to use
\begin{align} \hat{x}_{n|n+1} &= \hat{x}_{n|n-1} + L \left(y_{n+1} - H_{n+1}\,\hat{x}_{n+1|n-1}\right) \\ &= M_{n-1}\,\hat{x}_{n-1|n-1} + L \left((H_{n+1}\,x_{n+1} + r_{n+1}) - H_{n+1}\,(M_{n}\,M_{n-1}\,\hat{x}_{n-1|n-1})\right) \\ &= \left(M_{n-1} - L\,H_{n+1}\,M_{n}\,M_{n-1}\right)\hat{x}_{n-1|n-1} + L \left(H_{n+1}\,(M_n\,x_n + q_n) + r_{n+1}\right) \\ &= \left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\,\hat{x}_{n-1|n-1} + L \left(H_{n+1}(M_n\,(M_{n-1}\,x_{n-1} + q_{n-1}) + q_n) + r_{n+1}\right) \end{align}
The error defined as $e_{n|n+1} = x_n - \hat{x}_{n|n+1}$ then becomes
\begin{align} e_{n|n+1} &= M_{n-1}\,x_{n-1} + q_{n-1} - \left[\left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\,\hat{x}_{n-1|n-1} + L \left(H_{n+1}(M_n\,(M_{n-1}\,x_{n-1} + q_{n-1}) + q_n) + r_{n+1}\right)\right] \\ &= \left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\left(x_{n-1} - \hat{x}_{n-1|n-1}\right) + q_{n-1} - L\,H_{n+1}\,M_n\,q_{n-1} - L\,H_{n+1}\,q_n - L\,r_{n+1} \\ &= \left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\,e_{n-1|n-1} + \left(I - L\,H_{n+1}\,M_n\right)q_{n-1} - L\,H_{n+1}\,q_n - L\,r_{n+1} \end{align}
Since all the cross terms are assumed to have zero expectation, the expected value of $e_{n|n+1}\,e_{n|n+1}^\top$ can be simplified to
\begin{align} P_{n|n+1} &= E(e_{n|n+1}\,e_{n|n+1}^\top) \\ &= \left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\,P_{n-1|n-1}\,M_{n-1}^\top\left(I - L\,H_{n+1}\,M_{n}\right)^\top + L\,R_{n+1}\,L^\top + L\,H_{n+1}\,Q_n\,H_{n+1}^\top L^\top + \left(I - L\,H_{n+1}\,M_n\right)Q_{n-1}\left(I - L\,H_{n+1}\,M_n\right)^\top \end{align}
An optimal $L$ can now be chosen by setting the partial derivative of $P_{n|n+1}$ with respect to $L$ to zero, which gives
$$ \frac{\partial P_{n|n+1}}{\partial L} = -2\left(I - L\,H_{n+1}\,M_{n}\right)M_{n-1}\,P_{n-1|n-1}\,M_{n-1}^\top M_{n}^\top H_{n+1}^\top + 2\,L\,R_{n+1} + 2\,L\,H_{n+1}\,Q_n\,H_{n+1}^\top - 2\left(I - L\,H_{n+1}\,M_n\right)Q_{n-1}\,M_n^\top H_{n+1}^\top = 0 $$
Solving this for the optimal $L$ gives
$$ L = \bar{P}\,M_n^\top H_{n+1}^\top \left(R_{n+1} + H_{n+1}\left(Q_n + M_{n}\,\bar{P}\,M_{n}^\top\right)H_{n+1}^\top\right)^{-1}, $$
with
$$ \bar{P} = M_{n-1}\,P_{n-1|n-1}\,M_{n-1}^\top + Q_{n-1}. $$
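Putting these formulas together, here is a NumPy sketch (all names illustrative) that forms $\bar{P}$, the optimal $L$, and then $\hat{x}_{n|n+1}$ and $P_{n|n+1}$:

```python
import numpy as np

def estimate_skipped_state(x_prev, P_prev, M_nm1, M_n, H_np1,
                           Q_nm1, Q_n, R_np1, y_np1):
    """Estimate x_n from the filtered state at n-1 and the
    observation y_{n+1}, following the derivation above."""
    # Prediction to time n and the combined covariance Pbar
    x_n_pred = M_nm1 @ x_prev
    P_bar = M_nm1 @ P_prev @ M_nm1.T + Q_nm1
    # Optimal gain L
    S = R_np1 + H_np1 @ (Q_n + M_n @ P_bar @ M_n.T) @ H_np1.T
    L = P_bar @ M_n.T @ H_np1.T @ np.linalg.inv(S)
    # Update with the innovation from y_{n+1}
    innovation = y_np1 - H_np1 @ (M_n @ x_n_pred)
    x_est = x_n_pred + L @ innovation
    # Error covariance P_{n|n+1}, grouping Q_{n-1} into Pbar
    A = np.eye(x_prev.size) - L @ H_np1 @ M_n
    P_est = A @ P_bar @ A.T + L @ (R_np1 + H_np1 @ Q_n @ H_np1.T) @ L.T
    return x_est, P_est

# Illustrative constant-velocity example with H = I
M = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.eye(2)
Q = 0.01 * np.eye(2)
R = 0.25 * np.eye(2)
x_prev, P_prev = np.array([0.0, 1.0]), np.eye(2)
x_est, P_est = estimate_skipped_state(x_prev, P_prev, M, M, H, Q, Q, R,
                                      np.array([0.2, 1.0]))
```

With the optimal $L$, $P_{n|n+1}$ cannot exceed the pure prediction covariance $\bar{P}$, since $L = 0$ recovers $\bar{P}$ and $L$ is chosen to minimize the trace.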