Innovation error covariance decreasing but state error covariance increasing


I am trying to implement a Kalman filter to estimate some random variables. For the system I am using, the innovation error is zero at all times and the innovation error covariance matrix shrinks over time, but the state error covariance matrix keeps growing. This seems strange to me, and I cannot figure out what I am doing wrong.

Does anyone know if it is possible to have decreasing innovation error covariance but increasing state error covariance for all states?

Here are the equations for the prediction and measurement update. $a = 0.01$ and $b = 0.1$; $R_1(t)$ and $R_2(t)$ are rotation matrices that are functions of time $t$; $C_1, C_2 \in \mathbb{R}^{3\times 1}$; and $s$ is a scalar variable:

Prediction: $X_{k+1} = F_kX_k + \nu_k, \qquad \nu_k\sim\mathcal{N}\left(0, Q\right)$

$F_k = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & aR_1[0][0] & aR_1[0][1] & aR_1[0][2] & 0 & -aC_1[2] \\
0 & 0 & 0 & 1 & 0 & aR_1[1][0] & aR_1[1][1] & aR_1[1][2] & aC_1[2] & 0 \\
0 & 0 & 0 & 0 & 1 & aR_1[2][0] & aR_1[2][1] & aR_1[2][2] & -aC_1[1] & aC_1[0] \\
0 & 0 & 0 & 0 & 0 & R_2[0][0] & R_2[0][1] & R_2[0][2] & bR_1[1][0] & -bR_1[0][0] \\
0 & 0 & 0 & 0 & 0 & R_2[1][0] & R_2[1][1] & R_2[1][2] & bR_1[1][1] & -bR_1[0][1] \\
0 & 0 & 0 & 0 & 0 & R_2[2][0] & R_2[2][1] & R_2[2][2] & bR_1[1][2] & -bR_1[0][2] \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}$
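For reference, this is roughly how I assemble $F_k$ in code (a NumPy sketch; `R1`, `R2` are the 3×3 rotation matrices at time $t$, `C1` is a length-3 array, and the function name `make_F` is just for illustration):

```python
import numpy as np

def make_F(a, b, R1, R2, C1):
    """Assemble the 10x10 transition matrix with the block layout above.

    R1, R2: 3x3 rotation matrices; C1: length-3 vector; a, b: scalars.
    """
    F = np.eye(10)
    # Rows 2-4: identity diagonal plus a*R1 coupling to states 5-7
    # and the C1 cross-product terms in columns 8-9.
    F[2:5, 5:8] = a * R1
    F[2, 9] = -a * C1[2]
    F[3, 8] = a * C1[2]
    F[4, 8], F[4, 9] = -a * C1[1], a * C1[0]
    # Rows 5-7: R2 replaces the identity block, b*R1 terms in columns 8-9.
    F[5:8, 5:8] = R2
    F[5, 8], F[5, 9] = b * R1[1, 0], -b * R1[0, 0]
    F[6, 8], F[6, 9] = b * R1[1, 1], -b * R1[0, 1]
    F[7, 8], F[7, 9] = b * R1[1, 2], -b * R1[0, 2]
    return F
```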

Observations ($z \in \mathbb{R}^{3 \times 1}$):

$z = \begin{bmatrix}
sC_2[0] + sX[5] + C_2[0]X[0] + X[5]X[0] \\
sC_2[1] + sX[6] + C_2[1]X[0] + X[6]X[0] \\
sC_2[2] + sX[7] + C_2[2]X[0] + X[7]X[0]
\end{bmatrix}$

Update: $z_k = H_kX_k + \eta_k, \qquad \eta_k\sim\mathcal{N}\left(0, R\right)$

$H_k = \begin{bmatrix}
C_2[0]+X[5] & 0 & 0 & 0 & 0 & s+X[0] & 0 & 0 & 0 & 0 \\
C_2[1]+X[6] & 0 & 0 & 0 & 0 & 0 & s+X[0] & 0 & 0 & 0 \\
C_2[2]+X[7] & 0 & 0 & 0 & 0 & 0 & 0 & s+X[0] & 0 & 0
\end{bmatrix}$
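Since the observation model is bilinear in $X[0]$ and $X[5..7]$, $H_k$ is the Jacobian of $z$ and the update is an EKF measurement update. A NumPy sketch of what I am doing (the names `make_H` and `ekf_update` are just for illustration):

```python
import numpy as np

def make_H(X, C2, s):
    """Jacobian of the observation model above w.r.t. the state X.

    z[i] = s*C2[i] + s*X[5+i] + C2[i]*X[0] + X[5+i]*X[0],  i = 0..2,
    so dz[i]/dX[0] = C2[i] + X[5+i] and dz[i]/dX[5+i] = s + X[0].
    """
    H = np.zeros((3, 10))
    for i in range(3):
        H[i, 0] = C2[i] + X[5 + i]
        H[i, 5 + i] = s + X[0]
    return H

def ekf_update(X, P, z, C2, s, R):
    """Standard EKF measurement update with the Jacobian H."""
    H = make_H(X, C2, s)
    # Predicted measurement from the nonlinear observation model.
    z_pred = s * C2 + s * X[5:8] + C2 * X[0] + X[5:8] * X[0]
    innov = z - z_pred                 # innovation (residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    X_new = X + K @ innov
    P_new = (np.eye(10) - K @ H) @ P
    return X_new, P_new, S
```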

$X[0]$ may be unobservable, but I am interested in the variance of $X[5]$, which diverges over time. The values of $R$ and $Q$ that I am using are: $Q = \text{diag}\left(\left[10^{-4}, 10^{-4}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-6}, 10^{-8}, 10^{-8}\right]\right)$

$R = \text{diag}\left(\left[10^{-6}, 10^{-6}, 10^{-6}\right]\right)$
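To make the question concrete, here is a minimal two-state sketch (plain NumPy, not my actual system) with one measured state and one completely unobserved state that has process noise:

```python
import numpy as np

# Toy model: state 0 is measured directly, state 1 is never measured.
F = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-6])
R = np.array([[1e-6]])

P = np.eye(2)                          # large initial uncertainty
S_hist, P11_hist = [], []
for _ in range(200):
    P = F @ P @ F.T + Q                # prediction
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    P = (np.eye(2) - K @ H) @ P        # measurement update
    S_hist.append(S[0, 0])
    P11_hist.append(P[1, 1])

print(S_hist[0] > S_hist[-1])          # innovation covariance shrinks
print(P11_hist[-1] > P11_hist[0])      # unmeasured-state variance grows
```

In this toy case the innovation covariance settles to a small steady-state value, while the variance of the unmeasured state grows by $Q[1][1]$ at every prediction step because the update never corrects it. Is this the same mechanism that can make the variance of $X[5]$ diverge in my system?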