What does the Kalman filter generally converge to? And why?

So, I'm guessing whoever shows up here knows what the Kalman filter is. It's quite an extensive model to type out, so here is an explanation from MIT (see ch. 11.5).

We have a feeling that it converges to the observations, but we don't know how to show this. Can anyone help us out?

Best answer:

The Kalman filter is a state observer, but an optimal one that minimizes the variance of the estimation error. A state observer estimates the states using the measured outputs and an internal model of the system. Generally, this is done by feeding back the error between the actual system output and the predicted system output through a gain, the so-called Luenberger observer. The equations look like this:

$$\begin{align} x_{k+1} &= A x_k + B u_k, \qquad z_k = H x_k \\ \hat{x}_{k+1} &= A \hat{x}_k + B u_k + L (z_k - H \hat{x}_k) \end{align}$$

which makes the dynamics of the estimation error $e_k := x_k - \hat{x}_k$

$$ e_{k+1} = (A-LH) e_k $$
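To see this in one line, use the measurement model $z_k = H x_k$ and subtract the observer recursion from the state recursion (the $B u_k$ terms cancel):

$$ e_{k+1} = x_{k+1} - \hat{x}_{k+1} = A x_k - A \hat{x}_k - L (H x_k - H \hat{x}_k) = (A - LH) e_k $$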

So, whenever the matrix $A-LH$ is stable (in discrete time: all its eigenvalues strictly inside the unit circle), the error goes to $0$, hence successful estimation.
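A minimal numerical sketch of this, with an assumed 2-state system and a hand-picked gain $L$ (these particular matrices are illustrative, not from the question): simulate the Luenberger observer and watch the error $e_k = x_k - \hat{x}_k$ die out.

```python
# Luenberger observer demo: the error decays because A - L H is stable.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])    # assumed double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])    # only the first state is measured
L = np.array([[0.5], [0.5]])  # hand-picked so that A - L H is stable

# all eigenvalues of A - L H must lie strictly inside the unit circle
assert np.all(np.abs(np.linalg.eigvals(A - L @ H)) < 1.0)

x = np.array([[1.0], [1.0]])  # true state (unknown to the observer)
x_hat = np.zeros((2, 1))      # observer starts from zero

for k in range(200):
    u = np.array([[np.sin(0.1 * k)]])        # arbitrary known input
    z = H @ x                                # measurement z_k = H x_k
    x_hat = A @ x_hat + B @ u + L @ (z - H @ x_hat)
    x = A @ x + B @ u

print(np.linalg.norm(x - x_hat))  # error norm, should be near zero
```

Note that the input $u_k$ has no effect on the error at all, exactly as the error dynamics $e_{k+1} = (A-LH)e_k$ predict.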

In the case of the Kalman filter, the error is a random process and its variance is minimized, i.e. the error is concentrated around its zero mean as tightly as possible, while the error dynamics are simultaneously stabilized. The idea is the same as above, but the gain is found by solving a (discrete-time) Riccati equation for the optimal value.
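As a sketch, the steady-state gain can be found by simply iterating the discrete Riccati recursion to a fixed point (the noise covariances `Q` and `R` below are assumed for illustration); one can then check that the resulting error dynamics are indeed stable.

```python
# Fixed-point iteration of the discrete Riccati recursion to get the
# steady-state Kalman gain, then a stability check on A - L H with L = A K.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)   # process-noise covariance (assumed)
R = np.array([[0.1]])  # measurement-noise covariance (assumed)

P = np.eye(2)          # initial (predicted) error covariance
for _ in range(1000):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    P = A @ (P - K @ H @ P) @ A.T + Q             # update + predict

L = A @ K  # in observer (predictor) form, the error matrix is A - L H
rho = max(abs(np.linalg.eigvals(A - L @ H)))
print(rho)  # spectral radius < 1: the optimal gain also stabilizes the error
```

This is the point of the answer: the Riccati solution delivers the minimum-variance gain, and that gain happens to place the eigenvalues of $A - LH$ inside the unit circle.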

To answer your question, you need to write down the error dynamics and show that their system matrix is stable with the given Kalman gain. Then you can say that the estimated states converge to the actual system states (note: to the true states, not to the noisy observations themselves).