Is this "predictor" equivalent to the typical Kalman filter?


In "Introduction to Stochastic Control" by K. Astrom, page 228, Theorem 4.1, he introduces a state estimator for the discrete-time system: $$ \begin{aligned} x(t+1)&=\Phi x(t)+v(t)\\ y(t)&=\theta x(t)+e(t)\\ E[v(t)v(t)^T]&=R_1\\ E[v(t)e(t)^T]&=0 \\ E[e(t)e(t)^T]&=R_2 \end{aligned} $$ where $e,v$ are normally distributed with zero mean, and the initial state $x(t_0)$ is normally distributed with mean $m$ and covariance $P_0$. The theorem, attributed to Kalman, states (slightly adapted to be self-contained):

Theorem 4.1 (Kalman) The estimate $\hat{x}$ at time $t+1$ of $x$ based on $y(t_0),y(t_0+1),\dots,y(t)$ which minimizes $E\left[\left(a^T(x(t+1)-\hat{x})\right)^2\right]$ for arbitrary $a$, is the conditional mean $\hat{x}(t+1|t)$, which satisfies the recursive relations: $$ \begin{aligned} \hat{x}(t+1|t) &= \Phi\hat{x}(t|t-1)+K(t)(y(t)-\theta\hat{x}(t|t-1))\\ \hat{x}(t_0|t_0-1) &= m \\ K(t) &= \Phi P(t)\theta^T(\theta P(t)\theta^T+R_2)^{-1} \\ P(t+1) &=(\Phi-K(t)\theta)P(t)\Phi^T+R_1 \\ P(t_0) &= P_0 \end{aligned} $$
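To make sure I am reading the recursion correctly, here is how I would implement it numerically (my own sketch; the system matrices and measurements in the demo at the bottom are made-up values, not from the book):

```python
import numpy as np

def one_step_predictor(Phi, theta, R1, R2, m, P0, ys):
    """Run the recursion of Theorem 4.1 over the measurements ys.

    Returns the one-step predictions xhat(t+1|t) and the final covariance.
    """
    xhat, P = m, P0                      # xhat(t0|t0-1) = m, P(t0) = P0
    preds = []
    for y in ys:
        S = theta @ P @ theta.T + R2     # innovation covariance
        K = Phi @ P @ theta.T @ np.linalg.inv(S)   # predictor gain K(t)
        xhat = Phi @ xhat + K @ (y - theta @ xhat) # xhat(t+1|t)
        P = (Phi - K @ theta) @ P @ Phi.T + R1     # P(t+1)
        preds.append(xhat)
    return preds, P

# Tiny scalar demo with made-up numbers.
Phi = np.array([[0.9]]); theta = np.array([[1.0]])
R1 = np.array([[0.1]]); R2 = np.array([[0.2]])
m = np.zeros((1, 1)); P0 = np.eye(1)
ys = [np.array([[1.0]]), np.array([[0.5]]), np.array([[0.2]])]
preds, P = one_step_predictor(Phi, theta, R1, R2, m, P0, ys)
```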

My question:

What is the relation between this filter and THE Kalman filter?

The main difference I see is that in the typical Kalman filtering setting, you want to estimate $x(t)$ given $y(t_0),\dots,y(t)$. Here, however, we estimate $x(t+1)$ given $y(t_0),\dots,y(t)$; that is, we lack the measurement at $t+1$, so the estimator in Astrom's book is really a one-step predictor. In addition, the evolution equations for $\hat{x}$ and $P$ seem to differ from those of the typical Kalman filter. If I understand this filter as something distinct from the typical Kalman one, this makes sense.
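To make the comparison concrete, I ran both recursions side by side on a made-up system (my own sketch; the "standard" filter below is the predict/update form as I understand it, and all matrices are illustrative values, not from the book). Numerically, Astrom's $\hat{x}(t+1|t)$ appears to coincide with $\Phi$ applied to the standard filtered estimate $\hat{x}(t|t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up illustrative system (not from the book).
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
theta = np.array([[1.0, 0.0]])
R1 = 0.1 * np.eye(2)
R2 = np.array([[0.2]])
m = np.zeros((2, 1)); P0 = np.eye(2)
ys = [rng.normal(size=(1, 1)) for _ in range(50)]

xp, Pp = m.copy(), P0.copy()             # Astrom's predictor state
xf_pred, Pf_pred = m.copy(), P0.copy()   # standard filter's prior

for y in ys:
    # --- Astrom's recursion (Theorem 4.1) ---
    S = theta @ Pp @ theta.T + R2
    K = Phi @ Pp @ theta.T @ np.linalg.inv(S)
    xp = Phi @ xp + K @ (y - theta @ xp)
    Pp = (Phi - K @ theta) @ Pp @ Phi.T + R1

    # --- standard filter: measurement update, then time update ---
    Sf = theta @ Pf_pred @ theta.T + R2
    Kf = Pf_pred @ theta.T @ np.linalg.inv(Sf)
    xf = xf_pred + Kf @ (y - theta @ xf_pred)     # xhat(t|t)
    Pf = (np.eye(2) - Kf @ theta) @ Pf_pred
    xf_pred = Phi @ xf                             # xhat(t+1|t)
    Pf_pred = Phi @ Pf @ Phi.T + R1
```

At every step the two one-step predictions and covariances agree (note that Astrom's gain is $\Phi$ times the standard update gain, $K(t) = \Phi K_f(t)$), which is what makes me suspect the two formulations are related rather than different.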

However, Astrom presents this filter as if it were THE Kalman filter, which confuses me. This makes me think that there might be some relation between the one-step predictor implicit in the Kalman filter and the one Astrom presents.