I have a hidden Markov model transition matrix, $T$, defined as:
\begin{equation} T = \begin{bmatrix} \lambda & 1 - \lambda \\ 0 & 1 \end{bmatrix} \end{equation}
I know that each entry $T_{jk}$, in row $j$ and column $k$ of $T$, gives the conditional probability of being in state $k$ at time $t$ given state $j$ at time $t - 1$, i.e. $T_{jk} = P(z_{t} = k \mid z_{t - 1} = j) = P(k_{t} \mid j_{t - 1})$.
I have fitted this model to data spanning a number of years, where $T$ varies by year but each year has multiple observations. There are therefore $N$ data points over $M$ years, with $M < N$. I would like to calculate the marginal probability of being in either state in each year, $P(z_{y} = k) = P(k_{y})$. I know I can use dynamic programming algorithms to estimate the marginal probability of each state at each data point, $n$, but not the marginal probability of each state in each year, $y$, marginalizing over all the data.
Can I calculate the marginals from the transition matrix, $T$, directly using the law of total probability?
\begin{equation} P(k_{y}) = P(k_y \mid j_{y - 1}) P(j_{y - 1}) + P(k_y \mid k_{y - 1}) P(k_{y - 1}) \end{equation}
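As a sanity check, plugging the specific $T$ above into this recursion (with $k$ as state 1 and $j$ as state 2, so $P(k_y \mid j_{y - 1}) = 0$ and $P(k_y \mid k_{y - 1}) = \lambda$) collapses it to a single term, because state $j$ is absorbing:

\begin{equation} P(k_{y}) = \lambda_{y}\, P(k_{y - 1}) = \prod_{m = 1}^{y} \lambda_{m}, \end{equation}

where $\lambda_{m}$ denotes the fitted value of $\lambda$ in year $m$ and the product form assumes $P(k_{0}) = 1$ (i.e. the chain starts in state $k$ with certainty).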
This is an iterative process. In pseudo-code:
// assume we start in state k (state 1) with probability 1
// res[m] holds the marginal P(k) for year m
// T[m, j, k] = P(state k at year m + 1 | state j at year m), 1-based indexing
res = [1]
for(m in 1:(M - 1)){
  p_kj = T[m, 2, 1]  // P(k | j)
  p_kk = T[m, 1, 1]  // P(k | k)
  p_k = res[m]
  p_j = 1 - p_k
  res[m + 1] = p_kj * p_j + p_kk * p_k
}
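The loop above can equivalently be written as a vector-matrix product, $p_{y} = p_{y - 1} T_{y}$, which handles both states at once. A minimal NumPy sketch, using made-up per-year values of $\lambda$ rather than fitted ones:

```python
import numpy as np

# Hypothetical per-year lambda values (illustrative, not fitted).
lambdas = [0.9, 0.8, 0.95]

# One 2x2 transition matrix per year: rows = state at year y - 1,
# columns = state at year y; state k is index 0, state j is index 1.
T = np.array([[[lam, 1 - lam], [0.0, 1.0]] for lam in lambdas])

# Initial distribution: start in state k with certainty.
p = np.array([1.0, 0.0])

# Chapman-Kolmogorov step: propagate the marginal one year at a time.
marginals = [p]
for T_y in T:
    p = p @ T_y
    marginals.append(p)

marginals = np.array(marginals)
# marginals[y, 0] is P(k_y); each row sums to 1.
```

Because the second state is absorbing here, `marginals[y, 0]` is just the running product of the `lambdas`, matching the scalar recursion.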
Is there a variation of the Viterbi or forward algorithm that I can use instead?