Imagine I have a discrete-time, discrete-space Markov chain with some transition matrix $B$ and stationary distribution $\pi$. If I know what state I'm in at some time $t$, how can I calculate $p(X_{t-1} \mid X_t)$, the probability that I came from each possible state?
To make this concrete, let's say:
$$ B = \begin{bmatrix} 0.6 & 0.2 & 0.1 & 0.1 \\ 0.2 & 0.5 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.5 & 0.3 \\ 0.1 & 0.2 & 0.3 & 0.4 \end{bmatrix} $$ $$ \pi = \begin{bmatrix} 0.2491 & 0.2454 & 0.2821 & 0.2234 \end{bmatrix} $$ $$ X_t = \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} $$
How do I calculate $p(X_{t-1}\mid X_t)$?
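For reference, here's my setup in NumPy, with sanity checks that the rows of $B$ sum to 1 and that $\pi B = \pi$ (up to rounding). I've also included my attempt via Bayes' rule, which assumes the chain is at stationarity so that $\pi$ is the marginal distribution of $X_{t-1}$; please correct me if that's the wrong prior:

```python
import numpy as np

# Transition matrix: B[i, j] = p(X_{t+1} = j | X_t = i)
B = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.1, 0.2, 0.3, 0.4],
])

# Stationary distribution (rounded to 4 decimal places)
pi = np.array([0.2491, 0.2454, 0.2821, 0.2234])

# Sanity checks: rows of B sum to 1, and pi @ B = pi (up to rounding)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.allclose(pi @ B, pi, atol=1e-4)

# My attempt via Bayes' rule, ASSUMING the chain is at stationarity:
# p(X_{t-1} = i | X_t = j) = p(X_t = j | X_{t-1} = i) p(X_{t-1} = i) / p(X_t = j)
#                          = B[i, j] * pi[i] / pi[j]
j = 1  # observed state: X_t is state 2 (0-indexed as 1)
posterior = B[:, j] * pi / pi[j]
print(posterior)  # should be a valid distribution over the previous state
```

Is this Bayes'-rule calculation the right approach, or is there a subtlety I'm missing when the chain isn't started from $\pi$?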