Consider a Markov chain $(X_0,X_1,\ldots)$ with a state space $S\equiv\{s_1,s_2\}$ and the following matrix of “transition probabilities” (I will explain the use of quotation marks below): \begin{align*} \begin{array}{c|cc} &s_1&s_2\\ \hline s_1&1&0\\ s_2&1&0 \end{array} \end{align*} That is, no matter what initial state the system starts in, it will always end up in state $s_1$ in one period and stay there forever.
Rigorously speaking, these “transition probabilities” are to be interpreted as follows: \begin{align*} \mathbb P\,(X_{n}=s_1\,|\,X_{n-1}=s_1)=&\,1,\\ \mathbb P\,(X_{n}=s_2\,|\,X_{n-1}=s_1)=&\,0,\\ \mathbb P\,(X_{n}=s_1\,|\,X_{n-1}=s_2)=&\,1,\\ \mathbb P\,(X_{n}=s_2\,|\,X_{n-1}=s_2)=&\,0 \end{align*} for each $n\in\mathbb N$.
My concern is that the last two probabilities are ill-defined (except possibly for $n=1$), because for any initial distribution, the conditioning events $\{X_{n-1}=s_2\}_{n=2}^{\infty}$ have zero probability! Strictly speaking, therefore, the above matrix cannot be interpreted as a matrix of conditional probabilities, because of the problem of conditioning on events that never occur.
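A quick numerical check of this claim (a sketch, with $s_1,s_2$ encoded as indices $0,1$ and an arbitrary uniform initial distribution):

```python
import numpy as np

# Transition matrix: row = current state, column = next state (s1, s2).
P = np.array([[1.0, 0.0],
              [1.0, 0.0]])

# Any initial distribution over (s1, s2), e.g. uniform.
mu = np.array([0.5, 0.5])

for n in range(1, 5):
    mu = mu @ P  # marginal distribution of X_n
    print(n, mu)

# After one step, all mass sits on s1, so each event {X_{n-1} = s2}
# with n >= 2 is a null event and cannot be conditioned on naively.
```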
What is the standard resolution of this technical problem? Does one make the hand-waving assumption of defining conditional probabilities that depend on impossible events anyway, or is there a more sophisticated and rigorous way around this issue?
Any input is appreciated.
In fact, one defines the transition kernel: $$K(\omega, j) = \Bbb{E}\,(1_{X_n = j} \mid \mathcal{F}_{n-1})(\omega) \quad \Bbb{P}\text{-a.s.}$$
This means that we have a regular conditional probability, which allows us to talk rigorously about the jumps of the process.
The Markov property consists in saying that $K(\cdot, j)$ is $\sigma(X_{n-1})$-measurable, that is, $$K(\omega, j) = \phi_j(X_{n-1}(\omega)) \quad \Bbb{P}\text{-a.s.}$$
Hence there is no hand-waving: the kernel is a function defined on all of $S$, whether or not the chain ever visits a given state, and the a.s. identities above are all that is required of it.
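This viewpoint can be sketched concretely: the function $\phi$ below is defined for every state in $S$, including $s_2$, which the chain visits with probability zero after the first step (a hedged illustration; the name `phi` and the dictionary encoding are my own):

```python
import random

# The kernel is a function of the current state alone (Markov property):
# phi[state] is the distribution of the next state. It is defined for
# every state in S, even states the chain reaches with probability 0.
phi = {
    "s1": {"s1": 1.0, "s2": 0.0},
    "s2": {"s1": 1.0, "s2": 0.0},
}

def step(state):
    # Sample the next state from the kernel evaluated at `state`.
    states = list(phi[state])
    weights = [phi[state][s] for s in states]
    return random.choices(states, weights=weights)[0]

x = "s2"  # even starting from s2, the kernel is perfectly well defined
path = [x]
for _ in range(4):
    x = step(x)
    path.append(x)
print(path)  # ['s2', 's1', 's1', 's1', 's1']
```

No conditioning on null events is ever performed here: the kernel is specified up front as a function of the state, and the conditional-expectation identity only needs to hold almost surely.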