Which State Will the Markov Chain Go To Next?


In a classical Markov Chain problem, (I think) it is relatively straightforward to find out which state the Markov Chain will most likely go to next. For example:

  • If this Markov Chain is currently in State 1, this Markov Chain will most likely go to State 1
  • If this Markov Chain is currently in State 2, this Markov Chain will most likely go to State 2
  • If this Markov Chain is currently in State 3, this Markov Chain will most likely go to State 1
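For a discrete-time chain, "most likely next state" is just the row-wise argmax of the transition matrix. A minimal sketch with a made-up transition matrix that is consistent with the three bullets above (the numbers are my assumption, not from the question):

```python
import numpy as np

# Hypothetical transition matrix consistent with the bullets above:
# rows = current state, columns = next state (states 1..3).
P = np.array([
    [0.7, 0.2, 0.1],   # from State 1: State 1 is most likely
    [0.3, 0.5, 0.2],   # from State 2: State 2 is most likely
    [0.6, 0.3, 0.1],   # from State 3: State 1 is most likely
])

# Row-wise argmax gives the most probable next state for each current state.
most_likely_next = P.argmax(axis=1) + 1  # +1 to return to 1-based state labels
print(most_likely_next)  # [1 2 1]
```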

I am interested in extending this logic to the (homogeneous) semi-Markov process defined below (https://www.jstatsoft.org/article/view/v066i06):

Let us consider a Markov renewal process $(J_n,T_n)_{n\in\mathbb{N}}$ where $0 = T_0 < T_1 < \dots < T_n < \infty$ are the successive times of entry to states $J_0, J_1, \dots, J_n$ where $J_n \neq J_{n+1}$ for all $n \in \mathbb{N}$. The sequence $(J_n)_{n\in\mathbb{N}}$ is an embedded homogeneous Markov chain taking values in a discrete finite state space $E$ with transition probabilities $p_{h,j} = \mathbb{P}({J_{n+1} = j}\mid{J_n = h})$, $n \in \mathbb{N}$. Let $S_n = T_n − T_{n−1}$ be the inter-arrival time for all $n \in \mathbb{N}$.

For $d \in \mathbb{R}^+$ and $(h, j) \in E \times E$, the Markov renewal kernel $Q_{h,j}(d)$ satisfies \begin{align*} Q_{h,j}(d) &= \mathbb{P}({J_{n+1} = j}, {S_{n+1} \leq d}\mid J_0, \dots, {J_n = h}, S_1, \dots, S_n) \\ &= \mathbb{P}({J_{n+1} = j}, {S_{n+1} \leq d}\mid{J_n = h})\text{.} \tag{1} \end{align*} Let $N(t) = \sup{\{n \in \mathbb{N} : T_n \leq t\}}$, $t \in \mathbb{R}^+$, be the counting process which counts the total number of observed transitions during the time interval $[0, t]$. The process $J_{N(t)}$, which represents the state of the process at time $t$, defines a homogeneous semi-Markov process. The probability distribution function of sojourn times is related to the semi-Markov kernel through the transition probabilities of the embedded Markov chain: $F_{h,j}(d) = \mathbb{P}({S_{n+1} \leq d}\mid{J_n = h},{J_{n+1} = j}) = Q_{h,j}(d)/p_{h,j}$.

Here, $F_{h,j}(d)$ is the probability distribution function of the sojourn time in state $h$, given that the next transition is to state $j$. I have the following question:

  • After this Markov process has spent time $d$ in its current state $h$ - what is the most probable next state $j \neq h$ that it will eventually transition to? Is it possible to find this most probable next state as $d$ (i.e. the time spent in state $h$) approaches infinity?

I was wondering whether these questions have closed-form solutions, or whether they must be answered by simulation. Is there an equivalent of the Chapman-Kolmogorov equation that applies to Continuous Time Markov Chains and Semi-Markov Chains?

Thank you!

1 Answer

Define these events for convenience:

  • $A = (J_n = h)$

  • $B = (S_{n+1} > d)$

  • $C_j = (J_{n+1} = j)$

Conditioned on $J_n = h$ and $S_{n+1} > d$ (i.e. having already waited time $d$ in state $h$), the probability of going to state $j$ next is:

$$\begin{align*} p_j &= P(C_j \mid A \cap B) \\ &= P(C_j \cap A \cap B) / P(A \cap B) \\ &= P(C_j \cap B \mid A) \times P(A) / P(A \cap B)\text{,} \end{align*}$$

where

$$\begin{align*} P(C_j \cap B \mid A) &= P(J_{n+1} = j, S_{n+1} > d \mid J_n = h) \\ &= Q_{h,j}(\infty) - Q_{h,j}(d)\text{.} \end{align*}$$

Since the factor $P(A)/P(A\cap B)$ does not depend on $j$, $p_j$ is maximized exactly when $P(C_j \cap B \mid A)$ is maximized. So the most probable next state is

$$\arg\max_j \left(Q_{h,j}(\infty) - Q_{h,j}(d)\right) = \arg\max_j \left(p_{h,j} - Q_{h,j}(d)\right)\text{,}$$

since $Q_{h,j}(\infty) = p_{h,j}$, the transition probability of the embedded chain.
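When the kernel is specified through the embedded chain and the sojourn-time distributions, $Q_{h,j}(d) = p_{h,j}\,F_{h,j}(d)$, so the quantity to maximize is $p_{h,j}\,(1 - F_{h,j}(d))$. A minimal numerical sketch, where the transition probabilities and the exponential sojourn distributions are made-up assumptions, not part of the question:

```python
from scipy import stats

# Hypothetical setup: from the current state h, the embedded chain moves
# to state 1 w.p. 0.6 and to state 2 w.p. 0.4, with assumed sojourn-time
# CDFs F_{h,j} (exponential with different scales).
p = {1: 0.6, 2: 0.4}
F = {1: stats.expon(scale=1.0).cdf,    # transitions to state 1 tend to be fast
     2: stats.expon(scale=10.0).cdf}   # transitions to state 2 tend to be slow

def most_probable_next(d):
    # Q_{h,j}(d) = p_{h,j} * F_{h,j}(d), hence
    # Q_{h,j}(inf) - Q_{h,j}(d) = p_{h,j} * (1 - F_{h,j}(d)).
    scores = {j: p[j] * (1.0 - F[j](d)) for j in p}
    return max(scores, key=scores.get)

print(most_probable_next(0.0))   # 1: at d = 0 the embedded chain dominates
print(most_probable_next(5.0))   # 2: after a long wait, the slow sojourn wins
```

Note how the answer can change with $d$: the longer you have already waited, the more weight shifts toward destinations with heavier sojourn-time tails.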

I think the above assumes that some transition eventually happens with probability $1$. I am not sure how to handle the case where that fails... but the answer should not change if you simply condition on some transition eventually happening (as your question seems to assume, since it asks about the next state).


As to the $d \to \infty$ question: that depends on what your $Q$ looks like. It is certainly possible for the argmax to have a limit, i.e. there exist $T \in \mathbb{R}$ and a state $k$ such that for all $d > T$ the argmax equals $k$, the limit value. However, it is also possible that the argmax has no limit, e.g. it oscillates between several values and never settles on one.
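To make the "limit exists" case concrete: with exponential sojourn tails, the score is $p_{h,j}\,e^{-\lambda_j d}$, and the destination with the smallest rate $\lambda_j$ eventually dominates, so the argmax settles. A sketch with made-up probabilities and rates (both assumptions):

```python
import numpy as np

# Hypothetical kernel: two candidate next states with exponential sojourn
# tails p_{h,j} * exp(-lambda_j * d). The smallest rate wins for large d.
p = np.array([0.6, 0.4])         # assumed embedded-chain probabilities
rates = np.array([1.0, 0.1])     # assumed exponential rates for j = 1, 2

def argmax_state(d):
    return int(np.argmax(p * np.exp(-rates * d))) + 1  # 1-based state label

# The argmax is the same at every sampled d beyond the crossover point,
# i.e. it has settled on a single limit value.
settled = {argmax_state(d) for d in np.linspace(5.0, 100.0, 50)}
print(settled)  # {2}: the heavier-tailed sojourn wins for all large d
```

With non-exponential or oscillating kernels the scores can cross each other infinitely often, which is the no-limit case described above.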