Defining a Markov Chain with Infinite States


I am trying to learn more about Discrete Time Markov Chains with Infinite State Spaces.

I am familiar with Discrete Time Markov Chains with Finite State Spaces - now, I am trying to learn about them with Infinite State Spaces.

I spent some time thinking about how to even define Discrete Time Markov Chains with Infinite State Spaces. Here are some ideas I came up with on my own:

Example 1: $\{S_t\}$ is a Discrete Time Markov Chain with an Infinite State Space (a simple random walk on $\mathbb{Z}$)

$$P(S_{t+1} = s + 1 \mid S_t = s) = 0.5$$

$$P(S_{t+1} = s - 1 \mid S_t = s) = 0.5$$

$$S_0 = 0$$

$$S_t \in \mathbb{Z}, \quad t \in \{0, 1, 2, \dots\}$$
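To make Example 1 concrete, here is a minimal sketch that simulates this random walk (the function name and step count are my own choices, not part of the definition above):

```python
import random

def simulate_walk(steps, seed=0):
    """Simulate the +/-1 symmetric random walk of Example 1, starting from S_0 = 0."""
    rng = random.Random(seed)
    s = 0
    path = [s]
    for _ in range(steps):
        # Each step is +1 or -1, each with probability 0.5
        s += 1 if rng.random() < 0.5 else -1
        path.append(s)
    return path

path = simulate_walk(10)
print(path)
```

Even though the chain can in principle visit any integer, each sample path only ever touches finitely many states.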

Example 2: $\{(X_t, S_t)\}$ is a Discrete Time Markov Chain with an Infinite State Space

Part 1:

  • Suppose there are 3 states $S_1$, $S_2$, $S_3$
  • Each state is associated with a probability vector whose entries are constants $c_i$
  • Note that $\sum_{i=1}^{3} c_i = 1$, $\sum_{i=4}^{6} c_i = 1$, $\sum_{i=7}^{9} c_i = 1$
  • And $0 < c_i < 1$ for $i = 1, \dots, 9$

$$\mathbf{S_1} = [c_1, c_2, c_3]$$ $$\mathbf{S_2} = [c_4, c_5, c_6]$$ $$\mathbf{S_3} = [c_7, c_8, c_9]$$

Part 2:

  • The probability of being in $S_1$,$S_2$, $S_3$ at time $t$ depends on the state at time $t-1$ (given by a fixed set of probabilities):

$$P(S(t) = S_j \mid S(t-1) = S_i) = p_{ij}, \qquad P = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}$$

  • Note that $\sum_{j=1}^{3} p_{ij} = 1$ for each row $i = 1, 2, 3$
  • And $0 < p_{ij} < 1 \quad \text{for all } i,j$
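A quick sketch of sampling from such a transition matrix (the numeric entries of `P` below are illustrative placeholders, since the question leaves the $p_{ij}$ symbolic):

```python
import random

# Illustrative transition matrix: row i gives the distribution of the
# next state when the current state is i; each row sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def next_state(current, rng):
    """Draw the next state index by inverse-transform sampling on row `current` of P."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[current]):
        cum += p
        if u < cum:
            return j
    return len(P[current]) - 1  # guard against floating-point round-off

rng = random.Random(1)
state = 0  # index of S_1
states = [state]
for _ in range(5):
    state = next_state(state, rng)
    states.append(state)
print(states)
```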

Part 3:

  • And finally, we define $X_{t+1}$ as:

$$X_{t+1} = \begin{cases} X_t + 0.05X_t & \text{with probability } S_t(1) \\ X_t - 0.05X_t & \text{with probability } S_t(2) \\ X_t & \text{with probability } S_t(3) \end{cases}$$

$$X_{t+1}, X_{t} \in \mathbb{R}$$

E.g. if at time $t$ we are in state $S_3$, then $S_t(1) = c_7$, $S_t(2) = c_8$, $S_t(3) = c_9$
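Putting Parts 1-3 together, here is a sketch of one step of the coupled $(X_t, S_t)$ process. The specific numbers in `P` and `C` are made-up stand-ins for the symbolic $p_{ij}$ and $c_i$:

```python
import random

# Illustrative parameters (the question leaves p_ij and c_i symbolic).
P = [[0.5, 0.3, 0.2],   # state-to-state transition probabilities
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
C = [[0.4, 0.4, 0.2],   # S_1: [P(up), P(down), P(stay)]
     [0.3, 0.5, 0.2],   # S_2
     [0.2, 0.3, 0.5]]   # S_3

def sample(probs, rng):
    """Inverse-transform sample an index from a probability vector."""
    u, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if u < cum:
            return i
    return len(probs) - 1

def step(x, s, rng):
    """One transition of (X_t, S_t): move X using the current state's vector, then move S."""
    move = sample(C[s], rng)          # up / down / stay, per the current state
    if move == 0:
        x_next = x + 0.05 * x
    elif move == 1:
        x_next = x - 0.05 * x
    else:
        x_next = x
    s_next = sample(P[s], rng)
    return x_next, s_next

rng = random.Random(2)
x, s = 100.0, 0
for _ in range(10):
    x, s = step(x, s, rng)
print(round(x, 2), s)
```

Since $X_t$ is multiplied by one of finitely many factors, it ranges over a countably infinite set of real values, while the pair $(X_t, S_t)$ carries all the information needed for the next transition.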

My Question:

  • Have I understood this correctly: are both examples Discrete Time Markov Chains with Infinite State Spaces?
  • Suppose we take $(X_t, S_t)$. Is it possible to answer questions such as: if $X_t$ currently has a value of $g_1$, how long on average will it take before $X_t$ exceeds $g_1 + g_2$ for the first time? (i.e., a first-passage time)
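For the second question, even without a closed form one can estimate the mean first-passage time by Monte Carlo. A sketch, with made-up parameters chosen so that the upward drift makes the barrier reachable (otherwise the mean can be infinite):

```python
import random

# Illustrative (made-up) parameters: transition matrix P and per-state
# move probabilities C = [P(up), P(down), P(stay)], biased upward so
# the barrier g1 + g2 is actually reached.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
C = [[0.6, 0.2, 0.2],
     [0.5, 0.3, 0.2],
     [0.6, 0.2, 0.2]]

def sample(probs, rng):
    u, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if u < cum:
            return i
    return len(probs) - 1

def first_passage_time(g1, g2, rng, max_steps=10_000):
    """Number of steps until X_t first exceeds g1 + g2, starting from X_0 = g1."""
    x, s = g1, sample([1/3, 1/3, 1/3], rng)   # arbitrary uniform initial regime
    for t in range(1, max_steps + 1):
        move = sample(C[s], rng)
        x = x + 0.05 * x if move == 0 else x - 0.05 * x if move == 1 else x
        s = sample(P[s], rng)
        if x > g1 + g2:
            return t
    return None  # barrier not reached within max_steps

rng = random.Random(0)
times = [first_passage_time(100.0, 5.0, rng) for _ in range(2000)]
hits = [t for t in times if t is not None]
print(sum(hits) / len(hits))  # Monte Carlo estimate of the mean first-passage time
```

Whether the exact expectation is tractable analytically depends on the structure of $P$ and the $c_i$; the simulation at least shows the question is well-posed for the pair $(X_t, S_t)$.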

Thanks!