Recognizing Markov Chains


I have only just started taking a Stochastic Processes class, and we began discussing Markov chains. In one of the examples we had to determine whether the following two sequences ALWAYS define a Markov chain on $\mathbb{Z}$:

  1. $X_0 := Y_0, X_n := Y_1$ for all $n \geq 1$
  2. $X_0 := 0, X_n := X_{n-1} + Y_n +n$ for all $n \geq 1$

$(Y_n)$ is an iid sequence of integer-valued random variables.

I was sure that both of them are indeed Markov chains, since in case 2) $X_n$ depends only on the previous state, and in case 1) all the states from $n \geq 1$ onward are the same. But my professor claims that they are not, because in case

  1. $X_0, X_1$ are independent but $X_2 = X_1$, which implies that the transition probabilities change over time.
  2. The distribution of $X_{n+1}$ given $X_n$ changes over time.

I am a bit confused about how a change over time of the transition probabilities or of the distribution affects the Markov property. I would be extremely grateful if someone could help me understand this in more detail.


There are 2 answers below.

BEST ANSWER

In both cases, your professor is likely assuming an additional property: that the Markov chain is time-homogeneous. That is, that $\Pr[X_n = i \mid X_{n-1} = j]$ depends only on the states $i$ and $j$, and not on the time $n$.

It is very common, though not universal, to assume that Markov chains are time-homogeneous. The two chains fail to satisfy this because:

  • In the first case, $\Pr[X_1 = i \mid X_0 = i]$ is just $\Pr[Y_1 = i]$, but $\Pr[X_2 = i \mid X_1 = i]$ is $1$ for all $i$.
  • In the second case, it is harder to point to the precise violation, since it depends on the distribution of the $Y$'s, but the transition rule $X_n = X_{n-1} + Y_n + n$ is not time-homogeneous since it depends on $n$, which will lead to a non-homogeneous Markov chain one way or another.
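A quick Monte Carlo check makes the first violation visible. This is a sketch under an assumption not in the original problem: I take the $Y_i$ to be iid uniform on $\{0, 1\}$.

```python
import random

# Sketch of the first chain: X_0 = Y_0 and X_n = Y_1 for all n >= 1.
# The choice of Y_i iid uniform on {0, 1} is illustrative only.
random.seed(0)
trials = 100_000
same_01 = 0  # runs with X_1 == X_0
same_12 = 0  # runs with X_2 == X_1

for _ in range(trials):
    y0 = random.randint(0, 1)
    y1 = random.randint(0, 1)
    x0, x1, x2 = y0, y1, y1  # X_n = Y_1 for every n >= 1
    same_01 += (x1 == x0)
    same_12 += (x2 == x1)

p01 = same_01 / trials  # estimates P(X_1 = X_0) = P(Y_1 = Y_0) = 1/2
p12 = same_12 / trials  # exactly 1: the chain copies its previous value
print(f"P(X_1 = X_0) ~ {p01:.3f}, P(X_2 = X_1) = {p12:.3f}")
```

The two "stay in place" probabilities differ (about $1/2$ at the first step, exactly $1$ afterwards), which is precisely the failure of time-homogeneity described above.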

(Additionally, though I wouldn't add this if I didn't see the other answer, some people assume that a Markov chain has a finite state space, and the second chain definitely does not. This is less common; Markov chains with infinite state spaces are just too useful.)


Markov chains are commonly modeled with a finite state variable (a variable taking values in a discrete set such as $\{A, B, C\}$) and constant state-transition probabilities. Under suitable conditions (e.g. irreducibility and aperiodicity), the distribution over the states stabilizes and becomes stationary over a long time.

As for whether it's possible to model these sequences as Markov chains...

  1. $X_0 := Y_0, X_n := Y_1$ for all $n \geq 1$

This is a bit iffy... The problem is that you would usually define the distribution over the states beforehand, but this sequence essentially "locks itself in" to the value of $Y_1$, which isn't known beforehand. You could still define a well-defined distribution over multiple runs of the sequence, and the sequence "stabilizes" within each run, but it's not guaranteed to stabilize to the pre-known distribution per run.

However, if we consider the Markov property to be defined like this:

$P(X_{n}=x_{n}\mid X_{n-1}=x_{n-1},\dots ,X_{0}=x_{0})=P(X_{n}=x_{n}\mid X_{n-1}=x_{n-1})$

...I don't see a huge problem there. After transitioning away from the initial state and getting the value $Y_1$, the chain just "copies" its previous value, which is representable with the Markov property. Transitioning away from the initial state is not a problem, as you can model that with a finite chain of state transitions of probability $1.0$.
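To spell that check out (my own elaboration, with $\mathbf{1}\{\cdot\}$ the indicator function): for $n \geq 2$ the chain deterministically copies its previous value, so

$$P(X_{n}=x_{n}\mid X_{n-1}=x_{n-1},\dots ,X_{0}=x_{0}) = \mathbf{1}\{x_n = x_{n-1}\} = P(X_{n}=x_{n}\mid X_{n-1}=x_{n-1}),$$

and for $n = 1$ we have $P(X_1 = x_1 \mid X_0 = x_0) = P(Y_1 = x_1)$ by independence of $Y_0$ and $Y_1$, so the defining equation holds at every step.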

  2. $X_0 := 0, X_n := X_{n-1} + Y_n +n$ for all $n \geq 1$

This is definitely not a Markov chain in the finite-state, time-homogeneous sense. There are two problems: the state space is not finite but can grow without limit, and because of this there is no stable distribution. Additionally, the explicit dependence on $n$ means the transition probabilities change from step to step, so the chain is not time-homogeneous.
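The shifting transition rule is easy to observe numerically. This sketch assumes, purely for illustration (not part of the original problem), that the $Y_i$ are iid uniform on $\{-1, 0, 1\}$.

```python
import random

# Sketch of the second chain: X_0 = 0, X_n = X_{n-1} + Y_n + n.
# The one-step increment X_n - X_{n-1} = Y_n + n shifts upward with n,
# so no single transition matrix describes every step.
# Y_i iid uniform on {-1, 0, 1} is an illustrative choice only.
random.seed(0)

def increments(steps):
    """Return the list of one-step increments X_n - X_{n-1}."""
    x, out = 0, []
    for n in range(1, steps + 1):
        y = random.choice([-1, 0, 1])
        out.append(y + n)  # X_n - X_{n-1} = Y_n + n
        x += y + n
    return out

# Collect the observed support of each increment over many runs.
supports = [set(), set(), set()]
for _ in range(10_000):
    for i, inc in enumerate(increments(3)):
        supports[i].add(inc)

print(supports)  # the support {n-1, n, n+1} moves with n
```

At step $n$ the increment always lies in $\{n-1, n, n+1\}$, so even the support of the transition distribution depends on the time index.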