'Intuitive' difference between Markov Property and Strong Markov Property

It seems that similar questions have come up a few times regarding this, but I'm struggling to understand the answers.

My question is a bit more basic: can the difference between the strong Markov property and the ordinary Markov property be intuited by saying:

"the Markov property implies that a Markov chain restarts after every iteration of the transition matrix. By contrast, the strong Markov property just says that the Markov chain restarts after a certain number of iterations given by a hitting time $T$"?

Moreover, would this imply that with a normal Markov property a single transition matrix will be enough to specify the chain, whereas if we only have the strong property we may need $T$ different transition matrices?

Thanks everyone!


BEST ANSWER

A stochastic process has the Markov property if the probabilistic behaviour of the chain in the future depends only on its present value, not on its past behaviour.

The strong Markov property is based on the same idea, except that the time, say $T$, at which "the present" is taken is a random quantity with some special properties.

$T$ is called a stopping time: it is a random variable taking values in $\{0,1,2,\ldots\}$ such that the event $\{T=n\}$ is completely determined by the values of the chain, $X_0,X_1,\ldots ,X_n$, up to time $n$.

A very simple example: toss a coin repeatedly and stop at the time $T$ when you have seen, say, $k$ heads. Whether $T=n$ is completely determined by the outcomes of the first $n$ tosses. Of course, $T$ is random.
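As a concrete illustration of a stopping time, here is a small simulation (the choice of a fair coin and $k=3$ heads is hypothetical); whether we stop at toss $n$ depends only on tosses $1,\ldots,n$:

```python
import random

rng = random.Random(42)

def kth_head_time(k, p=0.5):
    """Toss a coin until the k-th head appears and return the toss count T.
    T is a stopping time: the event {T = n} is decided by tosses 1..n alone."""
    heads, n = 0, 0
    while heads < k:
        n += 1
        if rng.random() < p:  # heads
            heads += 1
    return n

samples = [kth_head_time(3) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # expected value of T is k/p = 6 for a fair coin
```

The sample mean should come out near $k/p = 6$, matching the expectation of the negative binomial distribution of $T$.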

The strong Markov property goes as follows: if $T$ is a stopping time, then for $m\geq 1$

$$P(X_{T+m}=j\mid X_k=x_k,\;0\leq k <T;\;X_T=i)=P(X_{T+m}=j\mid X_T=i)$$

So, conditionally on $X_T=i$, the chain again discards whatever happened prior to time $T$.
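A small simulation can make this concrete. Below, a three-state chain with a hypothetical transition matrix is run until the hitting time $T$ of state 2 (a stopping time); the empirical distribution of the next step after $T$ should match row 2 of $P$, exactly as if the chain had been freshly restarted at state 2:

```python
import random

rng = random.Random(0)

# A small 3-state chain; rows of P are P(next state | current state).
# The numbers are hypothetical, chosen only for illustration.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def step(i):
    """Sample the next state from row i of P by inverse-CDF sampling."""
    u, c = rng.random(), 0.0
    for j, pij in enumerate(P[i]):
        c += pij
        if u < c:
            return j
    return len(P) - 1

def run_until_hit(start, target, m):
    """Run from `start` until the hitting time T of `target`,
    then take m more steps and return X_{T+m}."""
    x = start
    while x != target:
        x = step(x)
    for _ in range(m):
        x = step(x)
    return x

# Distribution of X_{T+1} given X_T = 2 should be close to row 2 of P.
N = 20000
counts = [0, 0, 0]
for _ in range(N):
    counts[run_until_hit(0, 2, 1)] += 1
print([round(c / N, 2) for c in counts])  # close to P[2] = [0.4, 0.4, 0.2]
```

The same check works for any $m$: the law of $X_{T+m}$ given $X_T=i$ matches the $m$-step transition probabilities from $i$, with no dependence on how the chain reached $i$.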

In order to determine the (unconditional) probabilistic behaviour of a (homogeneous) Markov chain at time $n$, one needs to know the one-step transition matrix and the marginal distribution of $X$ at an earlier time point, call it $t=0$ without loss of generality; i.e. one should know $P(X_1=j\mid X_0=i)$ and the distribution of $X_0$.
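For instance, the law of $X_n$ is obtained by applying the transition matrix $n$ times to the initial law, $\mu_n = \mu_0 P^n$. A minimal sketch with a hypothetical two-state chain:

```python
# Hypothetical two-state chain: one-step transition matrix and initial law.
P = [[0.9, 0.1],
     [0.2, 0.8]]
mu0 = [1.0, 0.0]  # chain starts in state 0: P(X_0 = 0) = 1

def step_law(mu, P):
    """One step of the marginal law: mu_{n+1}[j] = sum_i mu_n[i] * P[i][j]."""
    return [sum(mu[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

mu = mu0
for _ in range(3):
    mu = step_law(mu, P)
print([round(p, 4) for p in mu])  # law of X_3 → [0.781, 0.219]
```

So the one-step matrix plus the marginal at a single reference time pins down the whole (unconditional) behaviour of the chain.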

ANOTHER ANSWER

When a general random time $\tau$ appears, the future development may no longer satisfy the Markov property: since the occurrence of $\tau$ is uncertain and may depend on more than the chain's past, the state after $\tau$ can depend both on previous states and on how $\tau$ was realized. In other words, unless we account for the correlation between the random time $\tau$ and the state of the chain (which is exactly what the stopping-time condition does), the process after $\tau$ need not satisfy the Markov property.