Verifying the Markov property


We throw a die infinitely often. Define $U_n$ to be the maximal number shown up to time $n$. How can I verify that $$ \mathbb{P}(U_{n+1}=u_{n+1}|U_n=u_n,\ldots,U_1=u_1)=\mathbb{P}(U_{n+1}=u_{n+1}|U_n=u_n), $$ i.e. that the Markov property is fulfilled?

I have no idea; I can only reproduce the definition: $$ \mathbb{P}(U_{n+1}=u_{n+1}|U_n=u_n,\ldots,U_1=u_1)=\frac{\mathbb{P}(U_{n+1}=u_{n+1},U_n=u_n,\ldots,U_1=u_1)}{\mathbb{P}(U_n=u_n,\ldots,U_1=u_1)}. $$

But I do not know how to show the Markov property from here.

Hope you can help me.

Greetings


2 Answers

Best answer:

This is essentially asking: "Prove that the distribution of the $(n+1)$-th value depends only on the $n$-th value, and not on the ones prior to that." In this context, the probability of $U_{n+1}$ being $u_{n+1}$ certainly depends on what $U_n$ was (clearly $U_{n+1}\geq U_n$); however, knowing $U_{n-1}$ (or, more generally, $U_i$ for any $i<n$) provides no additional information.

To prove this, simply write out the distribution of $U_{n+1}$ given the past history, splitting into three cases. For $u_{n+1}<u_n$, which can never happen since the running maximum is non-decreasing, we get $$P(U_{n+1}=u_{n+1}|U_n=u_n,\ldots,U_1=u_1)=0.$$ For $u_{n+1}=u_n$, the new roll must be less than or equal to the current maximum $u_n$, which happens for $u_n$ of the six faces, so $$P(U_{n+1}=u_{n+1}|U_n=u_n,\ldots,U_1=u_1)=\frac{u_n}{6}.$$ And for $u_{n+1}>u_n$, the new roll sets a new record, which requires exactly the face $u_{n+1}$ to come up: $$P(U_{n+1}=u_{n+1}|U_n=u_n,\ldots,U_1=u_1)=\frac{1}{6}.$$

However, notice that the right-hand sides of these equations depend only on the value of $u_n$. Thus $P(U_{n+1}=u_{n+1}|U_n=u_n)$ gives exactly the same expression in each case: conditioning on the entire history of the $U_i$ added no information, and hence the Markov property holds.
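To see this concretely, one can estimate both conditional probabilities by simulation and check that they agree. The following is a minimal Monte Carlo sketch (the particular choice of conditioning event, $U_2=3$ with and without the extra condition $U_1=3$, is mine for illustration); since $u_{n+1}=4>u_n=3$, both estimates should be near $1/6$:

```python
import random

random.seed(0)

def simulate_max_paths(n, trials=200_000):
    """Simulate the running maximum U_1, ..., U_{n+1} for many i.i.d. die-roll sequences."""
    paths = []
    for _ in range(trials):
        u, path = 0, []
        for _ in range(n + 1):
            u = max(u, random.randint(1, 6))  # U_k = max of first k rolls
            path.append(u)
        paths.append(path)
    return paths

paths = simulate_max_paths(2)

# Condition on U_2 = 3 only, versus on the full history U_2 = 3, U_1 = 3.
given_u2 = [p for p in paths if p[1] == 3]
given_u2_u1 = [p for p in paths if p[1] == 3 and p[0] == 3]

# Estimate P(U_3 = 4 | ...) under each conditioning.
p_markov = sum(p[2] == 4 for p in given_u2) / len(given_u2)
p_full = sum(p[2] == 4 for p in given_u2_u1) / len(given_u2_u1)

print(p_markov, p_full)  # both close to 1/6, matching the u_{n+1} > u_n case
```

The extra condition on $U_1$ leaves the estimate unchanged up to sampling noise, which is exactly the claimed Markov property in this instance.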

Second answer:

In this specific example, one can argue as follows. The event $\{U_n=u_n\}$ contains all the information one needs to compute the distribution of $U_{n+1}$: all that matters is the maximal value up to time $n$ (i.e. the value of $U_n$), which is compared with the new die outcome at time $n+1$. Knowing the full past history $\{U_n=u_n,\dots,U_1=u_1\}$ does not change this probability. Hence

$$\mathbb{P}(U_{n+1}=u_{n+1}~|~U_n=u_n,\dots,U_1=u_1)=\mathbb{P}(U_{n+1}=u_{n+1}~|~U_n=u_n).$$

In my humble opinion, an argument like this is often clearer than a dry formal derivation, and it is more than enough to prove the Markov property.