Tricky Markov Chain


I found this problem a bit tough and was wondering if you could give it a go (especially the last part). It goes as follows:

A gambler wins $1$ dollar at each round, with probability $p$, and loses $1$ dollar, with probability $1 − p$. Different rounds are assumed independent. The gambler plays continuously until either accumulating a target amount of $m$ dollars, or losing all the initial money. What is the probability of eventually accumulating the target amount (winning)?

We introduce the Markov chain whose state $i$ represents the gambler’s wealth at the beginning of a round. The states $i = 0$ and $i = m$ correspond to losing and winning, respectively.

All states are transient, except for the winning and losing states which are absorbing. Thus, the problem amounts to finding the probabilities of absorption at each one of these two absorbing states. Of course, these absorption probabilities depend on the initial state $i$.

[Figure: transition diagram of the chain for only $5$ states, $0$ through $4$.]

Let us set $s = 0$, in which case the absorption probability $a_i$ is the probability of losing, starting from state $i$. These probabilities satisfy:

$$a_0 = 1,\ a_m = 0$$ $$a_i = (1 − p)\cdot a_{i−1} + p\cdot a_{i+1},\quad i = 1, \dots, m − 1$$
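Since the recurrence together with its two boundary conditions is just a linear system in the unknowns $a_0, \dots, a_m$, it can be solved directly. Here is a minimal sketch in Python (the function name `losing_probabilities` and the use of exact `Fraction` arithmetic are my own choices, not part of the problem statement):

```python
from fractions import Fraction

def losing_probabilities(m, p):
    """Solve a_0 = 1, a_m = 0, a_i = (1-p) a_{i-1} + p a_{i+1}
    (1 <= i <= m-1) as an (m+1)x(m+1) linear system, exactly."""
    p = Fraction(p)
    n = m + 1
    A = [[Fraction(0)] * n for _ in range(n)]  # coefficient matrix
    b = [Fraction(0)] * n                      # right-hand side
    A[0][0], b[0] = Fraction(1), Fraction(1)   # a_0 = 1 (ruin is certain)
    A[m][m], b[m] = Fraction(1), Fraction(0)   # a_m = 0 (target reached)
    for i in range(1, m):
        # a_i - (1-p) a_{i-1} - p a_{i+1} = 0
        A[i][i - 1], A[i][i], A[i][i + 1] = -(1 - p), Fraction(1), -p
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = 1 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        b[col] *= inv
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b  # b[i] = probability of ruin starting with i dollars
```

For instance, with $m = 4$ and $p = \frac{2}{3}$ this gives $\left[1, \frac{7}{15}, \frac{1}{5}, \frac{1}{15}, 0\right]$.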

It's a non-mandatory exercise, but as I've noticed for quite a while now, those are the most rewarding. I think I'll work on it tomorrow.

Any help welcome :)


There are 2 best solutions below


In general, we have (going by the unedited version of the question, which I didn't fully understand):

$$P(s_0)=(1-p)P(s_1)=a$$ $$P(s_1)=(1-p)P(s_2)$$ $$P(s_2)=pP(s_1)+(1-p)P(s_3)$$ $$P(s_3)=pP(s_2)$$ $$P(s_4)=pP(s_3)=b$$

$$\sum_{i=0}^{4} P(s_i) = 1\ \ \ \ (1)$$

For tractability, I rewrite the above $5$ equations in the matrix representation below:

$$\begin{pmatrix}a\\ P(s_1)\\P(s_2)\\P(s_3)\\b \end{pmatrix} = \begin{pmatrix} 0 & 1-p & 0 & 0 & 0 \\ 0 & 0 & 1-p & 0 & 0 \\0 & p & 0 & 1-p & 0 \\0 & 0 & p & 0 &0 \\0 & 0 & 0 & p & 0 \\ \end{pmatrix} \begin{pmatrix}a \\ P(s_1)\\P(s_2)\\P(s_3)\\b \end{pmatrix}$$

Now you can calculate the probability of each state in terms of $p$; then, using equation $(1)$, the probability of each state can be derived.
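Independently of any algebra, the absorption probabilities can also be estimated empirically. Here is a quick Monte Carlo sketch (the function `simulate_win_prob` is my own naming, not from the thread); it plays the game to absorption many times and reports the fraction of runs that hit the target:

```python
import random

def simulate_win_prob(i, m, p, trials=100_000, seed=0):
    """Estimate the probability of reaching m dollars before going
    broke, starting from i dollars and winning each round w.p. p."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < m:                        # play until absorption
            x += 1 if rng.random() < p else -1  # win or lose one dollar
        wins += (x == m)
    return wins / trials
```

For example, `simulate_win_prob(2, 4, 2/3)` should land close to the exact winning probability for that case, $\frac{4}{5}$.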


Hi and thanks for your answers,

I ended up solving it in steps while studying the behavior of the Markov chain with these $2$ absorbing states. Toying a bit with a smaller (read: non-generic) problem, I came up with a standard system of equations; then, writing out the concrete results, I noticed that the ratio of $a_i$ to $a_{i+1}$ is always the reciprocal of $\frac{p}{1-p}$, so I set $r = \frac{1-p}{p}$.

Adding all the delta transitions together gives one, so (a bit like Greg suggested) you can find $\delta$ with the formula:

$ \delta \sum_{i=0}^{n-1} r^i = 1 $

Which, with the handy finite geometric sum formula, becomes:

$ \delta \frac{1-r^n}{1-r} = 1 $, $ \delta = \frac{1-r}{1-r^n} $

Then, to find any $a_i$, we sum the first $i$ deltas:

$ a_i = \frac{1-r}{1-r^n} \sum_{j=0}^{i-1} r^j $

Which, when fully simplified, becomes:

$ a_i = \frac{1-r^i}{1-r^n} $
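As a sanity check, here is a short Python snippet (my own, not from the thread) verifying that the closed form satisfies both boundary conditions and the recurrence. Note that with this formula $a_0 = 0$ and $a_n = 1$, so this $a_i$ is the probability of *winning* (reaching $n$ before $0$), the complement of the losing probability in the question; it assumes $p \neq \frac{1}{2}$ so that $r \neq 1$:

```python
def a(i, n, p):
    """Closed form a_i = (1 - r^i) / (1 - r^n) with r = (1-p)/p,
    valid for p != 1/2 (so that r != 1)."""
    r = (1 - p) / p
    return (1 - r ** i) / (1 - r ** n)

# Boundary conditions and the recurrence a_i = (1-p) a_{i-1} + p a_{i+1}:
n, p = 10, 0.6
assert a(0, n, p) == 0 and abs(a(n, n, p) - 1) < 1e-12
for i in range(1, n):
    assert abs(a(i, n, p) - ((1 - p) * a(i - 1, n, p) + p * a(i + 1, n, p))) < 1e-12
```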

When $r = 1$ (i.e. $p = \frac{1}{2}$), the ratio is always the same, so we can just take $\delta = \frac{1}{n}$ and sum: $a_i = \frac{i}{n}$.

Well, well... Was definitely a good exercise for me as I needed some practice =].

Then let's play with the "frequentist" version on big data and thousands of states...
