Calculating the Probability of an Event in a Stochastic Process


Consider a process $X_t$, given by $X_0 = 1$ and $X_t = X_{t-1} + B_t$ where

$$ B_t = \begin{cases} 1 \ \ &\text{ with probability } \frac{p}{2} \\ 0 \ \ &\text{ with probability } \frac{1}{2} \\ -1\ \ &\text{ with probability } \frac{q}{2} \end{cases} $$ and $p+q=1$.

Consider the event $E_k = \{X_s = 0 \text{ for some } s>0, X_0 = k\}$.

Calculate $\mathbb{P}(E_1)$ by conditioning on the outcomes of $B_1$.


So here is my attempt:

We have $E_1 = \{X_s = 0 \text{ for some } s>0, X_0 = 1\}$ and so conditioning on the outcome of $B_1$ we have

$$ \begin{align} \mathbb{P}(E_1) &= \mathbb{P}(E_1 | B_1 = 1)\mathbb{P}(B_1 = 1) \\ &\qquad + \mathbb{P}(E_1 | B_1 = 0)\mathbb{P}(B_1 = 0) + \mathbb{P}(E_1 | B_1 = -1)\mathbb{P}(B_1 = -1)\\[2mm] &= \frac{p}{2} \mathbb{P}(X_t \text{ eventually reaches zero } | X_1 = 2)\\ &\qquad+ \frac{1}{2} \mathbb{P}(X_t \text{ eventually reaches zero } | X_1 = 1) + 1 \cdot \frac{q}{2}\\[2mm] &= \frac{p}{2} \mathbb{P}(E_1)^2 + \frac{1}{2} \mathbb{P}(E_1) + \frac{q}{2} \end{align} $$ Which gives us a quadratic in $\mathbb{P}(E_1)$, so

$$ \mathbb{P}(E_1) = \frac{1}{p} - \sqrt{\frac{q}{p}} \qquad\text{ OR }\qquad \sqrt{\frac{q}{p}} $$

But I'm stuck on how to decide between these two solutions. I assume there must be some constraint which forces one of them, but I don't know what it is.


EDIT: I've been told that the solution should be $\mathbb{P}(E_1) = \frac{q}{p}$ for $p > q$ and $\mathbb{P}(E_1) = 1$ for $p \leq q$, and that my method is along the right lines, but I'm still struggling to get there.
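A quick Monte Carlo check (my own code, added for reference; the parameter values are arbitrary examples) does agree with the claimed values:

```python
import random

# Estimate P(E_1) = P(the walk started at 1 ever hits 0) by simulating many
# truncated paths of the lazy walk X_t = X_{t-1} + B_t.
# Claim to check: P(E_1) = q/p for p > q, and P(E_1) = 1 for p <= q.

def estimate_p_e1(p, q, trials=4000, max_steps=1000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = 1
        for _ in range(max_steps):
            u = rng.random()
            if u < p / 2:            # B_t = +1 with probability p/2
                x += 1
            elif u < (p + q) / 2:    # B_t = -1 with probability q/2
                x -= 1
            # otherwise B_t = 0 with probability 1/2
            if x == 0:
                hits += 1
                break
    return hits / trials

print(estimate_p_e1(0.75, 0.25))  # p > q: should be near q/p = 1/3
print(estimate_p_e1(0.40, 0.60))  # p < q: should be near 1
```

(Paths that have not hit $0$ within `max_steps` steps are counted as never hitting; for the parameters above the truncation error is negligible.)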


There are 2 best solutions below

On BEST ANSWER

This is not a full answer, but it was too long for a comment and may help in finding the general solution.

First of all, unfortunately, neither of the solutions you found can be a probability; for example, if $p = \frac{1}{4}$ and $q = \frac{3}{4}$, then: $$ \frac{1}{p}-\sqrt{\frac{q}{p}} = 4-\sqrt{3} > 1 $$ $$ \sqrt{\frac{q}{p}} = \sqrt{3} > 1 $$

Below I show that if we know the value of $P(E_1)$, then we can compute $P(E_n)$ for all $n\geq 0$.

The stochastic process $X_t$ is often called the "lazy" random walk on the integers $\mathbb{Z}$. In particular we have that: $$ P(E_1) = P\Bigl(\bigcup_{s=1}^{\infty}\{X_s = 0\}\Bigr). $$ We can also compute $P(E_1)$ using the law of total probability as follows: $$ P(E_1) = P(E_1|B_1=1)P(B_1 = 1) + P(E_1|B_1 = 0)P(B_1 = 0) + P(E_1|B_1 = -1)P(B_1 = -1) \\ = \frac{p}{2}P(E_2)+\frac{1}{2}P(E_1)+\frac{q}{2}, $$ which implies that: $$ P(E_2) = \frac{P(E_1)-q}{p}. $$ Similarly, conditioning on the first step taken from $X_0 = 2$, we can compute $P(E_2)$ as: $$ P(E_2) = \frac{p}{2}P(E_3)+\frac{1}{2}P(E_2)+\frac{q}{2}P(E_1), $$ so that $P(E_3) = \frac{P(E_2)-qP(E_1)}{p}$. From here we recognize the 2-step recurrence: $$ P(E_n) = \frac{P(E_{n-1})-qP(E_{n-2})}{p} $$ with boundary conditions $P(E_0) = 1$ and $P(E_1) = E_1$, where $E_1$ denotes the unknown value we are after. Using the following code in Mathematica we can see that for $p\not=q$:

FullSimplify[
 RSolve[{a[n + 2] == (a[n + 1] - q*a[n])/p, a[0] == 1, a[1] == E1},
  a, n]]

we find a general formula for $P(E_n)$, namely: $$ P(E_n) = \frac{\left(E_1-\frac{q}{p}\right) + \left(1-E_1\right)\left(\frac{q}{p}\right)^{n}}{1-\frac{q}{p}}, $$ while for $p = q = \frac{1}{2}$, using again the recurrence, we find: $$ P(E_n) = n(P(E_1)-1)+1, $$ from which we understand that $P(E_1)$ in this case has to be $1$ (since $P(E_n)\in[0,1]$ for all $n$).
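To double-check this numerically (my own Python sketch, not part of the original answer), one can iterate the recurrence directly:

```python
# Iterate the recurrence P(E_n) = (P(E_{n-1}) - q*P(E_{n-2})) / p
# from the boundary values P(E_0) = 1 and P(E_1) = e1.

def p_en(p, q, e1, n_max):
    """Return the list [P(E_0), ..., P(E_n_max)] produced by the recurrence."""
    vals = [1.0, e1]
    for _ in range(2, n_max + 1):
        vals.append((vals[-1] - q * vals[-2]) / p)
    return vals

# Symmetric case p = q = 1/2: P(E_n) = n*(P(E_1) - 1) + 1, so any e1 < 1
# eventually drives the values below 0 -- forcing P(E_1) = 1 there.
print(p_en(0.5, 0.5, 0.9, 20)[-1])   # 20*(0.9 - 1) + 1 = -1, not a probability
print(p_en(0.5, 0.5, 1.0, 20)[-1])   # stays at 1

# Asymmetric case p = 0.6, q = 0.4: plugging e1 = q/p gives P(E_n) = (q/p)^n,
# in agreement with the general formula above.
print(p_en(0.6, 0.4, 0.4 / 0.6, 10)[-1], (0.4 / 0.6) ** 10)
```

Note that for $p > q$ both characteristic roots ($1$ and $\frac qp$) have modulus at most $1$, so the recurrence alone stays bounded for any $E_1$; that is why this argument only pins down $P(E_1)$ in the case $p \le q$.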

ANSWER

We have to assume that the variables $B_1,B_2,\dots, B_s,\dots$ are independent (as a family); otherwise the argument mentioned does not work.

I am doing so in the sequel.

They also have the same distribution: as indicated, each member of the family takes the values $1,0,-1$ with probabilities $p/2$, $1/2$, $q/2$ respectively.

(Such a situation is often denoted "i.i.d." in courses and posts, for independent and identically distributed.)

The two degenerate cases,

  • $p=0$, $q=1$: the sums can only stay or go down,
  • $p=1$, $q=0$: the sums can only stay or go up,

have easy values for $\Bbb P(E_1)$: they are $1$ and $0$ respectively, and we (heuristically) expect them as limits. Assume from now on $pq\ne 0$.

For $s\ge 1$, let $A(s)$ be the event that at least one of the sums $B_s$, $B_s+B_{s+1}$, $B_s+B_{s+1}+B_{s+2}$, and so on, $B_s+B_{s+1}+\dots+B_t$, and so on, takes the value $-1$. Because of the assumed i.i.d. property, all $A(s)$ have the same probability, and this value is unchanged if we condition on given values of the previous variables $B_1,\dots,B_{s-1}$. Let us denote this probability by $x$, so $x=\Bbb P(E_1)$ is the wanted number. Then with the OP's argument we have $x = \frac 12(px^2+x+q)$, so the equation in $x$ to be solved is: $$ px^2 - x + q=0\ . $$ It has the two positive solutions $1$ and $\frac qp$. Which one shall we take?

In the case $q\ge p$ we have $\frac qp\ge 1$, so we must take the root $1$: a probability cannot exceed $1$, and for $q>p$ the other root does.



From now on we assume $p>q$. I don't see a way to distinguish between the two roots by simply conditioning on the outcome of $B_1$. Instead, I will give the usual "gambler's ruin" argument.

First of all, we replace the $(p/2,\,1/2,\,q/2)$-trinomial model with a binomial model, to have an easier situation. For this we let the many $B$'s show their outcomes in time, and "stop the clock" each time there is a $+1$ or a $-1$ result. The first stop in time is when the first $\pm 1$ occurs, the second stop in time is when the second $\pm 1$ occurs, the third stop is when the third $\pm 1$ occurs, and so on. In a given situation, what is the probability that the next stop shows a $+1$? It is the sum of the probabilities of the chains $1$, $01$, $001$, and so on, that is: $$ \frac p2\left(1+\frac 12+\frac 14+\frac 18+\dots\right)=\frac p2\cdot 2=p\ . $$ For a next $-1$ we can apply the same argument to get $q$, or pass to the complement $1-p=q$ (since an infinite chain of zeros has probability zero). So we may as well work in a binomial model with parameters $(p,q)$.
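The geometric-series computation above can be checked by simulation (my own Python sketch; the parameter $p=0.7$ is just an example):

```python
import random

# Check the trinomial-to-binomial reduction: conditioned on the first nonzero
# outcome among the B's, that outcome is +1 with probability
# (p/2) * (1 + 1/2 + 1/4 + ...) = p.

def first_nonzero_is_plus(p, q, trials=50000, seed=1):
    rng = random.Random(seed)
    plus = 0
    for _ in range(trials):
        while True:
            u = rng.random()
            if u < p / 2:            # B = +1: record and stop the clock
                plus += 1
                break
            if u < (p + q) / 2:      # B = -1: stop the clock
                break
            # B = 0: keep the clock running, draw again
    return plus / trials

print(first_nonzero_is_plus(0.7, 0.3))   # should be close to p = 0.7
```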

Fix some $N\ge 1$ and consider the following game. A gambler comes into the casino with an amount $k$ and plays "simple games" with outcomes $\pm 1$ until either the ruin amount $0$ or the amount $N$ is reached. What is the win probability $P_k=P_k(N)$ (for each $k$ between $0$ and $N$)? We have of course: $$ \begin{aligned} P_0 &=0\ , &&\text{(Starting in zero stops the game. No chance to win.)}\\ P_1 &= pP_2+qP_0\ ,\\ P_2 &= pP_3+qP_1\ ,\\ &\vdots\\ P_{N-1}&=pP_N+qP_{N-2}\ ,\\ P_N&=1\ . &&\text{(Starting in $N$ stops the game. It is a win.)} \end{aligned} $$ So for each $k$ with $0<k<N$ we have $(p+q)P_k=pP_{k+1}+qP_{k-1}$, leading to $p(P_{k+1}-P_k)=q(P_k-P_{k-1})$, so $$ \begin{aligned} P_2-P_1 &=\frac qp(P_1-P_0)=\frac qp P_1\ ,\\ P_3-P_2 &=\frac qp(P_2-P_1)=\left(\frac qp\right)^2 P_1\ ,\\ &\ \vdots\\ P_N-P_{N-1} &=\frac qp (P_{N-1}-P_{N-2})=\dots=\left(\frac qp\right)^{N-1} P_1\ , \end{aligned} $$ and after adding, $$ 1 =P_N =(P_N-P_{N-1})+\dots+(P_3-P_2)+(P_2-P_1)+P_1 =\left(\left(\frac qp\right)^{N-1} + \dots+ \left(\frac qp\right)^2 +\frac qp +1 \right)P_1 = \frac {\displaystyle 1-\left(\frac qp\right)^N} {\displaystyle 1-\frac qp}\,P_1 \ . $$ So starting in $1$ the gambler wins with the positive probability $$ \bbox[lightyellow]{\qquad P_1 = P_1(N) = \frac {\displaystyle 1-\frac qp} {\displaystyle 1-\left(\frac qp\right)^N} \qquad} \ . $$ Example: in the case $p=\frac {13}{25}$ and $q=\frac {12}{25}$, assume the gambler wants to reach $100$. This happens with probability: $$ P_1(100) = \frac {1-\frac{12/25}{13/25}} {1-\left(\frac{12/25}{13/25}\right)^{100}} = \frac{1-\frac {12}{13}}{1-\left(\frac{12}{13}\right)^{100}} \approx P_1(\infty)= 1-\frac {12}{13}=\frac 1{13}\ . $$ In the limit $N\to\infty$, the probability to never reach zero is $$ P_1(\infty)=\lim_{N\to\infty} P_1(N) = 1-\frac qp\ . $$ And our $x$ is the probability to reach zero at some point, so it is $x=1-P_1(\infty)=\frac qp$, "the right choice" among the two roots.
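As a cross-check of the boxed formula (my own Python sketch, not part of the answer), one can compute $P_1(N)$ both from the closed form and by iterating the recurrence $P_{k+1} = (P_k - qP_{k-1})/p$:

```python
# (i) closed form: P_1(N) = (1 - q/p) / (1 - (q/p)^N).
def p1_closed(p, q, N):
    r = q / p
    return (1 - r) / (1 - r ** N)

# (ii) recurrence: the solution of P_{k+1} = (P_k - q*P_{k-1})/p is linear in
# the unknown P_1, so run the recurrence with S_0 = 0, S_1 = 1; the boundary
# condition P_N = 1 then gives P_1 = 1 / S_N.
def p1_recurrence(p, q, N):
    s_prev, s = 0.0, 1.0          # S_0, S_1
    for _ in range(2, N + 1):
        s_prev, s = s, (s - q * s_prev) / p
    return 1.0 / s                # P_1 = 1 / S_N

p, q, N = 13 / 25, 12 / 25, 100
# Both values should agree, and both should be close to 1/13 ~ 0.0769.
print(p1_closed(p, q, N), p1_recurrence(p, q, N), 1 - q / p)
```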