Asymmetric Random Walk on $\mathbb{Z}$


Suppose we have an asymmetric random walk on $\mathbb{Z}$ starting at $0$, with transition probabilities $p(x,x+1)=\frac{1}{3}$ and $p(x, x-1)=\frac{2}{3}$. What is the probability that this random walk ever reaches some positive integer $n$?

I see that this random walk is asymmetric and the probability that it ever reaches any negative integer is $1$. But I am not sure about the case for positive $n$. I can only intuitively guess that the probability gets closer to $0$ the bigger $n$ is, but I was wondering how one would solve for each $n$.


BEST ANSWER

Let $a$ be the probability that you ever reach $n+1$ from $n$. Then we know that $$a=\frac23a^2 + \frac13,^*$$ which gives $a=1$ or $a=\frac12$; since the walk drifts downward, $a \neq 1$, so $a=\frac12$.

*How did I get this? Notice that the probability you reach $n+1$ on the very next move is $\frac13$ -- and if you don't reach it in one move, then you go down a step to $n-1$. From there you must first climb back to $n$ and then climb one level further, two independent level-crossings each with probability $a$, which is where the $a^2$ comes from -- for a total of, well, what it says there.

So the probability of ever reaching $n$ from $0$ is just $a^n=\frac1{2^n}$, since the walk must cross each of the $n$ levels $1, 2, \dotsc, n$ in turn.
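If you want to check this numerically, here is a minimal Monte Carlo sketch (the function name, trial count, and truncation floor are my own choices, not part of the answer):

```python
import random

def prob_reach(n, trials=20_000, floor=-40, seed=1):
    """Estimate the probability that the walk started at 0 ever hits +n.

    The walk drifts to -infinity, so a run is abandoned once it falls
    to `floor`; from there the chance of ever climbing back up to n
    is at most (1/2)**(n - floor), which is negligible here.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = 0
        while floor < x < n:
            # step up with probability 1/3, down with probability 2/3
            x += 1 if rng.random() < 1/3 else -1
        if x == n:
            hits += 1
    return hits / trials
```

For $n = 1, 2, 3$ the estimates land close to $\frac12, \frac14, \frac18$, as predicted.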

SECOND ANSWER

For models that are left non-skipping, i.e., $ p_{i,i-k} = 0 $ for $ k \geq 2$ (the chain can never jump more than one step to the left), and that are space-homogeneous (I will explain this) for states $ i \geq 1$, there is a neat solution. Obviously the random walk satisfies both conditions, but there are many other such models.

Let us say we have a Markov chain on the state space $ \{ 0,1,2, \dotsc \} $ (we will argue soon that such a state space is enough). We assume that for all states $ i \geq 1 $ the transition probabilities are space-homogeneous, i.e., $ p_{i, j} = q ( j-i+1) $ for $ j \geq i-1 $ with $ \sum_{ k = 0}^{\infty} q(k) = 1 $. More precisely, the transition matrix looks like the following: \begin{align*} \mathbf{P} = \begin{bmatrix} p_{0,0} & p_{0,1} & p_{0,2} & p_{0,3} & p_{0,4} & \dotsc \\ q(0) & q(1) & q(2) & q(3) & q(4) & \dotsc \\ 0 & q(0) & q(1) & q(2) & q(3) & \dotsc \\ 0 & 0 & q(0) & q(1) & q(2) & \dotsc \\ \vdots & \vdots & \vdots & \vdots & \vdots & \cdots \end{bmatrix}. \end{align*} We further assume that $ q (0) > 0 $ and $ q(k) > 0 $ for some $ k \geq 2 $. These assumptions ensure that for every state $ k > 0 $ we have $ k \leadsto 0$. The transition probabilities from the state $0$ are not important. Clearly such a chain is left non-skipping. Models like this arise in discrete queueing systems.

Consider the random variable $ \xi $ taking values in $ \{ 0, 1 , \dotsc \}$ with $ \mathbb{P} ( \xi = k ) = q(k) $ for $ k \geq 0$. The interpretation of $ \xi $ is that, instead of a coin, we are given this random variable. At each time step, when we are not at $0$, we sample from $ \xi $, and if the sample is $ k $, we move by $ k-1$ from the present position, i.e., $ X_{n+1} = X_n + \xi - 1 $ when $ X_n > 0 $.
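The dynamics above can be simulated directly. The sketch below (the function name and the abandonment ceiling are my own choices) estimates the probability of ever hitting $0$ for any finitely supported $q$:

```python
import random

def hit_zero_prob(q, start, trials=10_000, ceiling=40, seed=1):
    """Monte Carlo estimate of the probability of ever reaching 0 from
    `start`, for X_{n+1} = X_n + xi - 1 with P(xi = k) = q[k].

    A run is abandoned once X exceeds `ceiling`: when the walk is
    transient this misses at most rho**ceiling of the hitting
    probability, which we assume is negligible.
    """
    rng = random.Random(seed)
    values, weights = zip(*q.items())
    hits = 0
    for _ in range(trials):
        x = start
        while 0 < x <= ceiling:
            # move by xi - 1, where xi is drawn with weights q(k)
            x += rng.choices(values, weights)[0] - 1
        if x == 0:
            hits += 1
    return hits / trials
```

Note that since $\xi - 1 \geq -1$, the walk can never jump past $0$: this is exactly the left non-skipping property.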

Consider the probability generating function $ \alpha(s) = \sum_{ k = 0 }^{ \infty } q(k) s^k $. Let $ \rho $ be the smallest solution of $ \alpha(s) = s $ in $[0,1]$. Note that $ \rho < 1 $ if and only if $ \alpha^{\prime}(1) = \mathbb{E} ( \xi) > 1 $. Then, for $ i \geq 1 $, we have \begin{equation*} f_{i,0}^{\star} = \begin{cases} 1 & \text{ if } \rho = 1 \\ \rho^{i} & \text{ if } \rho < 1 \end{cases} \end{equation*} where $ f_{i,0}^{\star} $ is the probability of ever reaching $0$ from $ i $.
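Numerically, $\rho$ can be found by iterating $s \mapsto \alpha(s)$ from $s = 0$, which converges monotonically to the smallest fixed point in $[0,1]$ because $\alpha$ is increasing and convex there (the same argument as for branching-process extinction probabilities). A small sketch, with a finitely supported $q$ given as a dict (names and tolerances are my own):

```python
def smallest_fixed_point(q, tol=1e-12, max_iter=10_000):
    """Return the smallest solution in [0, 1] of alpha(s) = s, where
    alpha(s) = sum_k q[k] * s**k, by fixed-point iteration from 0."""
    alpha = lambda s: sum(qk * s**k for k, qk in q.items())
    s = 0.0
    for _ in range(max_iter):
        s_next = alpha(s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s
```

For $q(0)=\frac13$, $q(2)=\frac23$ this returns $\rho \approx \frac12$, while for $q(0)=\frac23$, $q(2)=\frac13$ (so $\mathbb{E}(\xi) < 1$) it returns $\rho \approx 1$.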

In the case of the random walk, we may first ignore the states below $0$ because of the non-skipping behavior: from any $ i \geq 1 $, to reach a state $ k < 0 $ we must first reach $0$, and only then can we visit $k$. Also, it is clear that $ f_{ i,0}^{\star} $ involves only transitions from states $ i \geq 1 $: we are interested in reaching $0$, and what happens after reaching $0$ is irrelevant, so the transition probabilities from $0$ are not required. That is why we can take any transition probabilities from $0$.

We are interested in the case when $ \xi $ takes two values: $ 0 $ with probability $q $ and $2$ with probability $p$. In other words, we have \begin{equation*} q(i) = \begin{cases} q & \text{ if } i = 0 \\ p & \text{ if } i = 2 \\ 0 & \text{ otherwise. } \end{cases} \end{equation*} Thus, $ \alpha (s) = q + ps^2 $. The smallest solution of the equation $ \alpha(s) = s $ is smaller than $1$ if and only if $ \alpha^{\prime}(1) = \mathbb{E} ( \xi) = 2p > 1 $, i.e., $ p > 1/2$. If $ p \leq 1/2 $, we have $ f_{i,0}^{\star} = 1 $ for all $ i > 0 $. For $ p > 1/2 $, solving the quadratic we have $ \rho = q / p < 1 $ and $ f_{i,0}^{\star} = (q / p)^{i} $ for all $ i > 0 $. For the original question, reflect the walk: reaching $+n$ from $0$ with up-probability $\frac13$ is the same as reaching $0$ from $n$ in this chain with $ p = \frac23 $ and $ q = \frac13 $, so the probability is $ (q/p)^n = \frac1{2^n} $, agreeing with the first answer.
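For this two-point case the quadratic can be solved in closed form. The sketch below (the helper name is mine) computes $\rho$ from $p$ and checks the reflection argument: with $p = \frac23$, $q = \frac13$ we recover $(q/p)^n = \frac1{2^n}$, the answer to the original question.

```python
import math

def rho_two_point(p):
    """Smallest root in [0, 1] of q + p*s**2 = s, where q = 1 - p.
    The two roots are 1 and q/p, so this is min(1, q/p)."""
    q = 1.0 - p
    disc = math.sqrt(1.0 - 4.0 * p * q)
    # (1 - disc) / (2p) is the smaller root; cap at 1 against
    # floating-point error in the subcritical case p < 1/2
    return min((1.0 - disc) / (2.0 * p), 1.0)
```

For $p = \frac23$ this gives $\rho = \frac12$, and $f_{n,0}^{\star} = \rho^n = \frac1{2^n}$; for $p \leq \frac12$ it gives $\rho = 1$, i.e., hitting $0$ is certain.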