I was just wondering about 2D random walks when I got the idea of a position-dependent 2D random walk:
A man starts at $(x,y)$ and can move only parallel to the X- and Y-axes. The probability that he takes a step in the X-direction is $\frac{|x|}{|x|+|y|}$, and the probability that he takes a step in the Y-direction is $\frac{|y|}{|x|+|y|}$. Given that he takes a step in the X-direction, the probability that it is in the positive X-direction is $\frac{|x|}{1+|x|}$ and in the negative X-direction is $\frac{1}{1+|x|}$. Given that he takes a step in the Y-direction, the probability that it is in the positive Y-direction is $\frac{1}{1+|y|}$ and in the negative Y-direction is $\frac{|y|}{1+|y|}$.
Find the probability that his motion along one of the directions stops after $n$ steps. Also find the probability that his motion along a direction eventually stops. (Assume that reaching the origin at any point terminates the walk.)
I have no idea how to approach this analytically; I think some kind of recurrence relation would help, but which one?
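For concreteness, here is how I picture one step of the walk (a minimal Python sketch; the function name is my own):

```python
import random

def step(x, y):
    """One step of the walk from (x, y) != (0, 0); reaching (0, 0) ends the walk."""
    if random.random() < abs(x) / (abs(x) + abs(y)):   # step in the X-direction
        # +x with probability |x|/(1+|x|), -x with probability 1/(1+|x|)
        return (x + 1, y) if random.random() < abs(x) / (1 + abs(x)) else (x - 1, y)
    else:                                              # step in the Y-direction
        # +y with probability 1/(1+|y|), -y with probability |y|/(1+|y|)
        return (x, y + 1) if random.random() < 1 / (1 + abs(y)) else (x, y - 1)
```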
Once the $x$-coordinate reaches $0$, it never leaves: the probability of a horizontal step is then $\frac{0}{0+|y|}=0$. Same for the $y$-coordinate. Let's compute the probability that we eventually get $x=0$.
Define $t_k$ as the time just after the $k$th horizontal step (with $t_0=0$), and define $X(t_k)$ as the $x$-coordinate just after the $k$th horizontal step. Then $\{X(t_0), X(t_1), X(t_2), \ldots\}$ is a 1-d Markov random walk with
$$ \Pr[X(t_{k+1}) = i+1 \mid X(t_k)=i] = \frac{i}{i+1} \quad \forall i \in \{1, 2, 3, \ldots\} $$
Assume we start at a positive integer $X(t_0)=x_0>0$. For $i \in \{0, 1, 2, \ldots\}$ define
$$ p_i = \mbox{probability we eventually reach $0$, given we start at $x$-location $i$} $$
Then $p_0=1$, and first-step analysis gives the recurrence
$$ p_i = \left(\frac{i}{i+1}\right)p_{i+1} + \left(\frac{1}{i+1}\right)p_{i-1} \quad \forall i \in \{1,2,3,\ldots\} $$
Rearranging this equation gives
$$ \frac{p_{i+1}-p_i}{p_i-p_{i-1}} = \frac{1}{i} $$
Multiplying over $i \in \{1, \ldots, K\}$ telescopes:
$$ \frac{1}{K!} = \prod_{i=1}^{K} \frac{p_{i+1}-p_i}{p_i-p_{i-1}} = \frac{p_{K+1}-p_K}{p_1-p_0} $$
Hence, for all $K \in \{1, 2, 3, \ldots\}$,
$$ p_{K+1}-p_K = \frac{p_1-p_0}{K!} $$
and this also holds for $K=0$. Summing over $K \in \{0, 1, \ldots, M-1\}$ for $M>0$ gives
$$ p_M-p_0 = (p_1-p_0)\sum_{K=0}^{M-1}\frac{1}{K!} $$
Since $p_0=1$, we have for all $M>0$:
$$ p_M = 1 - (1-p_1)\sum_{K=0}^{M-1} \frac{1}{K!} $$
However, the chain has positive drift (from state $i \geq 2$ the expected displacement per step is $\frac{i}{i+1}-\frac{1}{i+1} = \frac{i-1}{i+1} \geq \frac{1}{3}$), so a walk started far from the origin escapes to $+\infty$ with probability approaching $1$, and $\lim_{M\rightarrow\infty} p_M=0$. Thus
$$ 0 = \lim_{M\rightarrow\infty} p_M = 1-(1-p_1)e $$
and hence $p_1 = 1-1/e$. So for all $M \in \{1, 2, 3, \ldots\}$ we get
$$ \boxed{p_M = 1 - \frac{1}{e}\sum_{K=0}^{M-1}\frac{1}{K!} } $$
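As a sanity check, the boxed formula is easy to test numerically. Below is a small Monte Carlo sketch of the embedded horizontal chain (Python, with my own naming; the `escape` cutoff is justified because $p_M = \frac{1}{e}\sum_{K=M}^{\infty}\frac{1}{K!}$ is negligible for moderate $M$):

```python
import math, random

def hits_zero(x0, escape=30):
    """Embedded horizontal chain: i -> i+1 w.p. i/(i+1), else i -> i-1.
    Returns True if it reaches 0 before exceeding `escape`; from x = 30
    the return probability is about 1/(e*30!), i.e. negligible."""
    x = x0
    while 0 < x < escape:
        x += 1 if random.random() < x / (x + 1) else -1
    return x == 0

def p_closed_form(m):
    """p_M = 1 - (1/e) * sum_{K=0}^{M-1} 1/K!"""
    return 1 - sum(1 / math.factorial(k) for k in range(m)) / math.e

trials = 100_000
for m in (1, 2, 3, 4):
    estimate = sum(hits_zero(m) for _ in range(trials)) / trials
    print(m, round(estimate, 4), round(p_closed_form(m), 4))
```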
As a minor detail, it can be shown that, given $x>0$, we will eventually (with probability $1$) take another step in the horizontal direction, regardless of the current $y$ value: if no further horizontal step occurred, all subsequent steps would be vertical, and the vertical walk is absorbed at $y=0$ with probability $1$; but from $(x,0)$ with $x>0$ the next step is horizontal, a contradiction. Thus, $\{X(t_0), X(t_1), X(t_2), \ldots \}$ either goes on forever or stops when we reach $0$. The same holds when $y>0$.
The probability of eventually hitting $y=0$, given the initial $y$-location, can be computed similarly. The event of eventually hitting $x=0$ is independent of the event of eventually hitting $y=0$ (see the thought experiment below), so the probability of eventually hitting both is just the product of the two probabilities.
The problem gives the vertical motion a negative drift (opposite to the horizontal motion), and in this case the probability of eventually hitting $y=0$ equals $1$: letting $q_i$ be the probability of eventually reaching $y=0$ from $y$-location $i$, the same telescoping argument gives $q_{K+1}-q_K = (q_1-1)K!$, and since the $q_i$ are bounded in $[0,1]$ this forces $q_1=1$, hence $q_M=1$ for all $M$.
Overall, given initial location $(x_0,y_0)$ such that $x_0 \geq 0$, $y_0\geq 0$, the probability of eventually reaching $(0,0)$ is $p_{x_0}$ (as defined in the boxed equation above), so it does not depend on $y_0$. Of course, if we ever do reach $(0,0)$, the time required to do so depends on $y_0$.
Here is a possibly helpful thought experiment. Imagine simulating the system as follows. Fix $(x_0,y_0)\geq (0,0)$. Generate an infinite sequence $\{A_0, A_1, A_2, \ldots\}$ as a 1-d Markov random walk with $A_0=x_0$, with state $0$ absorbing, and with transition probabilities $\Pr[A_{k+1}=i+1|A_k=i]=\frac{i}{i+1}$ for $i>0$. Independently generate an infinite sequence $\{B_0, B_1, B_2, \ldots\}$ as a 1-d Markov random walk with $B_0=y_0$, with state $0$ absorbing, and with $\Pr[B_{k+1}=i+1|B_k=i]=\frac{1}{i+1}$ for $i>0$.
Now generate the 2-d $(X(t),Y(t))$ process as follows. Define $(X(0),Y(0))=(A_0,B_0)$. For each slot $t \in \{0, 1, 2, \ldots\}$: If $(X(t),Y(t))=(0,0)$, choose $(X(t+1),Y(t+1))=(0,0)$. Else, independently flip a biased coin with $\Pr[\text{HEADS}] = \frac{X(t)}{X(t)+Y(t)}$. If HEADS, define $Y(t+1)=Y(t)$ and define $X(t+1)$ as the next unused $A_k$ value. If TAILS, define $X(t+1)=X(t)$ and define $Y(t+1)$ as the next unused $B_k$ value.
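Here is a minimal Python sketch of this construction (my own naming; `absorbed_walk` plays the role of the $\{A_k\}$ and $\{B_k\}$ generators):

```python
import random

def absorbed_walk(start, up_prob):
    """Lazily yields A_0, A_1, ... (or B_0, B_1, ...): a 1-d walk with
    state 0 absorbing and Pr[i -> i+1] = up_prob(i) for i > 0."""
    i = start
    while True:
        yield i
        if i > 0:  # state 0 is absorbing
            i += 1 if random.random() < up_prob(i) else -1

def coupled_walk(x0, y0, steps):
    """Builds (X(t), Y(t)) by merging independent A and B sequences
    with the biased coin Pr[HEADS] = X(t)/(X(t)+Y(t))."""
    A = absorbed_walk(x0, lambda i: i / (i + 1))  # horizontal: drift up
    B = absorbed_walk(y0, lambda i: 1 / (i + 1))  # vertical: drift down
    x, y = next(A), next(B)                       # (X(0), Y(0)) = (A_0, B_0)
    trajectory = [(x, y)]
    for _ in range(steps):
        if (x, y) != (0, 0) and random.random() < x / (x + y):
            x = next(A)   # HEADS: horizontal step, next unused A value
        elif (x, y) != (0, 0):
            y = next(B)   # TAILS: vertical step, next unused B value
        trajectory.append((x, y))
    return trajectory
```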
The sequence $\{A_0, A_1, A_2, \ldots\}$ in the above thought experiment is exactly what I have been calling the “embedded sequence” $\{X(t_0), X(t_1), X(t_2), \ldots\}$. An example $\{(X(t),Y(t))\}_{t=0}^{\infty}$ trajectory is: $$ \{(\boxed{A_0},B_0), (A_0, B_1), (A_0,B_2), (\boxed{A_1}, B_2), (\boxed{A_2},B_2), (A_2, B_3), (A_2, B_4), (\boxed{A_3}, B_4), (A_3, B_5), \ldots\} $$ where I have boxed the embedded sequence $\{X(t_k)\}_{k=0}^{\infty}$.
The $\{A_k\}$ and $\{B_k\}$ sequences are independent of each other (by their construction). Also:
i) $X(t)$ eventually hits $0$ if and only if $\{A_k\}_{k=0}^{\infty}$ eventually hits $0$.
ii) $Y(t)$ eventually hits $0$ if and only if $\{B_k\}_{k=0}^{\infty}$ eventually hits $0$.
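Putting it all together, a direct simulation of the 2-d walk (again a sketch with my own naming, assuming $x_0, y_0 \geq 0$) shows the absorption probability matching $p_{x_0}$ regardless of $y_0$:

```python
import random

def reaches_origin(x0, y0, escape=30):
    """Direct simulation of the 2-d walk from (x0, y0) with x0, y0 >= 0.
    Only x can escape to infinity (y drifts down toward 0), and from
    x = 30 the return probability is negligible, so we truncate there."""
    x, y = x0, y0
    while (x, y) != (0, 0) and x < escape:
        if random.random() < x / (x + y):             # horizontal step
            x += 1 if random.random() < x / (1 + x) else -1
        else:                                          # vertical step
            y += 1 if random.random() < 1 / (1 + y) else -1
    return (x, y) == (0, 0)

trials = 50_000
for y0 in (1, 5, 10):
    estimate = sum(reaches_origin(2, y0) for _ in range(trials)) / trials
    print(y0, round(estimate, 4))   # all approximately p_2 = 1 - 2/e = 0.2642...
```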