Recurrence of random walk in a finite set


Let $(S_n)_{n \geq 0}$ be a random walk in $\Bbb Z^d$ starting at $x$. Let $$N=\sum_{n \geq 0}\Bbb 1_{S_n=x}$$ be the random variable counting the number of visits of $(S_n)$ to its starting point $x$.

I know that if $d \geq 3$, then $(S_n)$ is transient, i.e. $\Bbb P(N < \infty)=1$. But what if the random walk $(S_n)$ is constrained to a finite set $G \subset \Bbb Z^d$? Intuitively, $(S_n)$ then has "much less space" to move, even less than in $\Bbb Z^2$, where all walks are recurrent; so I expect $\Bbb P(N = \infty)=1$ when the walk is constrained to a finite subset of $\Bbb Z^d$. However, I cannot find a formal proof of this fact. Does someone have one, or a link/document I can read?



BEST ANSWER

Knowing that our "random walk" involves only a finite number of states does not by itself imply that "all walks are recurrent." If the transition probabilities between states depend only on the states (and not on the time at which a step is taken), then the random walk can be modelled as a finite Markov chain. That is, we collect the transition probabilities $p_{ij}$ from state $i$ to state $j$ into a row-stochastic matrix $M$. Let $v = (0,\ldots,0,1,0,\ldots,0)$ be the row vector representing the initial state $x$, with a $1$ in the $x$ position and zeros elsewhere. Multiplying $v$ by a positive integer power $n$ of $M$:

$$ v M^n $$

gives us a row vector with the probability distribution of states after the $n$th step.
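As a concrete illustration of this computation (the 3-state chain below is a made-up example, not one from the question), the distribution $vM^n$ can be evaluated directly:

```python
import numpy as np

# Hypothetical 3-state chain: a lazy walk on a path of 3 sites.
# Each row lists the transition probabilities out of that state.
M = np.array([
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])

v = np.array([1.0, 0.0, 0.0])  # start in state 0

# The distribution of the chain after n steps is v M^n.
n = 50
dist = v @ np.linalg.matrix_power(M, n)
print(dist)  # ~[1/3, 1/3, 1/3]: this chain converges to its stationary distribution
```

Since this particular $M$ is doubly stochastic and aperiodic, the distribution converges to the uniform vector $(1/3,1/3,1/3)$ as $n$ grows.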

These definitions are conventional for Markov chains: a state is recurrent if, starting from it, the chain returns to it infinitely often with probability $1$; a state is transient if, starting from it, the chain returns to it infinitely often with probability $0$ (equivalently, it is visited only finitely often with probability $1$).

Denoting by $P_i$ the probability that, after starting from state $i$, the Markov chain later returns to state $i$ at least once, it can be shown that state $i$ is recurrent if $P_i = 1$ and transient if $P_i < 1$.
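The return probability $P_i$ can be computed by solving a small linear system for the hitting probabilities. A sketch for a hypothetical irreducible 3-state chain (the matrix is an example of my own, not from the question):

```python
import numpy as np

# Hypothetical 3-state chain (lazy walk on a path of 3 sites).
M = np.array([
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])

# h[j] = probability of ever reaching state 0 starting from state j.
# For j != 0 it satisfies h[j] = sum_k M[j, k] * h[k], with h[0] = 1.
# Rearranged: (I - Q) h_rest = r, where Q is M restricted to states {1, 2}
# and r holds the one-step probabilities of jumping straight to state 0.
Q = M[1:, 1:]
r = M[1:, 0]
h_rest = np.linalg.solve(np.eye(2) - Q, r)

# Return probability P_0: take one step from 0, then hit 0 again.
h = np.concatenate(([1.0], h_rest))
P0 = M[0] @ h
print(P0)  # ~1.0: state 0 is recurrent in this irreducible finite chain
```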

It follows that every state is either recurrent or transient (never both). In a finite Markov process there will always be at least one recurrent state, and it is possible for all states to be recurrent (so in some cases there could be no transient states). See also this previous Question about the need for infinitely many states to have all states be transient.

A simple example of a Markov chain with at least one transient state pairs an absorbing state with one other state, say the chain given by the matrix:

$$ \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1 \end{pmatrix} $$
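A quick numerical check of this example (indexing the states as $0$ and $1$, with state $1$ absorbing): the probability of still being in state $0$ after $n$ steps is $(1/2)^n$, so the expected number of visits to state $0$ is $\sum_n (1/2)^n = 2 < \infty$, confirming transience.

```python
import numpy as np

# The 2x2 example chain: state 1 is absorbing, state 0 leaks into it.
M = np.array([[0.5, 0.5],
              [0.0, 1.0]])
v = np.array([1.0, 0.0])  # start in state 0

# Expected number of visits to state 0 = sum over n of P(S_n = 0) = sum (1/2)^n.
expected_visits = sum((v @ np.linalg.matrix_power(M, n))[0] for n in range(60))
print(expected_visits)  # ~2.0: finite, so state 0 is transient
```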

ANSWER

If $p_{i,j}^{(r)} >0$ and $p_{j,i}^{(s)} >0$ for some finite $r,s$ (i.e., states $i$ and $j$ communicate), then state $j$ is recurrent if and only if state $i$ is recurrent.

For a walk on a finite connected $G$, every state can be reached from every other state, i.e., $p_{i,j}^{(n_{i,j})}>0$ for some $n_{i,j}<\infty$ for every pair $i,j$, so the chain is irreducible. A finite Markov chain always has at least one recurrent state, say state $m$. Hence, by the criterion above, all states are recurrent.

See: https://www.uni-ulm.de/fileadmin/website_uni_ulm/mawi.inst.110/lehre/ws13/Stochastik_II/Skript_2B.pdf
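The recurrence can also be seen empirically by simulating a walk constrained to a finite box. The boundary behavior below (rejecting steps that would leave the box) is an assumption of mine, since the question does not specify how the walk is constrained:

```python
import random

def constrained_walk_returns(L=4, steps=20000, seed=0):
    """Simulate a simple walk on the box {0,..,L}^2 with steps that
    would leave the box rejected, and count visits to the start (0, 0)."""
    random.seed(seed)
    x, y = 0, 0
    visits = 0
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = x + dx, y + dy
        if 0 <= nx <= L and 0 <= ny <= L:  # reject steps leaving the box
            x, y = nx, ny
        if (x, y) == (0, 0):
            visits += 1
    return visits

print(constrained_walk_returns())  # visits grow with the number of steps
```

Because the constrained chain is irreducible on a finite state space, the fraction of time spent at the origin converges to a positive constant, so the visit count grows without bound as `steps` increases.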