Expected Hitting time given initial conditions for a bounded random walk


Suppose we have a random walk in one dimension bounded between $0$ and $N$, where $0$ is an absorbing state. The random walker can move up by $1$, down by $1$, or stay where it is. I want to find the expected time it takes to reach $0$ given that it started at some point $k$ (specifically $k=1$, but a more general formula is fine as well).

I know this is equivalent to finding $E(T \mid X_0 = k)$, where $T = \inf \{ t>0 : X_t=0 \}$ is the hitting time. I have the one-step transition probabilities as functions of $k$ (I can give their full form if requested, but I don't think it's relevant). These are

$$ P(k+1 \mid k) = b(k), \qquad P(k-1 \mid k) = d(k), \qquad P(k \mid k) = 1- b(k) - d(k) $$

I tried conditioning on the first step and, with a bit of rearranging, got

$$E(T \mid k) = \frac{1+d(k)E(T \mid k-1) + b(k)E(T \mid k+1)}{d(k)+b(k)} $$

I also know the boundary conditions. Once the walk is at $0$ it has already hit, so no further time accrues in the recursion, and at $k=N$ the recursion applies with $b(N)=0$:

$$ E(T \mid 0) = 0, \qquad E(T \mid N) = \frac{1 + d(N)E(T \mid N-1)}{d(N)} $$

I've been trying to work out how to use this recursion to obtain a closed-form solution for $E(T \mid k)$ (specifically $E(T \mid 1)$) solely in terms of the functions $b(k)$ and $d(k)$. But because the recursion as expressed can run forwards or backwards, I've been struggling. What algebraic wizardry could I use to solve this problem?
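For a concrete sanity check, the first-step recursion can be solved numerically as a tridiagonal linear system. This is only a sketch with made-up rates: `N = 6` and the particular `b(k)`, `d(k)` below are assumptions for illustration, not part of the question.

```python
import numpy as np

N = 6  # assumed example size

# Hypothetical rates for illustration; any b(k), d(k) with
# b(k) + d(k) <= 1 and d(k) > 0 would work the same way.
def b(k): return 0.3 if k < N else 0.0   # b(N) = 0 at the reflecting top
def d(k): return 0.4

# Unknowns h[i] = E(T | X_0 = i+1) for i = 0..N-1.  The recursion
#   (b(k) + d(k)) E(T|k) = 1 + b(k) E(T|k+1) + d(k) E(T|k-1)
# becomes A h = 1; at k = 1 the d(k) E(T|0) term vanishes because
# no further time accrues once the walk has hit 0.
A = np.zeros((N, N))
rhs = np.ones(N)
for i, k in enumerate(range(1, N + 1)):
    A[i, i] = b(k) + d(k)
    if k > 1:
        A[i, i - 1] = -d(k)
    if k < N:
        A[i, i + 1] = -b(k)

h = np.linalg.solve(A, rhs)
print(h[0])  # E(T | X_0 = 1)
```

Of course this only gives numbers, not the closed form asked for, but it is useful for checking any candidate formula.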

1 Answer
If you just want the hitting time $\mathbb E[T \mid X_0 = 1]$, that's much easier to solve for than the general $\mathbb E[T \mid X_0 = k]$.

First of all, if we don't make state $0$ absorbing, but instead have the random walk from $0$ move up to $1$ with probability $1$, then $\mathbb E[T \mid X_0 = 1]$ is one less than the expected time between two visits to state $0$. It's a well-known result that if the (ergodic, finite-state, discrete) Markov chain has stationary distribution $\pi$, then the mean time between visits to state $j$ is $\frac{1}{\pi_j}$. Therefore $\mathbb E[T \mid X_0=1] = \frac{1}{\pi_0} - 1$.

Second, it is straightforward to solve for $\pi_0$ from the detailed balance equations $$\pi_k b(k) = \pi_{k+1} d(k+1).$$ (These equations come from observing that $\pi_k b(k)$ is the long-run frequency with which the random walk goes from $k$ to $k+1$, and $\pi_{k+1} d(k+1)$ is the long-run frequency with which it goes from $k+1$ to $k$. But these two events must alternate, so they happen with the same limiting frequency.)
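To see the balance equations in action, here is a small numerical sketch: build $\pi$ from the balance relations for the modified chain (with $b(0)=1$, so the walk bounces from $0$ up to $1$) and check it is stationary. The chain size `N = 6` and the rates are assumed values for illustration only.

```python
import numpy as np

N = 6  # assumed chain size

# Illustrative rates; b(0) = 1 and d(0) = 0 encode the modified,
# non-absorbing chain that bounces from 0 straight up to 1.
def b(k): return 1.0 if k == 0 else (0.3 if k < N else 0.0)
def d(k): return 0.0 if k == 0 else 0.4

# Build pi from pi_k b(k) = pi_{k+1} d(k+1), then normalise.
pi = np.ones(N + 1)
for k in range(N):
    pi[k + 1] = pi[k] * b(k) / d(k + 1)
pi /= pi.sum()

# Transition matrix of the modified chain.
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N: P[k, k + 1] = b(k)
    if k > 0: P[k, k - 1] = d(k)
    P[k, k] = 1 - b(k) - d(k)

print(np.allclose(pi @ P, pi))  # prints True: pi is stationary
```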

In terms of $\pi_0$, we get $\pi_1 = \pi_0 \cdot \frac{1}{d(1)}$, $\pi_2 = \pi_0 \cdot \frac{b(1)}{d(1)d(2)}$, and in general $$\pi_k = \pi_0 \cdot \frac{b(1)b(2)\dotsb b(k-1)}{d(1)d(2)\dotsb d(k)}.$$ So the sum $\pi_0 + \pi_1 + \dots + \pi_N$ is given by $$\pi_0\left(1 + \frac{1}{d(1)} + \frac{b(1)}{d(1) d(2)} + \dots + \frac{b(1)b(2)\dotsb b(N-1)}{d(1)d(2)\dotsb d(N)}\right).$$ But this sum must equal $1$, letting us solve for $\pi_0$. In particular, $\frac{1}{\pi_0}$ is just the expression in the parentheses, so $$\mathbb E[T \mid X_0 = 1] = \frac{1}{\pi_0} - 1 = \frac{1}{d(1)} + \frac{b(1)}{d(1) d(2)} + \dots + \frac{b(1)b(2)\dotsb b(N-1)}{d(1)d(2)\dotsb d(N)}.$$
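As a numerical cross-check of the final formula, the sketch below evaluates the displayed sum and compares it with $\mathbb E[T \mid X_0 = 1]$ obtained by solving the first-step recursion as a linear system in the absorbed chain. Again, `N = 6` and the rates are assumed values, not anything from the question.

```python
import numpy as np

N = 6  # assumed chain size
def b(k): return 0.3 if 1 <= k < N else 0.0  # b(N) = 0 at the top boundary
def d(k): return 0.4 if k >= 1 else 0.0

# Closed form: sum over k = 1..N of b(1)...b(k-1) / (d(1)...d(k)).
total, prod = 0.0, 1.0
for k in range(1, N + 1):
    prod /= d(k)        # prod is now b(1)...b(k-1) / (d(1)...d(k))
    total += prod
    prod *= b(k)

# First-step recursion solved directly: h[i] = E(T | X_0 = i+1).
A = np.zeros((N, N))
rhs = np.ones(N)
for i, k in enumerate(range(1, N + 1)):
    A[i, i] = b(k) + d(k)
    if k > 1: A[i, i - 1] = -d(k)
    if k < N: A[i, i + 1] = -b(k)
h = np.linalg.solve(A, rhs)

print(abs(h[0] - total) < 1e-9)  # prints True: the two computations agree
```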