A random walk on the clock


The setting is simple but I just don't know how to calculate a desired quantity, which will be introduced later.


Suppose we have a clock with $M$ points on it, i.e. $\{0, 1, 2, ..., M-1\}$, where $M\in\mathbb N$.

Suppose we start at some point on the clock, say $a$, where $0\leq a\leq M-1$.

We have a probability of $1/2$ to stay at point $a$, and a probability of $1/2$ to move. If we move, then we will have a probability $p$ to move clockwise and a probability $1-p$ to move anti-clockwise, where $0<p<1$.

Let $X_n$ denote our location after $n$ steps. We define $d(x)=\min\{x,M-x\}$ for all $0\leq x \leq M-1$, i.e. the distance between $x$ and the point $0$.


Question: Calculate $\lim_{n\to \infty} \mathbb E[d(X_n)]$, where $\mathbb E$ means expectation.

I have been thinking about this problem for quite a while but don't know how to approach it. We cannot use the same argument as in this problem (cf. @Did 's post in the answer) because $\mathbb E[d(X_n)\mid X_0=y]$ may differ for different $y$, where $0\leq y \leq M-1$. That is, our problem lacks the "symmetry" used there.

Also, I don't know if we need to consider $\lim_{n\to \infty} \mathbb E[d(X_n)]$ as a whole or consider the limit and the expectation separately, which means first calculating the expectation and then taking limit.

Thanks for help!

Best Answer

Existence of a Stationary Distribution

First note that the Markov chain you describe has a finite state space, is aperiodic (every state has holding probability $1/2$), and is irreducible (since $0<p<1$, every state can reach every other state).

As a result we can apply the convergence theorem for Markov chains (e.g. Theorem 4.9 in Levin and Peres), which states that a unique stationary distribution exists and that the distribution of $X_n$ converges to it; in particular, the long-run fraction of time spent in each state is given by the stationary distribution.

Identifying the Stationary Distribution

In general, a stationary distribution $\pi$, if it exists (and from the above we know that in our case it does), satisfies the equation

$$\pi = \pi P$$

(Definition 1.5.1 in Levin and Peres).

For your Markov chain the calculation is straightforward: every column of the transition matrix $P$ sums to $1$ (the chain is doubly stochastic), so the uniform distribution is stationary:

$$\pi = \left(\frac1M, \ldots, \frac1M\right)$$

So in the limit the probability the Markov chain is in any given state is uniform:

$$\lim_{n \rightarrow \infty} \mathbf P[X_n = m | X_0 = a] = \frac1M$$
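This is easy to check numerically. The sketch below (my own construction, not from the original post) builds the transition matrix of the lazy clock walk for sample values $M=7$, $p=0.3$ and verifies that the uniform vector solves $\pi = \pi P$:

```python
import numpy as np

def transition_matrix(M, p):
    """Transition matrix of the lazy clock walk: stay with prob 1/2,
    else step +1 (clockwise) w.p. p or -1 (anti-clockwise) w.p. 1-p, mod M."""
    P = np.zeros((M, M))
    for a in range(M):
        P[a, a] += 0.5
        P[a, (a + 1) % M] += 0.5 * p
        P[a, (a - 1) % M] += 0.5 * (1 - p)
    return P

M, p = 7, 0.3
P = transition_matrix(M, p)
pi = np.full(M, 1.0 / M)
print(np.allclose(pi @ P, pi))  # True: the uniform distribution is stationary
```

Note that the conclusion does not depend on $p$: the columns of $P$ sum to $0.5 + 0.5p + 0.5(1-p) = 1$ for any $p$.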

Exchanging Order of Limits/Expectations

Because the Markov chain has a finite state space, you are safe to exchange the order of limits and expectations (formally this is the Bounded Convergence Theorem, which in turn is a simple case of the Dominated Convergence Theorem; here the exchange is just a limit of a finite sum). Writing the expectation as a sum over the states,

$$ \begin{align*} \lim_{n\rightarrow \infty} \mathbf E[d(X_n)] & = \lim_{n\rightarrow \infty} \sum_{m=0}^{M-1} d(m) \mathbf P[X_n = m] \\ & = \sum_{m=0}^{M-1} d(m) \lim_{n\rightarrow \infty} \mathbf P[X_n = m] \\ & = \frac1M \sum_{m=0}^{M-1} d(m) \end{align*} $$

Evaluating the Sum

Finally it remains to evaluate the sum over $d(m)$. Letting $D(M) = \sum_{m=0}^{M-1}d(m)$ and examining some small cases, one can see that $D(M)$ satisfies the recurrence relation

$$D(M) = D(M-1) + \Bigl\lfloor\frac{M}{2} \Bigr\rfloor,$$

which for $M = 1,\ldots, 7$ gives $0,1,2,4,6,9,12$. Checking the OEIS, this is the quarter-squares sequence (A002620), with the closed form

$$D(M) = \Bigl\lfloor\frac{M^2}{4} \Bigr\rfloor$$
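Both the recurrence and the closed form are quick to confirm by brute force (a sketch of my own, directly following the definitions above):

```python
def d(x, M):
    """Distance on the clock between x and 0."""
    return min(x, M - x)

def D(M):
    """D(M) = sum of d(m) over m = 0, ..., M-1."""
    return sum(d(m, M) for m in range(M))

# Verify the recurrence D(M) = D(M-1) + floor(M/2)
# and the closed form D(M) = floor(M^2 / 4) for small M.
for M in range(2, 200):
    assert D(M) == D(M - 1) + M // 2
    assert D(M) == M * M // 4

print([D(M) for M in range(1, 8)])  # [0, 1, 2, 4, 6, 9, 12]
```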

From which we have

$$ \lim_{n\rightarrow \infty} \mathbf E[d(X_n)] = \frac1M \Bigl\lfloor\frac{M^2}{4} \Bigr\rfloor.$$
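As a sanity check, one can estimate $\mathbf E[d(X_n)]$ by Monte Carlo simulation of the walk and compare with the formula. The parameter choices below ($M=6$, $p=0.7$, starting point $a=2$) are illustrative, not from the original post; for $M=6$ the exact limit is $\lfloor 36/4\rfloor/6 = 1.5$.

```python
import random

def simulate_mean_distance(M, p, a, n_steps, n_trials, seed=0):
    """Estimate E[d(X_n)] for the lazy clock walk started at a."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        x = a
        for _ in range(n_steps):
            if rng.random() < 0.5:  # move with probability 1/2
                x = (x + 1) % M if rng.random() < p else (x - 1) % M
        total += min(x, M - x)     # d(X_n)
    return total / n_trials

M, p = 6, 0.7
est = simulate_mean_distance(M, p, a=2, n_steps=500, n_trials=20000)
exact = (M * M // 4) / M
print(est, exact)  # the estimate should be close to 1.5
```

Note that the limit is independent of both the starting point $a$ and the bias $p$, which the simulation confirms if you vary them.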