Consider a periodic Markov process on $n$ states arranged in a circle, with probability $p$ of rotating clockwise and probability $q = 1 - p$ of rotating counterclockwise. Write a Python program to simulate that Markov chain.
I do not understand this Markov chain, since I cannot identify the transition matrix: we have just two probabilities, of going clockwise or counterclockwise. Isn't this a random walk instead?
Question:
How should I approach this problem?
Thanks in advance!
Let $p = q = 1/2.$ Suppose the states $S = \{0, 1, 2, 3, 4\}$ are arranged in a circle with $0$ adjacent to $4.$ At each step you move one state clockwise or one state counterclockwise (probability $1/2$ each). Then you are correct that this is a random walk on the circle. But, as with many random walks, it is also a Markov chain: the next state depends only on the current state.
This random walk is easy to program using modular arithmetic (%% in R, % in Python) to wrap around the circle. If you write out the transition matrix for this random walk, you will see that it is doubly stochastic, so its stationary distribution is $\sigma = (1/5,\, 1/5,\, 1/5,\, 1/5,\, 1/5),$ which can be approximated by simulating the chain through 10,000 steps.
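Since the question asks for Python, here is a minimal simulation sketch (the seed, step count, and state labels are my own choices; % is Python's modulus operator):

```python
import random

random.seed(2024)  # fixed seed only for reproducibility; any seed works

n = 5           # number of states on the circle
steps = 10_000  # chain length
p = 0.5         # probability of a clockwise step (q = 1 - p counterclockwise)

state = 0
counts = [0] * n
for _ in range(steps):
    # move one state clockwise (+1) or counterclockwise (-1), wrapping mod n
    step = 1 if random.random() < p else -1
    state = (state + step) % n
    counts[state] += 1

freqs = [c / steps for c in counts]
print(freqs)  # each entry should be near 1/5 = 0.2
```

The observed frequencies approximate the stationary distribution $\sigma$; a longer run brings them closer to $(0.2, 0.2, 0.2, 0.2, 0.2).$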
The chain is ergodic, so this discrete uniform distribution is also the limiting distribution. [If I had used integers mod 4 (instead of mod 5), then the chain would be periodic of period 2, hence not ergodic: it would be in an even numbered state at alternate steps.]
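Both claims (doubly stochastic, uniform limiting distribution) can be checked numerically by building the $5 \times 5$ transition matrix and taking a high power of it. A sketch using NumPy; the exponent 1001 is an arbitrary large number:

```python
import numpy as np

n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = 0.5  # clockwise step
    P[i, (i - 1) % n] = 0.5  # counterclockwise step

# Doubly stochastic: every row and every column sums to 1.
print(P.sum(axis=0), P.sum(axis=1))

# A high power of P approximates the limiting distribution in each row.
P_power = np.linalg.matrix_power(P, 1001)
print(P_power[0])  # each row ≈ (0.2, 0.2, 0.2, 0.2, 0.2)
```

If you change `n` to 4, the chain becomes periodic with period 2, and high powers of `P` alternate between two patterns instead of converging.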
An ACF ('autocorrelation function') plot of the simulated states for the first 40 lags shows that knowing the current state is of little help in predicting the state a dozen steps later. (Google it if interested.)
Note: This $X$ chain can be used to illustrate that a function of a Markov process is not necessarily Markovian. Suppose we can only observe whether or not the chain is in state $0$: let $Y_i = 0$ if $X_i = 0,$ and $Y_i = 1$ otherwise. For the $Y$-process, knowing the current state is not full information. If $Y$ has just left $0,$ then it has a 50:50 chance of returning to $0$ at the next step. But if $Y$ left $0$ two steps ago, then $X$ must now be in state $2$ or $3,$ so $Y$ cannot return to $0$ on the next step. Thus the $Y$-process has more than one-step dependence. In particular, $$\frac 1 2 = P(Y_4=0 \mid Y_3=1, Y_2 = 0) \ne P(Y_4=0 \mid Y_3=1, Y_2=1, Y_1=0) = 0,$$ which violates the Markov property.
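Those two conditional probabilities can be checked by simulation: generate a long $X$ path, derive $Y,$ and count the relevant patterns. A sketch with an arbitrary seed and run length:

```python
import random

random.seed(7)  # fixed seed only for reproducibility

n, steps = 5, 200_000
x = 0
Y = []
for _ in range(steps):
    x = (x + 1) % n if random.random() < 0.5 else (x - 1) % n
    Y.append(0 if x == 0 else 1)  # observe only whether X is in state 0

# P(Y_{t+2}=0 | Y_{t+1}=1, Y_t=0): among patterns 0,1,?, how often ? = 0
den = sum(1 for t in range(steps - 2) if Y[t] == 0 and Y[t+1] == 1)
num = sum(1 for t in range(steps - 2) if Y[t] == 0 and Y[t+1] == 1 and Y[t+2] == 0)
print(num / den)  # ≈ 1/2

# P(Y_{t+3}=0 | Y_{t+2}=1, Y_{t+1}=1, Y_t=0): among patterns 0,1,1,?, how often ? = 0
den2 = sum(1 for t in range(steps - 3)
           if Y[t] == 0 and Y[t+1] == 1 and Y[t+2] == 1)
num2 = sum(1 for t in range(steps - 3)
           if Y[t] == 0 and Y[t+1] == 1 and Y[t+2] == 1 and Y[t+3] == 0)
print(num2 / den2)  # 0.0: after the pattern 0,1,1 the walk is in state 2 or 3
```

The first estimate hovers near $1/2,$ while the second is exactly $0$ in every run, matching the displayed inequality.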