Continuous approximation of discrete Markov Chain


I have a tenuous grasp of stochastic processes and am looking for help solving a problem. Apologies for any abuse of notation or naivete. Thanks in advance!

Say we have a random walk on the lattice $\mathbb{N}^4$, where the state at time $t$, $s^t$, is Markov. The walk begins at $s^0 = \langle 0,0,0,0\rangle$ and increments by $1$ in some dimension at each time step, so $\|s^t\|_1 = t$. The transition distribution is
$$\Delta(s) = \left\langle \left(1-\tfrac{\alpha^{\|s\|_1}}{2}\right)^2,\; \tfrac{\alpha^{\|s\|_1}}{2}-\tfrac{\alpha^{2\|s\|_1}}{4},\; \tfrac{\alpha^{\|s\|_1}}{2}-\tfrac{\alpha^{2\|s\|_1}}{4},\; \left(\tfrac{\alpha^{\|s\|_1}}{2}\right)^2 \right\rangle,$$
where $\alpha \in (0,1)$ is a fixed parameter of the walk and the value of $\Delta(s)$ in each dimension is the probability of the walk moving one step in that spatial dimension (I'm sure this notation is wrong). I'm trying to say something general about the behavior of this walk as $t$ becomes large (say, the probability of it landing in a certain region). I've done some MCMC simulations, but I'm looking for an analytical approach.
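For concreteness, here is the kind of simulation I've been running; a minimal sketch (function name and structure are my own, not from any library):

```python
import random

def simulate_walk(alpha, T, seed=None):
    """Simulate one path of the lattice walk up to time T.

    At each step the increment dimension is drawn from Delta(s),
    which depends only on the current L1 norm ||s||_1 = t.
    """
    rng = random.Random(seed)
    s = [0, 0, 0, 0]
    for t in range(T):
        x = alpha ** t / 2.0  # alpha^{||s||_1} / 2, since ||s||_1 = t here
        probs = [(1 - x) ** 2, x - x * x, x - x * x, x * x]  # sums to 1
        u = rng.random()
        cum = 0.0
        for d, p in enumerate(probs):
            cum += p
            if u < cum:
                s[d] += 1
                break
        else:
            s[0] += 1  # guard against floating-point round-off
    return s
```

Running many such paths and tabulating where they land is what I mean by "MCMC simulations" above.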

Unless I'm missing something, calculating the precise probability of reaching a given state requires a recursive calculation over all predecessor states, which can't be carried out analytically for large $t$. But can we approximate this as a continuous-time walk, where $s$ now lives in $[0,\infty)^4$? With this setup, we could treat $\Delta(s)$ as a gradient $\nabla(s)$.

Thanks a ton!

1 Answer

First, let's slightly reformulate the problem:

The random variable $s_t$ you defined is a sum of independent random variables: $$s_t=\sum_{i=1}^t J_i,\hspace{1cm}J_i=\begin{cases}\langle1,0,0,0\rangle, & p=(1-\alpha^{i-1}/2)^2 \\ \langle0,1,0,0\rangle,& p=\alpha^{i-1}/2-\alpha^{2(i-1)}/4 \\ \langle 0,0,1,0\rangle,& p=\alpha^{i-1}/2-\alpha^{2(i-1)}/4 \\ \langle 0,0,0,1\rangle, & p=(\alpha^{i-1}/2)^2\end{cases}$$ (note the exponent $i-1$: the step taken at time $i$ is drawn from $\Delta(s^{i-1})$ with $\|s^{i-1}\|_1 = i-1$, so it depends only on the time index, not on the state itself). Hence, we are trying to understand the asymptotic behaviour of a sum of independent random variables. There are a few classical limit theorems that come to mind here.
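One immediate payoff of this reformulation: each coordinate of $s_t$ is marginally a sum of independent Bernoulli indicators, so its exact (Poisson-Binomial) marginal pmf can be computed by iterated convolution for moderate $t$, with no recursion over lattice states. A sketch, with function names of my own choosing:

```python
def step_probs(alpha, t):
    """Step distribution Delta(s) when ||s||_1 = t."""
    x = alpha ** t / 2.0
    return [(1 - x) ** 2, x - x * x, x - x * x, x * x]

def marginal_pmf(alpha, n, k):
    """Exact marginal pmf of coordinate k (0-based) of s_n.

    Coordinate k is a sum of independent Bernoulli(p_i) indicators
    with p_i = step_probs(alpha, i)[k] for i = 0..n-1, i.e. a
    Poisson-Binomial variable; convolve step by step, O(n^2) total.
    """
    pmf = [1.0]
    for i in range(n):
        p = step_probs(alpha, i)[k]
        new = [0.0] * (len(pmf) + 1)
        for j, q in enumerate(pmf):
            new[j] += q * (1 - p)      # indicator is 0 at step i
            new[j + 1] += q * p        # indicator is 1 at step i
        pmf = new
    return pmf
```

This gives exact marginals; the limit theorems below are about what survives as $n\to\infty$.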

Firstly, one may consider the central limit theorem, though the observation that $J_i\stackrel{\mathcal D}\Rightarrow \langle 1,0,0,0\rangle$ as $i\to \infty$ quickly rules out this approach: the summands become degenerate (their variances are summable, so the fluctuations stop accumulating), and thus we cannot expect a normal distribution in the limit.

As another approach, one may notice that the components of the $J_i$ are Bernoulli random variables, which suggests the Poisson limit theorem; however, the $J_i$ are not identically distributed, so this approach also does not quite work. It may be worth mentioning that Le Cam's theorem still applies and does allow a Poisson approximation, though not a very good one, I think. This approach works much better if one decomposes $s_n$ for large $n$ as $$s_n=s_K+\sum_{i=K+1}^n J_i$$ for a fixed large $K$. The first term $s_K$ has a Poisson-Binomial distribution, and one can apply Le Cam's theorem to the remainder term, where the per-step probabilities are already small, to obtain a Poisson approximation that is a lot better for large $K$.
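To make the remainder-term idea concrete: Le Cam's theorem bounds the $\ell_1$ distance between a sum of independent $\mathrm{Bernoulli}(p_i)$ variables and a Poisson variable with the same mean by $2\sum_i p_i^2$. For the tail steps the per-step probabilities decay geometrically, so both the Poisson parameter and the error bound can be summed numerically. A sketch under these assumptions (function name my own):

```python
def le_cam_tail(alpha, K, k):
    """Poisson parameter and Le Cam error bound for coordinate k of
    the remainder sum over steps with ||s||_1 = K, K+1, ...

    Le Cam: sum_j |P(S = j) - Pois(lam)(j)| < 2 * sum_i p_i^2, where
    S is a sum of independent Bernoulli(p_i) variables and
    lam = sum_i p_i.  Valid for k in {1, 2, 3}; coordinate 0 has
    p_i -> 1, so no Poisson approximation applies there.
    """
    assert k in (1, 2, 3)
    lam, bound, i = 0.0, 0.0, K
    while True:
        x = alpha ** i / 2.0
        p = [(1 - x) ** 2, x - x * x, x - x * x, x * x][k]
        if p < 1e-18:   # geometric decay: truncation error is negligible
            break
        lam += p
        bound += 2 * p * p
        i += 1
    return lam, bound
```

Since $p_i^2 \ll p_i$ in the tail, the bound is tiny relative to the Poisson parameter, which is exactly why the split at $K$ helps.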

Lastly, one may consider a variant of the law of large numbers. Since the $J_i$ converge to $\langle 1,0,0,0\rangle$, it is easily seen that $$n^{-1}s_n=\frac{1}{n}\sum_{i=1}^n J_i \rightarrow \langle 1,0,0,0\rangle \hspace{1cm} \text{almost surely.}$$ Admittedly, this approximation is not very delicate, due to the strong normalization factor $n^{-1}$.
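The limit can be sanity-checked without any simulation, since $\mathbb E[s_n]$ is just the sum of the step-probability vectors; a quick sketch (function name my own):

```python
def mean_fraction(alpha, n):
    """E[s_n] / n, computed exactly from the step distribution.

    E[s_n] is the sum of the per-step probability vectors; the
    normalized mean tends to <1, 0, 0, 0> as n grows, matching
    the strong-law limit.
    """
    total = [0.0] * 4
    for i in range(n):
        x = alpha ** i / 2.0
        probs = [(1 - x) ** 2, x - x * x, x - x * x, x * x]
        for d in range(4):
            total[d] += probs[d]
    return [v / n for v in total]
```

The deviation of the first coordinate from $1$ is $O(1/n)$, because the non-first step probabilities are summable, which is the "not very delicate" point above: all the interesting structure sits in a bounded correction that $n^{-1}$ washes out.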

The approach via the law of large numbers can also be formulated in a "functional form", meaning $$X_n(t):=n^{-1}\sum_{i=1}^{\lfloor nt\rfloor} J_i \rightarrow t\cdot \langle 1,0,0,0\rangle,$$ though I doubt this "functional" formulation is any more or any less helpful than the ordinary LLN here.

I hope the last two approaches give a good starting point. Personally, I find the second approach via the Poisson approximation the most promising.