Random walk with Gaussian steps: variable variance


Context

Let $(x_t)_{t\in \mathbb{N}}\in X^\mathbb{N}$ (with $X$ some Euclidean space, which may be assumed to be $\mathbb{R}$ for a first analysis) be a dynamical system defined by:

  • $x_0 \in X$
  • $x_{t+1} \sim \mathcal{N}(x_t, V(x_t))$

where $V:X\to \mathbb{R}^+$ is a predefined nonnegative function, which can be assumed to be continuous (or more regular).
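For experimentation, the system is straightforward to simulate. Here is a minimal sketch (in Python, reading the second parameter of $\mathcal{N}$ as the variance, so the step's standard deviation is $\sqrt{V(x_t)}$; the choice $V(x)=x^2$ below is just an illustration, not part of the question):

```python
import numpy as np

def simulate(x0, V, steps, rng=None):
    """Simulate x_{t+1} ~ N(x_t, V(x_t)) on the real line (V = variance)."""
    rng = np.random.default_rng() if rng is None else rng
    xs = [float(x0)]
    for _ in range(steps):
        # Standard deviation of the step is sqrt(V) at the current point.
        xs.append(rng.normal(xs[-1], np.sqrt(V(xs[-1]))))
    return np.array(xs)

# Example with V vanishing at 0; trajectories tend to be pulled toward 0.
traj = simulate(5.0, lambda x: x * x, 200, np.random.default_rng(0))
```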

Question:

What properties do such systems have?

I am particularly interested in possible convergence results, even if more assumptions are needed.

Remarks:

Intuitively, the system would tend to "converge" to a zone where $V$ is (at least locally) smallest. In particular, in the case of fixed points (i.e. points where $V$ vanishes), I hope to get something like the classical convergence results for discrete-time Markov chains on finite state spaces.

Does anyone have an idea?


Answer:

If $V$ is continuous and positive, it is locally bounded below by a positive number. I think that if $V$ is bounded both below and above by positive constants, then nothing changes compared to constant $V$. I am not too familiar with how 'convergence' would be defined for continuous-state Markov chains, but I'll argue it for a discrete one.

Consider a random walk on $\mathbb{Z}$, where at each step you have probability $\frac{1}{2}$ of adding 1 and probability $\frac{1}{2}$ of subtracting 1. It is well known that this is a null-recurrent Markov chain: the probability of returning to your starting point is 1, but the expected return time is infinite. (I suppose 'recurrence' is close to what you are looking for.)
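Recurrence of the simple walk is easy to probe numerically. A quick Monte Carlo sketch (not part of the original argument): estimate the probability of revisiting the starting point within a fixed horizon, which tends to 1 as the horizon grows.

```python
import numpy as np

def returns_within(horizon, rng):
    """Does a +/-1 simple random walk started at 0 revisit 0 within `horizon` steps?"""
    path = np.cumsum(rng.choice((-1, 1), size=horizon))
    return bool((path == 0).any())

rng = np.random.default_rng(1)
trials = 1000
frac = sum(returns_within(1000, rng) for _ in range(trials)) / trials
# Known asymptotics: P(no return within 2n steps) ~ 1/sqrt(pi*n), so `frac`
# is close to 1 here but approaches 1 only slowly as the horizon grows --
# consistent with null recurrence (infinite expected return time).
```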

Now suppose we modify the random walk a bit. Instead of taking one step up or down with probability $\frac{1}{2}$, we take $N$ steps at once, each independently up or down with probability $\frac{1}{2}$. The probability of passing your starting point is still 1, and the expected number of steps before you do so is still infinite. However, as $N$ becomes large, the aggregated step approaches a normal distribution (with mean $0$ and variance $N$) by the central limit theorem.
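This aggregation is easy to verify numerically. A sketch: the sum of $N$ fair $\pm 1$ moves has mean $0$ and variance $N$, matching the approximating $\mathcal{N}(0, N)$.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400                    # number of +/-1 moves aggregated into one big step
samples = rng.choice((-1, 1), size=(10_000, N)).sum(axis=1)

# By the CLT the aggregated step is approximately N(0, N):
# sample mean should be near 0 and sample variance near N.
mean, var = samples.mean(), samples.var()
```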

Next, consider the case where $N$ is a function of your current position. Obviously, this is still null-recurrent, and this Markov chain should behave very similarly to the continuous version (unless $N$ becomes tiny). So I'd expect that you can show, with similar techniques in the continuous setting, that the probability of crossing your starting point is 1 and the expected number of steps is infinite, provided $V$ is bounded below by a positive number.

If we drop the upper-bound assumption, something interesting happens. Suppose $V(x)\geq |x|$ (here, as in the computation below, I treat $V(x)$ as the standard deviation of the step). Then the probability of crossing $0$ on any given step is at least the probability that a normal random variable exceeds one standard deviation (about 16%). Hence the number of steps to cross $0$ is stochastically dominated by a geometric distribution, and therefore its expectation is finite. Now $0$ is 'positively recurrent' (I am not sure whether this notion exists for continuous-state Markov chains).
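This bound can be checked by simulation. A sketch, taking $V(x)=|x|$ as the step's standard deviation: from $x>0$, each step crosses $0$ with probability exactly $P(Z\le -1)\approx 0.16$, so the crossing time is geometric with mean about $6.3$.

```python
import numpy as np

def steps_to_cross_zero(x0, rng, cap=10_000):
    """Steps until the walk with step std |x_t| changes sign (or hits 0)."""
    x = x0
    for t in range(1, cap + 1):
        x = rng.normal(x, abs(x))   # standard deviation |x_t| >= |x_t|
        if x * x0 <= 0:
            return t
    return cap  # not crossed within `cap` steps (vanishingly unlikely here)

rng = np.random.default_rng(3)
times = [steps_to_cross_zero(10.0, rng) for _ in range(2000)]
mean_time = sum(times) / len(times)
```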

Finally, if we also drop the assumption that $V$ is bounded below by a positive constant, we actually can converge (almost surely). Take $V(x)=|x|$. At any point $x_t$, the expected value of $\ln|x_{t+1}|-\ln|x_t|$ is always $\int_{-\infty}^{\infty}\frac{\ln|1+z|}{\sqrt{2\pi}}e^{-z^2/2}\,dz\approx -0.208$. So the logarithm of $|x_t|$ decreases by about $0.208$ in expectation at every step. Hence $x_t$ converges to $0$ almost surely. However, there does not seem to be a simple criterion for deciding convergence: it fails for $V(x)=2|x|$, for example.
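Both claims in this last step are easy to check numerically. A sketch, again reading $V(x)=|x|$ as the step's standard deviation, so that $x_{t+1}=x_t(1+Z)$ with $Z\sim\mathcal{N}(0,1)$ and the per-step drift of $\ln|x_t|$ is $E[\ln|1+Z|]$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo estimate of the drift E[ln|1+Z|] (the integral above, ~ -0.208).
z = rng.standard_normal(1_000_000)
drift = np.log(np.abs(1.0 + z)).mean()

# One trajectory with step std |x_t|: ln|x_t| drifts down by ~0.208 per step,
# so after 1000 steps |x| should be far below its starting value of 100.
x = 100.0
for _ in range(1000):
    x = rng.normal(x, abs(x))
```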