What is the Steady-State of this Continuous-Space Discrete-Time Markov Chain when $k\leq 1/2$?


So, I asked a question earlier about an interesting Markov chain that I found, and I learnt that when $k\leq\frac12$, the chain has a genuine stationary distribution on a measurable set.

But I was wondering, what exactly is this steady-state distribution - so my current way of thinking of it, is that I start with the uniform distribution over $(0,1)$ and then I keep running the following intuitive recipe:

  • At a given point $x$, take an infinitesimal slice from $x$ to $x+\delta x$ and transport a fraction $p$ of its mass to the interval from $k+(1-k)x$ to $k+(1-k)(x+\delta x)$, and the remaining fraction $1-p$ to the interval from $(1-k)x$ to $(1-k)(x+\delta x)$.
  • Repeat indefinitely

I don't really have the analysis tools to work out what this does in various scenarios. I'm particularly interested in what happens when $k$ is small, since my simulations produce something that looks like a Beta distribution. There appears to be a lot of complex behaviour hiding in this idea, and I would like to know more about it, if people can explain it or point to where it's been discussed before.
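For concreteness, the recipe above can be simulated directly. The sketch below (my own naming; it assumes the mass fraction moved to the upper branch is $p$) pushes a cloud of uniformly started particles through the map $x \mapsto k+(1-k)x$ with probability $p$, else $x \mapsto (1-k)x$:

```python
import random

def step(x, k, p, rng):
    # One transition: jump to k+(1-k)x with probability p,
    # otherwise contract toward 0 via (1-k)x.
    return k + (1 - k) * x if rng.random() < p else (1 - k) * x

def simulate(k, p, n_particles=10000, n_steps=200, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_particles)]  # uniform start on (0,1)
    for _ in range(n_steps):
        xs = [step(x, k, p, rng) for x in xs]
    return xs

xs = simulate(k=0.1, p=0.5)
# The stationary mean works out to p, so the sample mean should sit near 0.5;
# a histogram of xs is what looks beta-like for small k.
```

A histogram of `xs` for small $k$ is the beta-looking picture mentioned above.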

Thanks again for all your help!


There are 2 best solutions below

BEST ANSWER

So, I think a rigorous answer drops out of SmallDeviation's answer and John Dawkin's comment beneath it. A good way to define the Markov chain in question is via the recursion

$$X_{t+1}=(1-k)X_t+k\,\epsilon_{t+1},$$

which unrolls to

$$X_t=(1-k)^t\cdot X_0+k\sum_{j=0}^{t-1} (1-k)^{j}\epsilon_{t-j},$$

where the $\epsilon_j$ are i.i.d. Bernoulli variables with mean $p$ (the factor $k$ is what keeps the values in $(0,1)$). So, as $t\to \infty$, we have in distribution:

$$X_\infty = k\sum_{j=0}^\infty (1-k)^j\epsilon_j$$

When $\mathbb{E}[\epsilon_j]=1/2$, this general form is, up to an affine change of variables, a Bernoulli convolution with parameter $\lambda=1-k$, and it is a well-studied object.
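The series representation also gives a direct way to sample $X_\infty$ without iterating the chain at all: truncate the geometric series after enough terms (a sketch with my own helper name; the factor $k$ in front matches the recursion $X_{t+1}=(1-k)X_t+k\,\epsilon_{t+1}$):

```python
import random

def sample_limit(k, p, n_terms=500, rng=None):
    # One draw of X_inf = k * sum_j (1-k)^j * eps_j with eps_j ~ Bernoulli(p),
    # truncated at n_terms; the discarded tail is O((1-k)^n_terms).
    rng = rng or random.Random()
    return k * sum((1 - k) ** j for j in range(n_terms) if rng.random() < p)

# The full series has mean k * p * sum_j (1-k)^j = p.
samples = [sample_limit(0.3, 0.5, rng=random.Random(i)) for i in range(2000)]
```

This is usually much cheaper than running the chain to stationarity, since each draw is independent by construction.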

Theorem 1 (Shmerkin). The set $\{ k \in (0, 1/2) : X_\infty\text{ is singular}\}$ has Hausdorff dimension $0$.

Theorem 2 (Solomyak). The Bernoulli convolution $X_\infty$ is absolutely continuous with density in $L^2$ for almost every $k\in(0,1/2)$.

This turns out to be a rich subject of research in itself, so expecting the limit to be exactly a normal or a beta distribution is probably too optimistic, even though that is what it looks like numerically.


You can write the master equation of your Markov chain. For the pdf $f$, it is:
$$
f_{n+1} = T[f_n],\qquad T[f](x) = \frac{p}{1-k}\,f\!\left(1-\frac{1-x}{1-k}\right)1_{(k,1)}(x)+\frac{1-p}{1-k}\,f\!\left(\frac{x}{1-k}\right)1_{(0,1-k)}(x)
$$
Your stationary pdf is just a fixed point of $T$.

You can look at the asymptotics as $x\to0^+$ and $x\to1^-$. Assuming power laws:
$$
\begin{align} f(x)&\sim x^{\alpha-1} &&x\to 0^+ \\ f(x)&\sim (1-x)^{\beta-1} &&x\to 1^- \end{align}
$$
you get from the functional equation:
$$
\begin{align} \alpha &= \frac{\ln(1-p)}{\ln(1-k)} & \beta &= \frac{\ln p}{\ln(1-k)} \end{align}
$$
So instead of checking whether any beta distribution works, you only need to check the single candidate:
$$
f(x)\propto x^{\alpha-1}(1-x)^{\beta-1}
$$
You can check analytically and numerically that it is not a fixed point of $T$.
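The numerical check is easy to carry out. The sketch below (my own helper names) applies $T$ once to the beta-shaped candidate with the exponents $\alpha,\beta$ from the functional equation and looks at the ratio $T[f]/f$; since $T$ is linear, the normalisation cancels, so for a true fixed point the ratio would be identically $1$:

```python
import math

def T(f, k, p):
    # Transfer operator from the master equation above.
    def Tf(x):
        out = 0.0
        if k < x < 1:          # preimage under x = k + (1-k)*y
            out += p / (1 - k) * f((x - k) / (1 - k))
        if 0 < x < 1 - k:      # preimage under x = (1-k)*y
            out += (1 - p) / (1 - k) * f(x / (1 - k))
        return out
    return Tf

k, p = 0.3, 0.5
alpha = math.log(1 - p) / math.log(1 - k)
beta = math.log(p) / math.log(1 - k)
f = lambda x: x ** (alpha - 1) * (1 - x) ** (beta - 1)  # unnormalised candidate
Tf = T(f, k, p)
ratios = [Tf(x) / f(x) for x in (0.2, 0.4, 0.6)]
# The ratios deviate visibly from 1 in the interior, confirming that the
# beta-shaped candidate is not stationary (only its tails match).
```

The deviation of the ratios from $1$ in the interior, despite the matching power-law tails, is exactly the claim that no beta distribution is the stationary density.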

Hope this helps.