Deterministic sequence of probability measures


Suppose $\mu_0$ is a probability measure on the reals with finite first moment, $\int_{-\infty}^\infty |x| \, \mu_0(dx) < \infty$, so that $m:= \int_{-\infty}^\infty x \, \mu_0(dx) \in \mathbb R$ is well defined: if $X_0\sim\mu_0$ is a random variable with law $\mu_0$, it has expectation $\mathbb E[X_0]=m$. Now, given $\mu_n$ for $n\in\mathbb N_0$, we construct the probability measure $\mu_{n+1}$ deterministically as follows. Taking $(U_n, V_n, W_n)$ i.i.d., each distributed according to $\mu_n$, let $\mu_{n+1}$ be the law of $$ U_n + V_n-\ln(1+e^{V_n})+\ln(1+e^{W_n}). $$ In other words, $$ \mu_{n+1} := \mu_n \ast \mu_n \circ(x\mapsto x-\ln(1+e^x))^{-1} \ast\mu_n\circ (x\mapsto \ln(1+e^x))^{-1},\quad n\in\mathbb N_0, $$ where $\ast$ denotes the convolution of measures.

If $X_n\sim \mu_n$, then $\mathbb E[X_n]=2^n\, m$ (the two logarithmic terms have the same expectation, since $V_n\sim W_n$, and cancel), so I wonder: if $m>0$, is it (generally) true that $F_n(t):=\mu_n((-\infty,t])\to 0$ as $n\to\infty$ for every $t\in\mathbb R$? Also, in the case $m=0$, can anything be said about the asymptotic behavior of $\mu_n$ as $n\to\infty$? Clearly, if $\mu_0 =\delta_a$ is the point measure at $a\in\mathbb R$, then $\mu_n=\delta_{2^n a}$. Otherwise, the variance $\text{Var}(X_{n+1})$ of $X_{n+1}\sim\mu_{n+1}$ is strictly larger than the variance $\text{Var}(X_n)$ of $X_n\sim\mu_n$.
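For readers who want to experiment, here is a minimal Monte Carlo sketch of the recursion (my own addition, not from the original post). It approximates each $\mu_n$ by resampling with replacement from an empirical sample, which only approximates the exact deterministic map on measures, and it uses `np.logaddexp(0, x)` as a numerically stable $\ln(1+e^x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
softplus = lambda x: np.logaddexp(0.0, x)  # stable ln(1 + e^x)

def step(samples, rng):
    """Approximate one iteration mu_n -> mu_{n+1}: U, V, W are drawn
    independently (with replacement) from the current empirical sample."""
    u = rng.choice(samples, size=samples.size)
    v = rng.choice(samples, size=samples.size)
    w = rng.choice(samples, size=samples.size)
    return u + v - softplus(v) + softplus(w)

# Example: mu_0 = N(m, 1) with m = 0.5, so E[X_n] should be close to 2^n * m.
m = 0.5
x = rng.normal(loc=m, scale=1.0, size=200_000)
for n in range(1, 6):
    x = step(x, rng)
    print(n, round(x.mean(), 3), 2**n * m)
```

With $m=0.5$ the printed sample means track $2^n m$, illustrating the expectation doubling noted above.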

On BEST ANSWER

Let $A_n := \frac{1}{2^n}U_n$, where $U_n \sim \mu_n$.
Though the following result can be generalized, for simplicity I'll work under the condition that $\mu_0$ has a finite second moment, $\mu_0 \in L^2$.
Theorem 1
If $\mu_0 \in L^2$, we have $$ \mathcal{L}( 2^{cn}( A_n-m ) ) \longrightarrow \delta_{0}$$ for all $0<c<\frac{1}{2}$, where $\mathcal{L}(X)$ denotes the law of the random variable $X$. $\square$
Proof
We have: $$\begin{array}{lcl}\text{Var}(A_{n+1})&\underbrace{=}_{\text{def. of } \mu_{n+1}}& \dfrac{1}{4^{n+1}} \left[ \text{Var}(U_n)+\text{Var}\left\{ \ln( e^{W_n}+1 )-\ln( e^{-V_n}+1 ) \right\} \right]\\ &\le&\dfrac{1}{4^{n+1}} \left[ \text{Var}(U_n)+ \text{Var}\left\{ \ln( e^{W_n}+1 )-\ln( e^{-W_n}+1 ) \right\} \right]\\ &=& \dfrac{1}{4^{n+1}} \left[ \text{Var}(U_n)+\text{Var}(W_n) \right]= \frac{1}{2}\text{Var}(A_n), \end{array} $$ where the first line uses $V_n-\ln(1+e^{V_n})=-\ln(e^{-V_n}+1)$ and the independence of $U_n$ from $(V_n,W_n)$, the last equality uses the pointwise identity $\ln(e^x+1)-\ln(e^{-x}+1)=x$, and in the second line I have used a variant of Chebyshev's association inequality: $$ \text{Var}( f(X)-g(Y)) \le \text{Var}( f(X)-g(X)) $$ for independent random variables $X, Y$ with $X \sim Y$, $f$ increasing and $g$ decreasing. (Indeed, $\text{Var}(f(X)-g(X)) = \text{Var}(f(X)) + \text{Var}(g(X)) - 2\,\text{Cov}(f(X),g(X))$, and $\text{Cov}(f(X),g(X)) \le 0$ when $f$ is increasing and $g$ is decreasing.)
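As a numerical sanity check (my own addition, not part of the original answer), both the variance inequality and the identity $\ln(e^x+1)-\ln(e^{-x}+1)=x$ can be verified on, say, standard normal samples:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.logaddexp(0.0, x)   # ln(e^x + 1), increasing
g = lambda x: np.logaddexp(0.0, -x)  # ln(e^{-x} + 1), decreasing

x = rng.normal(size=100_000)
y = rng.normal(size=100_000)  # independent copy with the same law

# Identity used in the last equality: f(x) - g(x) = x pointwise.
assert np.allclose(f(x) - g(x), x)

# Var(f(X) - g(Y)) <= Var(f(X) - g(X)) = Var(X), f increasing, g decreasing.
print(np.var(f(x) - g(y)) <= np.var(f(x) - g(x)))  # True
```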

Thus $$\text{Var}(A_n) \le \dfrac{1}{2^n}\text{Var}(A_0),$$ so $\text{Var}\big(2^{cn}(A_n-m)\big) \le 2^{(2c-1)n}\,\text{Var}(A_0) \to 0$ for $c<\frac12$; since $\mathbb E[A_n]=m$, Chebyshev's inequality gives the conclusion. $\square$
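The bound $\text{Var}(A_n) \le 2^{-n}\,\text{Var}(A_0)$ can also be watched in simulation; the following is my own sketch, again approximating each $\mu_n$ by resampling from an empirical sample:

```python
import numpy as np

rng = np.random.default_rng(2)
softplus = lambda x: np.logaddexp(0.0, x)  # stable ln(1 + e^x)

x = rng.normal(size=200_000)  # mu_0 = N(0, 1), so Var(A_0) is about 1
var_a0 = x.var()
for n in range(1, 6):
    u = rng.choice(x, size=x.size)
    v = rng.choice(x, size=x.size)
    w = rng.choice(x, size=x.size)
    x = u + v - softplus(v) + softplus(w)
    var_an = x.var() / 4**n            # Var(A_n), with A_n = X_n / 2^n
    print(n, var_an, var_a0 / 2**n)    # observed value vs. the bound
```

In this run the observed $\text{Var}(A_n)$ stays below $2^{-n}\,\text{Var}(A_0)$, consistent with the proof above.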

Corollary 2
If $m>0$ and $\mu_0 \in L^2$, we have: $$F_{\mu_n}(t) \xrightarrow{ n \rightarrow +\infty} 0 $$ for all $t \in \mathbb R$. $\square$

Corollary 3
If $m=0$ and $\mu_0 \in L^2$, then: $$\mu_{n}\big( (2^{cn}a,\, 2^{cn}b) \big) \rightarrow 1 $$ for all $a<0<b$ and $c>1/2$. $\square$

Comment:

  • If the $L^2$ condition is relaxed, the same result can be obtained, but a more subtle treatment is needed; some optimal-transport inequalities might be involved.

  • It would be nice if the OP could share where this question came from.