Distribution continuity of an AR(1) process

Let $\epsilon_n$ be i.i.d. random variables with mean $0$ and finite positive variance.

Let $X=\sum_{k=0}^\infty \rho^k \epsilon_k$, where $0<|\rho|<1$. The series converges a.s. by the variance criterion (the summands are independent with mean $0$ and $\sum_k \rho^{2k}\operatorname{Var}(\epsilon_0)<\infty$).
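
As a quick numerical sanity check (taking standard normal innovations and $\rho=0.7$ purely for illustration), the truncated series is already an excellent proxy for its limit, since the neglected tail $\sum_{k\ge n}\rho^k\epsilon_k$ has standard deviation $\rho^{n}/\sqrt{1-\rho^2}$:

```python
import numpy as np

# Sketch only: approximate X = sum_{k>=0} rho^k * eps_k by truncating the series.
# Standard normal innovations and rho = 0.7 are arbitrary choices for illustration.
rng = np.random.default_rng(0)
rho, n_terms, n_samples = 0.7, 200, 100_000

eps = rng.standard_normal((n_samples, n_terms))   # eps_k: i.i.d., mean 0, variance 1
weights = rho ** np.arange(n_terms)               # rho^k
X = eps @ weights                                 # truncated series, one value per sample

# The neglected tail has standard deviation rho^n / sqrt(1 - rho^2), which is tiny here.
print("sample mean     :", X.mean())              # ~ 0
print("sample variance :", X.var())               # ~ 1 / (1 - rho^2) ~ 1.96
print("tail std bound  :", rho**n_terms / np.sqrt(1 - rho**2))
```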

My question is: can one prove or disprove the claim that $P(X=x)=0$ for every $x\in\mathbb{R}$ (i.e., that $X$ has a continuous distribution)?

Remark 1: The motivation comes from understanding the distributional properties of the stationary solution of the AR(1) equation $X_n=\rho X_{n-1}+\epsilon_n$, which has the same distribution as $X$ above.

Remark 2: If $\epsilon_n$ has a continuous distribution, then the claim can be shown as follows: note that $X$ has the same distribution as $\rho X+\epsilon$, where $\epsilon$ is independent of $X$ and has the same distribution as $\epsilon_n$. Then by independence (disintegration), $$ P(X=x)=P(\rho X+\epsilon=x)=\int P(\epsilon=x-\rho u)~ dP_X(u)=0, $$ where $P_X$ is the distribution of $X$.
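
A concrete instance of Remark 2 (added only as an illustration): if $\epsilon_n\sim N(0,\sigma^2)$, then $X$ is an a.s. limit of sums of independent Gaussians and is itself Gaussian, $$ X\sim N\Big(0,\ \frac{\sigma^2}{1-\rho^2}\Big), $$ so it has a density and in particular $P(X=x)=0$ for every $x$.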

Remark 3: An example where the claim holds even though $\epsilon_n$ is discrete: if $P(\epsilon_n=\pm 1)=1/2$ and $\rho=1/2$, then it is well known that $X$ is uniformly distributed on $[-2,2]$.
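
This particular case is easy to confirm by simulation; here is a minimal sketch (truncating at 60 terms, which leaves a negligible error):

```python
import numpy as np

# Sketch for Remark 3: eps_k = +/-1 with probability 1/2 each, rho = 1/2.
# The claim is that X = sum_{k>=0} (1/2)^k eps_k is uniform on [-2, 2].
rng = np.random.default_rng(1)
n_terms, n_samples = 60, 200_000                   # 2^-60 truncation error is negligible

eps = rng.choice([-1.0, 1.0], size=(n_samples, n_terms))
X = eps @ (0.5 ** np.arange(n_terms))

# Compare the empirical CDF with the uniform CDF F(x) = (x + 2) / 4 on [-2, 2].
grid = np.linspace(-2, 2, 9)
empirical = np.array([(X <= x).mean() for x in grid])
print(np.round(empirical, 3))                      # ~ 0, 0.125, 0.25, ..., 1
print(np.round((grid + 2) / 4, 3))
```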

Best answer

Your conjecture is correct. In fact, a more general result is true: the distribution is pure (either absolutely continuous or singular continuous w.r.t. Lebesgue measure). You can learn much more from the book Stochastic Models with Power-Law Tails: The Equation $X = AX + B$ by Buraczewski, Damek, and Mikosch; here is a modification of the proof of Proposition 2.5.2 therein, adapted to your situation.
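
To see that the singular continuous case genuinely occurs (a standard aside, not part of the proposition): take $P(\epsilon_n=0)=P(\epsilon_n=2)=1/2$ and $\rho=1/3$; then $X$ is, up to the scale factor $3$, the classical Cantor distribution, which is atomless yet singular w.r.t. Lebesgue measure. So in general "pure" cannot be upgraded to "absolutely continuous".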

Let $(\epsilon_k,k\ge 0)$ be i.i.d. with a non-degenerate distribution and $\mathrm E[\log (|\epsilon_0|+1)]<\infty$. Then for any $\rho\in(0,1)$ the series $X = \sum_{k=0}^\infty \rho^k \epsilon_k$ converges almost surely, and its distribution is atomless, i.e. for any $x\in\mathbb{R}$, $\mathrm P(X=x) = 0$.

I'll skip the convergence part, going directly to the atomlessness.

By way of contradiction, assume that $\mathrm P(X=x) > 0$ for some $x\in \mathbb{R}$. Since the masses of the atoms of $X$ sum to at most $1$, only finitely many atoms can have mass exceeding any fixed positive threshold; hence the maximal atom mass $p^* = \max_{x\in \mathbb{R}}\mathrm P(X=x)$ exists and is attained on a finite set, say $S$.

Observe that $$X = \rho X' + \epsilon_0,$$ where $X' \overset{d}{=} X$ is independent of $\epsilon_0$. It follows that the distribution of $\epsilon_0$ has an atom too, say at a point $a\in \mathbb R$ (otherwise the distribution of $X$ would be continuous, as explained in Remark 2). Also, for each $x\in S$, $$ p^* = \mathrm P(X=x) = \int_{\mathbb{R}}\mathrm P\big(X=\rho^{-1}(x-y)\big)\,dF_{\epsilon_0}(y)\overset{*}{\le} \int_{\mathbb{R}}p^* \,dF_{\epsilon_0}(y)=p^*, $$ so the inequality $*$ must be an equality, whence $\mathrm P\big(X=\rho^{-1}(x-y)\big)=p^*$ for almost all $y$ with respect to the distribution of $\epsilon_0$. In other words, for any $x\in S$, $\rho^{-1}(x-\epsilon_0)\in S$ almost surely; in particular, $\rho^{-1}(x-a)\in S$.

Denote $x_*=\min S$ and $x^* = \max S$. Both $\rho^{-1}(x_*-a)$ and $\rho^{-1}(x^*-a)$ lie in $S\subseteq[x_*,x^*]$, and their distance is $\rho^{-1}(x^*-a) - \rho^{-1}(x_*-a) = \rho^{-1}(x^*-x_*)$. This distance cannot exceed $x^*-x_*$, and since $\rho^{-1}>1$, we must have $x_*=x^*$, so $S = \{x_*\}$. But then $\rho^{-1}(x_* - \epsilon_0) = x_*$ almost surely, i.e. $\epsilon_0 = (1-\rho)x_*$ almost surely, contradicting the assumption of non-degeneracy.
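
As a sanity check on the key identity $X = \rho X' + \epsilon_0$ used above, here is a small simulation sketch (uniform innovations and $\rho=0.6$ are arbitrary choices for illustration):

```python
import numpy as np

# Sketch: check empirically that X and rho * X' + eps_0 have the same distribution,
# where X' is an independent copy of X. Uniform(-1, 1) innovations and rho = 0.6
# are arbitrary choices for illustration.
rng = np.random.default_rng(2)
rho, n_terms, n_samples = 0.6, 100, 200_000

def sample_X(n):
    """Draw n (truncated) realizations of X = sum_{k>=0} rho^k eps_k."""
    eps = rng.uniform(-1.0, 1.0, size=(n, n_terms))
    return eps @ (rho ** np.arange(n_terms))

X = sample_X(n_samples)
Y = rho * sample_X(n_samples) + rng.uniform(-1.0, 1.0, size=n_samples)

# The empirical quantiles of the two samples should agree up to sampling error.
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.round(np.quantile(X, qs), 3))
print(np.round(np.quantile(Y, qs), 3))
```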