Almost sure convergence of a compound sum of random variables


Let $(X_n)_{n \geq 1}$ be a sequence of independent random variables with the Bernoulli distribution $\mathcal{B}(p)$, $0 < p < 1$. For all $n \geq 1$, set $Y_n = X_n X_{n+1}$ and \begin{equation} S_n = \sum_{k=1}^{n} Y_k. \end{equation} I want to prove that $\frac{S_n}{n}$ converges almost surely to $p^2$ when $n \rightarrow \infty$.
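Before attempting a proof, a quick Monte Carlo sanity check (not a proof; the function name and parameters below are my own) is consistent with the claimed limit $p^2$:

```python
import random

def compound_average(n, p, seed=0):
    """Simulate S_n / n, where S_n = sum of Y_k = X_k * X_{k+1}
    and the X_i are i.i.d. Bernoulli(p).  Indexing note: x[0]
    plays the role of X_1, so Y_k corresponds to x[k-1] * x[k]."""
    rng = random.Random(seed)
    x = [1 if rng.random() < p else 0 for _ in range(n + 1)]
    s = sum(x[k] * x[k + 1] for k in range(n))  # S_n has n terms
    return s / n

print(compound_average(10**5, 0.5))  # empirically close to p^2 = 0.25
```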

The idea I had in mind was to use Kolmogorov's strong law of large numbers: if the $Y_n$ are independent and identically distributed, then $\mathbb{E}(|Y_1|) < \infty$ if and only if $\frac{S_n}{n} \rightarrow c$ almost surely for some constant $c$, in which case $c = \mathbb{E}(Y_1)$. One may observe that \begin{equation} \mathbb{E}(Y_1) = \mathbb{E}(X_1 X_2) = \mathbb{E}(X_1) \mathbb{E}(X_2) = p^2 \end{equation}
where I used the fact that $X_1$ and $X_2$ are independent. However, it doesn't seem possible to apply the theorem directly to $Y_n = X_n X_{n+1}$, as the $Y_n$ are not independent. If we separate the sum $S_n$ as follows: \begin{equation} S_k' = X_1 X_2 + X_3 X_4 + \ldots + X_{2k-1}X_{2k} \end{equation} when $n = 2k$ and \begin{equation} S_k'' = X_2 X_3 + X_4 X_5 + \ldots + X_{2k}X_{2k+1} \end{equation} when $n = 2k+1$, then the terms within each sum are independent (and both sums, divided by $k$, converge to $p^2$ when $k \rightarrow \infty$). The problem I have with this sketch is that the $Y_k$ don't seem to have a well-defined probability distribution. For example, one may check that \begin{equation} \mathbb{P}(X_1 X_2 = 0) = \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2 = 0) = 2(1-p) \end{equation} and \begin{equation} \mathbb{P}(X_1 X_2 = 1) = \mathbb{P}(X_1 = 1) \mathbb{P} (X_2 = 1) = p^2. \end{equation} As you may see, the sum of the two is strictly greater than $1$ (since $p^2 - 2p + 2 = (p-1)^2 + 1 > 1$). How do I rectify this problem, and do you have any other ideas on how to solve it?
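For what it's worth, the even/odd splitting sketched above can be checked numerically; here is a minimal sketch (helper name and indexing convention are mine):

```python
import random

def split_averages(k, p, seed=1):
    """Return (S'_k / k, S''_k / k), where S'_k sums the disjoint pairs
    X_1 X_2 + X_3 X_4 + ... + X_{2k-1} X_{2k} and S''_k sums
    X_2 X_3 + X_4 X_5 + ... + X_{2k} X_{2k+1}.  Within each sum the
    terms are i.i.d., so each average should tend to p^2."""
    rng = random.Random(seed)
    # x[i] plays the role of X_i for i >= 1; x[0] is unused padding.
    x = [1 if rng.random() < p else 0 for _ in range(2 * k + 2)]
    s1 = sum(x[2 * j - 1] * x[2 * j] for j in range(1, k + 1))
    s2 = sum(x[2 * j] * x[2 * j + 1] for j in range(1, k + 1))
    return s1 / k, s2 / k

print(split_averages(10**5, 0.5))  # both components empirically near 0.25
```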

Best answer:

$\mathbb{P}(X_1 X_2 = 0) = \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2=0)$

This is not correct. Note that

$$\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B)$$

holds only for disjoint sets $A$, $B$ (and $\{X_1=0\}$ and $\{X_2=0\}$ fail to be disjoint). Instead, we have to use the more general statement

$$\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B) \tag{1}$$

which holds for any two measurable sets $A$, $B$. By $(1)$ and the independence of $X_1$ and $X_2$,

$$\begin{align*} \mathbb{P}(X_1 X_2=0)& = \mathbb{P}(X_1=0) + \mathbb{P}(X_2=0) - \underbrace{\mathbb{P}(X_1=X_2=0)}_{\mathbb{P}(X_1=0) \mathbb{P}(X_2=0)} \\ &= 2(1-p)-(1-p)^2 = 1-p^2. \end{align*}$$
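With this correction the two probabilities sum to $1$: $Y_1 = X_1 X_2$ takes only the values $0$ and $1$, so $Y_1 \sim \mathcal{B}(p^2)$. A quick Monte Carlo check of $\mathbb{P}(X_1 X_2 = 0) = 1 - p^2$ (function name mine):

```python
import random

def prob_product_zero(p, trials=10**5, seed=2):
    """Monte Carlo estimate of P(X_1 * X_2 = 0) for independent
    X_1, X_2 ~ Bernoulli(p)."""
    rng = random.Random(seed)
    zeros = 0
    for _ in range(trials):
        x1 = 1 if rng.random() < p else 0
        x2 = 1 if rng.random() < p else 0
        if x1 * x2 == 0:
            zeros += 1
    return zeros / trials

print(prob_product_zero(0.3))  # empirically close to 1 - 0.3**2 = 0.91
```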