Prove $\frac{Z_1+Z_2+\cdots+Z_n}{\log(n+1)}\rightarrow 1$ a.s. when $Z_n\sim \mathrm{Poi}(\lambda_n)$


Let $Z_1,Z_2,\dots$ be independent random variables such that $Z_n \sim \mathrm{Poi}(\lambda _n)$ and $\lambda _n = \log (n+1)-\log n$ (log with base $2$). Let $T_n=Z_1+Z_2+\cdots+Z_n$.
Prove that $\frac{T_n}{\log(n+1)}\rightarrow 1$ almost surely.

hint: first prove convergence in probability.

This is a question from a test in probability theory. I haven't found the definition of convergence in probability on the net, so I hope I translated it correctly. This is the definition:

$X_n\overset{\mathbb{P}}{\rightarrow}X$ if for all $\epsilon>0 :$$\ \ \ \mathbb{P}(|X_n-X|>\epsilon)\rightarrow 0$

I know there is a solution using Kronecker’s lemma and the strong law under a second-moment condition.

But I want to know the solution that uses the hint.

Thank you,

There are 3 solutions below.

BEST ANSWER

As the hint says, first let us estimate $P(|\frac{T_{n}}{\log(n+1)}-1|>\epsilon)$ for all $\epsilon>0$ to show convergence in probability.

You have that $E(T_{n})=\sum_{k=1}^{n}\lambda_{k}=\log(n+1)$, since the sum telescopes.

So $$P(|\frac{T_{n}}{\log(n+1)}-1|>\epsilon)=P(|T_{n}-\log(n+1)|>\log(n+1)\epsilon)\leq \frac{Var(T_{n})}{\log^{2}(n+1)\epsilon^{2}}$$ by Chebyshev's Inequality.

Now, it is a known fact that the variance of a $\mathrm{Poi}(\lambda)$ variate is $\lambda$.

So $Var(T_{n})=\sum_{k=1}^{n}\big(\log(k+1)-\log(k)\big)=\log(n+1)$

So $$P(|\frac{T_{n}}{\log(n+1)}-1|>\epsilon)\leq \frac{1}{\epsilon^{2}\log(n+1)}\to 0$$ as $n\to\infty$. Thus we have shown convergence in probability.
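As a quick sanity check of this bound (not part of the proof): by additivity of independent Poisson variables, $T_{n}\sim \mathrm{Poi}(\log_{2}(n+1))$, so one can sample $T_{n}$ directly and average the ratio. A minimal sketch using only the standard library (the function names `poisson_sample` and `average_ratio` are mine):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplicative method; adequate for moderate lam
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def average_ratio(n, trials, seed=0):
    # By Poisson additivity, T_n ~ Poi(log2(n+1)); average T_n / log2(n+1)
    rng = random.Random(seed)
    lam = math.log2(n + 1)
    total = sum(poisson_sample(lam, rng) for _ in range(trials))
    return total / (trials * lam)
```

For $n=10^{6}$ and a few thousand trials the empirical average of $T_{n}/\log_{2}(n+1)$ sits close to $1$, and its spread shrinks like $1/\sqrt{\log_{2}(n+1)}$, consistent with the Chebyshev estimate above.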

Now, the trick to show almost sure convergence is to first observe that $\frac{T_{n}}{\log(n+1)}$ is a sequence of positive random variables and, moreover, $T_{n+1}\geq T_{n}$ almost surely. Hence, we proceed by a sandwiching argument.

So first look at the subsequence of indices $n_{k}=2^{k^{2}}$.

Then we have $$P(|\frac{T_{n_{k}}}{\log(n_{k}+1)}-1|>\epsilon)\leq \frac{1}{\epsilon^{2}\log(2^{k^{2}}+1)}$$ and, since the log is base $2$, $\log(2^{k^{2}}+1)\geq k^{2}$, so $\sum_{k}\frac{1}{\epsilon^{2}\log(2^{k^{2}}+1)}\leq \frac{1}{\epsilon^{2}}\sum_{k}\frac{1}{k^{2}}<\infty$.
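The summability can be checked term by term against $\sum 1/k^{2}$, since with log base $2$ each term is dominated by $1/k^{2}$. A quick numerical sketch of this comparison (the helper name `subsequence_term` is mine):

```python
import math

def subsequence_term(k):
    # k-th term of the Borel-Cantelli series, with the log taken base 2
    return 1.0 / math.log2(2 ** (k ** 2) + 1)

# each term is dominated by 1/k^2, whose series converges (to pi^2/6)
for k in range(1, 30):
    assert subsequence_term(k) <= 1.0 / k ** 2
```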

Thus by Borel-Cantelli Lemma, $\frac{T_{n_{k}}}{\log(n_{k}+1)}\to 1$ almost surely.

Now, let $m_{k}$ be an arbitrary sequence of natural numbers tending to $\infty$.

Then for each $k$, choose an index $k_{l}$ such that $2^{k_{l}^{2}}\leq m_{k}< 2^{(k_{l}+1)^{2}}$ (the intervals $[2^{j^{2}},2^{(j+1)^{2}})$ cover all sufficiently large integers).

Then, since $T_{n}$ is nondecreasing, you have $$\frac{T_{m_{k}}}{\log(m_{k}+1)}\leq \frac{T_{2^{(k_{l}+1)^{2}}}}{\log(2^{k_{l}^{2}}+1)}$$

Now factor this bound: $$\frac{T_{2^{(k_{l}+1)^{2}}}}{\log(2^{k_{l}^{2}}+1)}=\frac{T_{2^{(k_{l}+1)^{2}}}}{\log(2^{(k_{l}+1)^{2}}+1)}\cdot \frac{\log(2^{(k_{l}+1)^{2}}+1)}{\log(2^{k_{l}^{2}}+1)}$$

But we have that $\displaystyle\frac{T_{2^{(k_{l}+1)^{2}}}}{\log(2^{(k_{l}+1)^{2}}+1)}\xrightarrow{a.s.} 1$ along the subsequence, and $\displaystyle\frac{\log(2^{(k_{l}+1)^{2}}+1)}{\log(2^{k_{l}^{2}}+1)}\to 1$ because $\frac{(k_{l}+1)^{2}}{k_{l}^{2}}\to 1$; and this holds for an arbitrary sequence $m_{k}$.

Thus $\displaystyle\limsup_{n\to\infty} \frac{T_{n}}{\log(n+1)}\leq 1$ almost surely.

Now similarly for $m_{k}$, find indices $k_{l}$ such that $2^{k_{l}^{2}}\leq m_{k}< 2^{(k_{l}+1)^{2}}$ and bound $\frac{T_{m_{k}}}{\log(m_{k}+1)}$ from below by $\frac{T_{2^{k_{l}^{2}}}}{\log(2^{(k_{l}+1)^{2}}+1)}$ (I have not relabelled the sequences to avoid too much unnecessary notation). Repeat the procedure to show that

$\displaystyle\liminf_{n\to\infty} \frac{T_{n}}{\log(n+1)}\geq 1$ almost surely, by sandwiching from below.

This shows that $\displaystyle\dfrac{T_{n}}{\log(n+1)}\xrightarrow{a.s.}1$.

$\blacksquare$

Another approach is by extending Davide's idea. But this approach requires familiarity with martingale theory. Essentially, it exploits the theorems and the hard work that go into martingale theory to give a very short and cute proof.

Edit: Actually, this doesn't really work: $\frac{T_{n}-\log(n+1)}{\log(n+1)}$ is not a supermartingale as I claimed, since the last inequality below needs $T_{n-1}-\log n$ to be nonnegative, which it is not necessarily. But I wonder whether this method can be rescued, as it seems so close to being true. So I'll leave it here in case someone can find a way out.

Notice that $T_{n}-\log(n+1)$ is a discrete-parameter martingale, as $T_{n}-\log(n+1)=\sum_{k=1}^{n}\big(Z_{k}-E(Z_{k})\big)$, i.e. a sum of independent mean-$0$ random variables.

I claim that $\bigg\{\frac{T_{n}-\log(n+1)}{\log(n+1)}\bigg\}_{n\geq 2}$ is an $L^{2}$ bounded supermartingale.

That is, we already have that $Var(\frac{T_{n}-\log(n+1)}{\log(n+1)})=\frac{1}{\log(n+1)}\to 0$. So $\frac{T_{n}-\log(n+1)}{\log(n+1)}$ converges in $L^{2}$ to $0$.

Now if $\mathcal{F}_{n}=\sigma(T_{1},...,T_{n})$. Then corresponding to this filtration, we have

\begin{align} E(\frac{T_{n}-\log(n+1)}{\log(n+1)}|\mathcal{F}_{n-1}) &=\frac{1}{\log(n+1)}E(T_{n}-\log(n+1)|\mathcal{F}_{n-1})\\ &=\frac{T_{n-1}-\log(n)}{\log(n+1)}\leq \frac{T_{n-1}-\log(n)}{\log(n)}\end{align}

Thus, $\bigg\{\frac{T_{n}-\log(n+1)}{\log(n+1)}\bigg\}_{n\geq 2}$ is an $L^{2}$ bounded (and hence uniformly integrable) supermartingale and by Martingale Convergence theorem, it converges in $L^{2}$ and almost surely to an $L^{2}$ random variable $X_{\infty}$.

But, since we already have that $\displaystyle \frac{T_{n}-\log(n+1)}{\log(n+1)}\xrightarrow{L^{2}} 0$, we have that $X_{\infty}=0$ almost surely.

So $\displaystyle \frac{T_{n}-\log(n+1)}{\log(n+1)}\xrightarrow{a.s.} 0$

Hence $\frac{T_{n}}{\log(n+1)}-1\xrightarrow{a.s.} 0\implies \frac{T_{n}}{\log(n+1)}\xrightarrow{a.s.}1$

ANSWER

This problem is wrong as stated (the original version of the question asked for convergence to $0$). Indeed:

Write $Y_n = \frac{\sum_{i=1}^n Z_i}{\log(n+1)}$. Then

$$\mathbb{E}[Y_n] \ = \ \log^{-1}(n+1)\times\mathbb{E}\Big[\sum_{i=1}^n Z_i\Big]$$ $$= \ \log^{-1}(n+1)\times\sum_{i=1}^n \big(\log(i+1)-\log(i)\big) \ = \ \log^{-1}(n+1)\times \log(n+1) \ = 1.$$

Likewise, using the independence of the $Z_i$s it is easy to check that the variance of $Y_n$ is bounded. So there is no way that $Y_n$ can converge to $0$, even in probability.

ANSWER

I think that the limit should be one instead of zero. If $X$ and $Y$ are two independent random variables both having Poisson distributions with parameters $\mu_1$ and $\mu_2$, then $X+Y$ has a Poisson distribution with parameter $\mu_1+\mu_2$. This allows one to find the distribution of $T_n$: here $T_n\sim\mathrm{Poi}(\log(n+1))$, since the parameters telescope.

Then one can compute $\mathbb E\left[\left(T_n/\log(n+1)-1\right)^2\right]$ and show that it goes to $0$.
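Concretely, since $T_n\sim\mathrm{Poi}(\log(n+1))$, we get $\mathbb E\left[\left(T_n/\log(n+1)-1\right)^2\right]=\mathrm{Var}(T_n)/\log^{2}(n+1)=1/\log(n+1)$. A one-line sketch of this computation (the helper name `mean_square_error` is mine; log base $2$ as in the problem):

```python
import math

def mean_square_error(n):
    # E[(T_n/log2(n+1) - 1)^2] = Var(T_n)/log2(n+1)^2 = 1/log2(n+1),
    # since T_n ~ Poisson(log2(n+1)) and Var(Poi(lam)) = lam
    return 1.0 / math.log2(n + 1)
```

For instance `mean_square_error(1) == 1.0`, and the value decays to $0$ as $n\to\infty$, giving the $L^{2}$ convergence.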

In order to treat the almost sure convergence, it suffices to find an increasing sequence $(n_k)$ such that $$ \sum_{k\geqslant 0}\mathbb E\left[\max_{n_k\leqslant i\leqslant n_{k+1}-1}\left(\frac{T_i-\mathbb E[T_i]}{\log i}\right)^2\right]<\infty. $$ One can find an upper bound for the expectation by using $\log i\geqslant \log n_k$ and then applying a maximal inequality.