Dependent increments?

Let us assume that $X_1,\ldots, X_n$ are i.i.d. exponential random variables, each with rate $\lambda$. Define $M = \max\{X_1,\ldots, X_n\}$. If I'm interested in $\mathbb{E}M$, then I can decompose $M$ into (independent?) increments, namely $$ M = X_{(1)}^{(n)} + X_{(1)}^{(n-1)} + \cdots + X_{(1)}^{(1)}, $$ where $X_{(1)}^{(n)} = \min\{X_1,\ldots, X_n\}$, $X_{(1)}^{(n-1)}$ is the minimum of the remaining $n-1$ random variables, and so on. By the memoryless property, each $X_{(1)}^{(k)} \sim \mathcal{E}xp(k\lambda)$. Hence $$ \mathbb{E}M = \sum_{k=1}^n\frac{1}{k\lambda} $$ and $$ \mathbb{V}(M)=\sum_{k=1}^n\frac{1}{k^2\lambda^2}, $$ which means that as $n \to \infty$ the variance converges to $\pi^2/(6\lambda^2)$ but the expectation diverges. That seems like nonsense. What am I missing? Are these increments dependent, so that I have to account for a covariance that will "explode"? Thank you.
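As a sanity check on the two formulas above, here is a quick Monte Carlo sketch in Python (the rate $\lambda$, the sample size $n$, and the number of trials are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0        # assumed rate parameter lambda
n = 10           # assumed number of i.i.d. exponentials
trials = 200_000

# Sample M = max(X_1, ..., X_n) many times.
X = rng.exponential(scale=1.0 / lam, size=(trials, n))
M = X.max(axis=1)

# Closed-form expressions from the increment decomposition.
k = np.arange(1, n + 1)
mean_theory = np.sum(1.0 / (k * lam))
var_theory = np.sum(1.0 / (k * lam) ** 2)

print(M.mean(), mean_theory)  # empirical vs. theoretical mean
print(M.var(), var_theory)    # empirical vs. theoretical variance
```

The empirical mean and variance of $M$ match the two sums closely, so the formulas themselves are not where the problem lies.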


BEST ANSWER

You can think of the independent exponentials as wait times to the first click of $n$ independent Poisson processes. The maximum is the total time till the last one clicks.

Say the first $n-1$ have clicked. Then by memorylessness, the remaining time till the final click is exponential and independent of everything else. If the first $n-2$ have clicked, then the remaining time till the $(n-1)$-st one clicks is the minimum of two exponentials, independent of everything else. So continuing backward inductively, the time till the final click is a sum of $n$ independent variables, where the first is distributed like the minimum of $n$ exponentials, the second is distributed like the minimum of $n-1$ exponentials, and so on.
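This backward-induction picture can be checked numerically: the spacings between consecutive order statistics of $n$ i.i.d. exponentials should each be exponential, with the $k$-th spacing distributed like the minimum of $n-k+1$ exponentials, and the spacings should be independent of one another. A minimal sketch (Python; parameter values assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, trials = 1.0, 5, 200_000  # assumed parameters

# Sort each sample so columns are order statistics.
X = np.sort(rng.exponential(scale=1.0 / lam, size=(trials, n)), axis=1)

# Spacings between consecutive order statistics
# (the first spacing is the minimum itself).
spacings = np.diff(X, axis=1, prepend=0.0)

# Spacing k (0-indexed) should be Exp((n - k) * lam),
# i.e. have mean 1 / ((n - k) * lam).
for k in range(n):
    print(k, spacings[:, k].mean(), 1.0 / ((n - k) * lam))

# Near-zero correlation between spacings is consistent with independence.
print(np.corrcoef(spacings[:, 0], spacings[:, 1])[0, 1])
```

The empirical spacing means match $1/((n-k)\lambda)$ and the sample correlations are close to zero, in line with the memorylessness argument.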

EDIT

I realize I forgot to answer your whole question. Hopefully it's clear from the above that you can use independence in the way you wanted to and that your expressions are correct. There is no contradiction in the expectation diverging as $n\to\infty$ while the variance goes to a constant. Imagine you had a sequence $X_n$ of normal random variables where $X_n$ has mean $n$ and variance one. The mean diverges while the variance converges. There is absolutely no problem with this.
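To make this point concrete, here is a tiny numerical illustration (Python; the sample sizes are assumed): the sample means of $X_n \sim \mathcal{N}(n, 1)$ grow with $n$ while the sample variances stay near one.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000

# X_n ~ Normal(mean=n, variance=1): means grow, variances stay near 1.
for n in (1, 10, 100):
    x = rng.normal(loc=n, scale=1.0, size=trials)
    print(n, x.mean(), x.var())
```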

I think you're confusing it with a case where a single random variable has infinite expectation but finite variance which would indeed be impossible. But here each of the random variables $M_n = \max(X_1,\ldots, X_n)$ has a finite expectation and a finite variance. It's just that as the sequence goes on the means get bigger and bigger while the variances don't.