With $X_i \sim \text{Unif}(0,1)$ i.i.d., what is the limit of $\frac{n}{X_1^{-1} + \cdots + X_n^{-1}}$?


I am confused as to how I can tackle this question:

With $X_i \sim \text{Unif}(0,1)$ i.i.d., what is the limit of $\frac{n}{X_1^{-1} + \cdots + X_n^{-1}}$?

My guess is that the limit is $0$, but I would like to show that the sequence converges to it almost surely. I started with characteristic functions: $$\mathbb{E}\exp\left(it\,\frac{n}{X_1^{-1} + \cdots + X_n^{-1}}\right),$$ but I do not see how to expand this.

Then I thought: well, take $X = \{X(\omega_1), X(\omega_2), \cdots\}$ and $N = \{\#\{X(\omega_i) < 1\} < \infty ,\ \#\{X(\omega_i) = x\} = \infty ,\ \forall i \in \mathbb{N}\}$, where (I presume, but would need to prove) $\mathbb{P}(N) = 0$. So take any sequence in $N^c$, which has probability one of occurring; but then I am not quite sure how to continue.

One main difficulty I find is that $\mathbb{E}[X_i^{-1}] = \infty$, so I cannot apply the standard law of large numbers.

Thank you for the insight!


Best answer:

$\frac{n}{\frac{1}{X_1}+\ldots+\frac{1}{X_n}}$ is the harmonic mean of $X_1,\ldots,X_n$; assuming that each $X_k$ is uniformly distributed over $(0,1)$ and that $X_1,\ldots,X_n$ are independent, the PDF of $\frac{n}{\frac{1}{X_1}+\ldots+\frac{1}{X_n}}$ is supported on $(0,1)$ as well.

$$\mathbb{P}\left[\frac{n}{\frac{1}{X_1}+\ldots+\frac{1}{X_n}}\geq\frac{1}{M}\right]=\mathbb{P}\left[\frac{1}{n}\left(\frac{1}{X_1}+\ldots+\frac{1}{X_n}\right)\leq M\right], $$ and if $X_k$ is uniformly distributed over $(0,1)$, the PDF of $\frac{1}{X_k}$ is supported on $(1,+\infty)$ and given by $\frac{1}{x^2}$, hence the expected value of $\frac{1}{X_k}$ is infinite. It follows that $\frac{1}{n}\left(\frac{1}{X_1}+\ldots+\frac{1}{X_n}\right)\to +\infty$ almost surely, so for any fixed $M$ the RHS tends to zero as $n\to +\infty$; hence the limiting distribution is a Dirac $\delta$ at the origin, as conjectured.
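As a quick Monte Carlo sanity check (not part of the proof), one can sample harmonic means of $n$ uniforms and watch them concentrate near $0$; the decay is slow, since the average $\frac{1}{n}\sum_k 1/X_k$ grows roughly like $\log n$. A minimal sketch in Python, where the threshold $M = 10$ and the sample sizes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonic_mean_sample(n, reps, rng):
    """Draw `reps` independent harmonic means of n Unif(0,1) variables."""
    x = rng.uniform(size=(reps, n))
    return n / (1.0 / x).sum(axis=1)

# As n grows, P[HM_n >= 1/M] should shrink toward 0 for any fixed M.
M = 10.0
for n in (10, 100, 10_000):
    hm = harmonic_mean_sample(n, reps=2000, rng=rng)
    print(n, (hm >= 1 / M).mean())
```

The estimated probabilities drift downward only slowly with $n$, consistent with the logarithmic growth of the denominator.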


About a similar problem: if $X_1,\ldots,X_n$ are uniformly distributed over $(0,1)$ and independent, the PDF of their geometric mean is supported on $(0,1)$ and given$^{(*)}$ by $\frac{n^n}{(n-1)!}\left(-a\log a\right)^{n-1}$.
This is a unimodal distribution with mode $\frac{1}{e}$ (independent of $n$) and mean $\left(1-\frac{1}{n+1}\right)^{n}$.

$(*)$ This can be shown by computing the CDF through a change of variables and then differentiating, or through the following approach. Let $g(x)$ be the PDF of $\text{GM}(X_1,\ldots,X_n)$. By independence, $$ \int_{0}^{1} x^h g(x)\,dx = \mathbb{E}[X_1^{h/n}\cdot\ldots\cdot X_n^{h/n}] = \prod_{k=1}^{n}\mathbb{E}[X_k^{h/n}] = \frac{1}{(1+h/n)^n}, $$ hence $$ \mathcal{L}(g(x))(s) = \sum_{h\geq 0}\frac{(-1)^h s^h}{h!(1+h/n)^n} $$ and by inversion $$ g(x)=\left(\mathcal{L}^{-1}\sum_{h\geq 0}\frac{(-1)^h s^h}{h!(1+h/n)^n}\right)(x).$$
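The moment identity $\mathbb{E}[\text{GM}^h]=(1+h/n)^{-n}$ used above is easy to check numerically. A minimal Python sketch (the choices $n=5$, $200{,}000$ replications, and exponents $h=1,2,3$ are arbitrary) estimates the moments by Monte Carlo and compares them with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of the moment identity
#   E[GM(X_1,...,X_n)^h] = 1 / (1 + h/n)^n
# for the geometric mean of n independent Unif(0,1) variables.
n, reps = 5, 200_000
x = rng.uniform(size=(reps, n))
gm = np.exp(np.log(x).mean(axis=1))  # geometric mean, computed in log space

for h in (1, 2, 3):
    empirical = (gm ** h).mean()
    exact = (1 + h / n) ** (-n)
    print(h, round(empirical, 4), round(exact, 4))
```

Computing the geometric mean in log space avoids underflow that a direct product of many small factors would cause.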

Another answer:

I was referring to the following theorem, which you can find in Durrett's Probability: Theory and Examples (the chapter on laws of large numbers).

Theorem. Let $Y_1, Y_2,...$ be i.i.d. with $E(Y_i^+) =\infty$ and $E(Y_i^-) <\infty$. Let $S_n=Y_1+\cdots+Y_n$, then $S_n/n\to \infty$ a.s.

Define $Y_i = 1/X_i$, where the $X_i \sim \text{Unif}(0,1)$ are i.i.d. These $Y_i$ satisfy the hypotheses of the theorem: $Y_i \geq 1$, so $E(Y_i^-) = 0 < \infty$, while $E(Y_i^+) = \int_0^1 x^{-1}\,dx = \infty$. Hence
$$\frac{Y_1+\cdots+Y_n}{n}\to \infty \quad \text{a.s.}$$ Since you are interested in the reciprocal, we conclude that $$\frac{n}{Y_1+\cdots+Y_n}\to 0 \quad \text{a.s.}$$
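A single-path simulation illustrates both conclusions at once: along one sequence of uniforms, the running average of $Y_i = 1/X_i$ drifts upward (roughly like $\log n$, though occasional very small draws of $X_i$ make the path jumpy), while its reciprocal drifts toward $0$. A minimal sketch, with the path length $N$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# One sample path: running averages of Y_i = 1/X_i for a single
# sequence of Unif(0,1) draws, together with their reciprocals.
N = 10**6
y = 1.0 / rng.uniform(size=N)
running_mean = np.cumsum(y) / np.arange(1, N + 1)

for n in (10**2, 10**4, 10**6):
    print(n, running_mean[n - 1], 1 / running_mean[n - 1])
```

Because $\mathbb{E}[Y_i] = \infty$, the running mean never settles; it keeps growing, which is exactly what forces the reciprocal to $0$.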