Source of faulty reasoning in expectation of product of random variables?


For iid $\xi_i>0$, with $\mathbb E[\xi_i]=1$, what is $\mathbb E[\prod_i^M\xi_i]$?

Approach 1: By independence, $\mathbb E[\prod_i^M\xi_i]=\prod_i^M\mathbb E[\xi_i]=1$.

There is another approach for $M\gg1$ with presumably faulty reasoning that gives a different result:

Approach 2: $\mathbb E[\prod_i^M\xi_i]=\mathbb E[\exp(\sum_i^M\log(\xi_i))]\approx \mathbb E[\exp(M\mathbb E[\log(\xi_i)])]=\exp(M\mathbb E[\log(\xi_i)])$, replacing $\sum_i^M\log(\xi_i)$ by $M\mathbb E[\log(\xi_i)]$ as in the law of large numbers. Take any of the many distributions of the $\xi_i$ for which the expectation and the logarithm do not commute, so that $\mathbb E[\log(\xi_i)]$ is some finite number other than $0$ (by Jensen's inequality, $\mathbb E[\log(\xi_i)]<\log\mathbb E[\xi_i]=0$ unless $\xi_i$ is a.s. constant). Then $\mathbb E[\prod_i^M\xi_i]\neq 1$.

Where is the faulty reasoning? (n.b. It could be something pretty basic. I am no expert.)

Additional notes: Regarding Approach 2, central-limit-theorem arguments imply that $X_M=\prod_i^M\xi_i$ is approximately log-normally distributed for $M\gg1$, so that $\mathbb E[X_M]=\exp(\mu +\sigma^2/2)$, where $\mu$ and $\sigma^2$ are the mean and variance of $\log X_M=\sum_i^M \log\xi_i$. Since the variance of the normal distribution in log-space grows with $M$, so does its contribution to $\mathbb E[X_M]$. Again, this line of reasoning does not seem to give $1$.
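The tension between the two approaches is easy to probe numerically. The sketch below is a hypothetical illustration: it assumes log-normal $\xi_i$ with $\mathbb E[\xi_i]=1$ (any non-constant positive distribution with unit mean behaves similarly) and compares a direct Monte Carlo estimate of $\mathbb E[\prod_i^M\xi_i]$ with Approach 2's prediction $\exp(M\,\mathbb E[\log\xi_i])$:

```python
import math
import random

random.seed(0)

# Assumed example distribution: log-normal xi with E[xi] = 1, i.e.
# log(xi) ~ Normal(-s**2/2, s**2).  Any non-constant positive xi with
# unit mean would illustrate the same point.
s = 0.3
mu_log = -s**2 / 2            # E[log xi] = -s^2/2 < 0 (Jensen's inequality)
M = 20
n_samples = 100_000

# Monte Carlo estimate of E[prod_i xi_i]; Approach 1 says it is exactly 1.
total = 0.0
for _ in range(n_samples):
    log_prod = sum(random.gauss(mu_log, s) for _ in range(M))
    total += math.exp(log_prod)
mean_product = total / n_samples

# Approach 2's prediction, exp(M * E[log xi]), which shrinks as M grows.
approach2 = math.exp(M * mu_log)

print(f"Monte Carlo E[prod xi] ~ {mean_product:.3f}")   # stays near 1
print(f"exp(M E[log xi])       = {approach2:.3f}")      # exp(-0.9) ~ 0.41
```

The sample mean hovers near $1$, as Approach 1 predicts, while Approach 2's formula is already well below $1$ at $M=20$ and decays geometrically in $M$.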

Thanks!

Edit: Angela answered that this is a convergence issue. Let's look at a convergent case then: $X_M=(\prod_i^M\xi_i)^\frac{1}{M}$.

Now, it is not clear that Approach 1 can be carried out.

Approach 2 now seems to go through: $\mathbb E[(\prod_i^M\xi_i)^\frac{1}{M}]=\mathbb E[\exp(\frac{1}{M}\sum_i^M\log(\xi_i))]\approx \mathbb E[\exp(\mathbb E[\log(\xi_i)])]=\exp(\mathbb E[\log(\xi_i)])$.

As an example, if $\xi_i$ is gamma-distributed with shape parameter $k$ and scale parameter $1/k$ (so that $\mathbb E[\xi_i]=1$, as assumed above), then $\mathbb E[\log(\xi_i)]=\psi(k)-\log k$, where $\psi$ is the digamma function.
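This normalized case can be checked numerically. The sketch below is an illustration under stated assumptions: it draws $\xi_i$ from a gamma distribution with shape $k$ and scale $1/k$ (so $\mathbb E[\xi_i]=1$) and, to stay in the standard library, approximates the digamma function by a central difference of `math.lgamma` instead of using SciPy:

```python
import math
import random

random.seed(1)

# Gamma example: shape k, scale 1/k, so that E[xi] = 1 as the question assumes.
# For Gamma(shape=k, scale=theta): E[log xi] = psi(k) + log(theta),
# hence here E[log xi] = psi(k) - log(k), with psi the digamma function.
k = 2.0

def digamma(x, h=1e-5):
    """Digamma via central difference of log-gamma (accurate enough here)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

expected = math.exp(digamma(k) - math.log(k))   # exp(E[log xi])

# Monte Carlo estimate of E[(prod_i xi_i)^(1/M)] for large M.
M = 400
n_samples = 2_000
total = 0.0
for _ in range(n_samples):
    mean_log = sum(math.log(random.gammavariate(k, 1 / k))
                   for _ in range(M)) / M
    total += math.exp(mean_log)
geo_mean = total / n_samples

print(f"exp(E[log xi])           = {expected:.4f}")
print(f"E[(prod xi)^(1/M)]  (MC) ~ {geo_mean:.4f}")
```

The two printed values agree to a few decimal places, consistent with Approach 2 going through for the normalized quantity (the residual gap is an $O(1/M)$ correction from the fluctuations of $\frac{1}{M}\sum_i\log\xi_i$).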


There is 1 answer below.


The problem is with the step $\mathbb{E}[\exp \sum_{i=1}^M \log \xi_i]\approx \mathbb{E}[\exp M\mathbb{E}[\log\xi_i]]$.

As $M\rightarrow \infty$, the left- and right-hand sides of the approximate equality do not converge to each other, except in the special case where the expectation and the logarithm commute (i.e., $\xi_i$ is a.s. constant). Indeed, the left-hand side equals $1$ for every $M$ by independence, while Jensen's inequality gives $\mathbb{E}[\log\xi_i]<\log\mathbb{E}[\xi_i]=0$ for non-constant $\xi_i$, so the right-hand side $\exp(M\mathbb{E}[\log\xi_i])$ tends to $0$. The product is typically tiny, but rare, very large realizations keep its expectation at $1$.
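This gap between the typical value and the expectation can be made concrete. The sketch below uses a hypothetical two-point distribution ($\xi_i$ equal to $1/2$ or $3/2$ with equal probability, so $\mathbb E[\xi_i]=1$ exactly) and shows the median product collapsing toward $\exp(M\,\mathbb E[\log\xi_i])\to 0$ while the exact expectation remains $1$:

```python
import math
import random
import statistics

random.seed(2)

# Concrete non-constant xi with E[xi] = 1 exactly: xi is 1/2 or 3/2 with
# probability 1/2 each.  Then E[log xi] = (log(1/2) + log(3/2)) / 2 < 0.
vals = (0.5, 1.5)
e_log = (math.log(0.5) + math.log(1.5)) / 2

for M in (10, 50, 200):
    products = [math.prod(random.choice(vals) for _ in range(M))
                for _ in range(10_000)]
    # The *typical* product (median) tracks exp(M * E[log xi]) -> 0,
    # while the exact expectation stays at 1, carried by rare huge values.
    print(f"M={M:3d}: median ~ {statistics.median(products):.2e}, "
          f"exp(M E[log xi]) = {math.exp(M * e_log):.2e}, exact mean = 1")
```

Almost every realized product is vanishingly small, yet the distribution's heavy right tail holds the mean at exactly $1$; this is why the law-of-large-numbers substitution inside the expectation fails.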