Distribution of sum of product of normal random variables.


The distribution of a product $Z=XY$ of two independent normally distributed random variables is given by the normal product distribution https://mathworld.wolfram.com/NormalProductDistribution.html.

What is the distribution of $Q=\sum_{i=1}^n Z_i$, where $Z_i = X_i Y_i$ and the $X_i, Y_i \sim \mathcal N(0,\sigma_i)$ are all mutually independent? (Here I take $\sigma_i$ to be the variance of $X_i$ and $Y_i$; that is the convention under which the characteristic function below holds.)

A nice answer for the case $\sigma_i=1$ (that is, all the $X_i,Y_i$ are i.i.d. standard normal) was given by @wolfies in: Distribution of sum of product-normal distributions. The distribution can be expressed in terms of the modified Bessel function of the second kind, and as $n\to\infty$ it approaches a normal distribution.

But I am interested in the case where the $\sigma_i$ are different. If no closed-form solution can be found, I am curious about when $p(Q)$ will approach a normal distribution. Is there an "effective" $n$ in terms of the $\sigma_i$? My first guess was the participation ratio $n_\text{eff} = (\sum_{i=1}^n \sigma_i^2)^2/\sum_{i=1}^n \sigma_i^4$.
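One way to motivate that guess (my own back-of-envelope calculation, using the characteristic function below): the cumulants of each $Z_i$ are $\kappa_2 = \sigma_i^2$ and $\kappa_4 = 6\sigma_i^4$, so the excess kurtosis of $Q$ is $6\sum_i\sigma_i^4/(\sum_i\sigma_i^2)^2 = 6/n_\text{eff}$, which vanishes precisely when $n_\text{eff}$ is large. A small sketch computing $n_\text{eff}$ (the function name is my own):

```python
import numpy as np

def n_eff(sigmas):
    """Participation ratio (sum sigma_i^2)^2 / sum sigma_i^4."""
    s = np.asarray(sigmas, dtype=float)
    return (s**2).sum()**2 / (s**4).sum()

print(n_eff([1.0, 1.0, 1.0, 1.0]))   # equal scales: n_eff = n = 4
print(n_eff([1.0, 0.1, 0.1, 0.1]))   # one dominant scale: n_eff ~ 1.06
```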

Here is how I've approached it so far. The characteristic function of each $Z_i$ is given, I believe, by

$$\varphi_{Z_i}(t)=\frac{1}{\sqrt{t^2\sigma_i^2 + 1}}$$
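(One can sanity-check this characteristic function by Monte Carlo; a quick sketch, under the convention that $\sigma$ is the common *variance* of $X$ and $Y$:)

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                                   # Var(X) = Var(Y) = sigma
x = rng.normal(0.0, np.sqrt(sigma), 1_000_000)
y = rng.normal(0.0, np.sqrt(sigma), 1_000_000)
z = x * y

def phi_empirical(t):
    # E[e^{itZ}] is real because Z is symmetric about 0
    return np.mean(np.cos(t * z))

def phi_exact(t):
    return 1.0 / np.sqrt(1.0 + sigma**2 * t**2)

for t in (0.3, 1.0, 2.5):
    print(t, phi_empirical(t), phi_exact(t))   # should agree to ~1e-3
```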

So by independence the characteristic function of $Q$ is given by

$$ \varphi_Q(t) = \prod_{i=1}^n\varphi_{Z_i}(t) = \prod_{i=1}^n \frac{1}{\sqrt{t^2\sigma_i^2 + 1}}$$

And $p(Q)$ should be given by the inverse Fourier transform of $\varphi_Q(t)$. I am stuck trying to perform the inverse Fourier transform, and any help would be greatly appreciated!
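In the meantime, the inversion is straightforward numerically (a sketch of my own; `p_Q` and the grid parameters are my choices, not from any reference). As a check, for $n=2$ with $\sigma_1=\sigma_2=1$ we have $\varphi_Q(t)=1/(1+t^2)$, whose inverse Fourier transform is the Laplace density $e^{-|q|}/2$:

```python
import numpy as np

def p_Q(q, sigmas, t_max=400.0, n_t=400_001):
    """Numerically invert phi_Q(t) = prod_i (1 + sigma_i^2 t^2)^(-1/2).

    By symmetry of phi_Q, p(q) = (1/pi) * integral_0^inf phi_Q(t) cos(t q) dt.
    Plain trapezoidal rule on [0, t_max]; adequate for n >= 2, where phi_Q
    decays at least like 1/t^2 (for n = 1 the density diverges at q = 0).
    """
    t = np.linspace(0.0, t_max, n_t)
    phi = np.ones_like(t)
    for s in sigmas:
        phi /= np.sqrt(1.0 + (s * t) ** 2)
    integrand = phi * np.cos(t * q)
    dt = t[1] - t[0]
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) / np.pi

# Check against the exact Laplace density for n = 2, sigma_1 = sigma_2 = 1:
for q in (0.0, 0.5, 2.0):
    print(q, p_Q(q, [1.0, 1.0]), np.exp(-abs(q)) / 2)
```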


Edit:

@Henry gave a very nice answer regarding the asymptotic behavior of $p(Q)$ as $n\to\infty$, but I am still curious about the behavior of $p(Q)$ for small $n$. Can $p(Q)$ be computed exactly? If not, how large must $n$ be, as a function of the $\{\sigma_i\}$, before $p(Q)$ is approximately normal?

Answer:

If $X_i$ and $Y_i$ are independent with zero mean and common variance $\sigma_i$, then the variance of $X_iY_i$ is $\sigma_i^2$ (or $\sigma_{X_i}\sigma_{Y_i}$ if the two variances differ), so $\sum\limits_{i=1}^n X_iY_i$ has zero mean and variance $s_n^2=\sum\limits_{i=1}^n \sigma_i^2$.

The Central Limit Theorem applies in the sense that $\frac1{s_n} \sum\limits_{i=1}^n X_iY_i$ converges in distribution to $\mathcal N(0,1)$ as $n$ increases, provided that the $\sigma_i^2$ do not become too extreme: for example, if they are bounded above and below by positive finite numbers, or if the Lyapunov or Lindeberg conditions are met.
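A quick Monte Carlo sketch of this convergence (my own; the $\sigma_i$ are an arbitrary bounded set, and $\sigma_i$ is taken to be the common variance of $X_i$ and $Y_i$, as in the question). The normalized sum has mean $\approx 0$ and standard deviation $\approx 1$, and its excess kurtosis shrinks toward $0$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Q_normalized(sigmas, n_samples=50_000):
    """Samples of (1/s_n) * sum_i X_i Y_i, with Var(X_i) = Var(Y_i) = sigma_i."""
    sigmas = np.asarray(sigmas, dtype=float)
    std = np.sqrt(sigmas)                        # std dev of each X_i and Y_i
    x = rng.normal(size=(n_samples, len(sigmas))) * std
    y = rng.normal(size=(n_samples, len(sigmas))) * std
    s_n = np.sqrt((sigmas ** 2).sum())           # since Var(X_i Y_i) = sigma_i^2
    return (x * y).sum(axis=1) / s_n

excess_kurtosis = {}
for n in (2, 10, 100):
    sigmas = 1.0 + 0.5 * (np.arange(n) % 3)     # bounded, heterogeneous scales
    q = sample_Q_normalized(sigmas)
    excess_kurtosis[n] = np.mean(q ** 4) / np.var(q) ** 2 - 3.0
    print(n, q.mean(), q.std(), excess_kurtosis[n])
```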