Calculating the expected value and variance of a sum of two different distributions


I have the following expression:

$$ Z = G - N + E $$

where

$$ G \sim \mathrm{Gamma}\!\left(k,\frac{\sigma^2}{k}\right) \\ N \sim \mathcal{N}\!\left(0,\frac{\alpha\sigma^2}{k^{2}}\right) \\ E \in \mathbb{R}_{+} $$

This expression arises when the usual estimator of the variance of an IID normal sample is applied in the presence of a signal. More specifically,

$$ Z = \dfrac{\sigma^2}{2k}\sum_{n=0}^{k-1}U_{n}^{2} - \dfrac{\sqrt{\alpha\sigma^2}}{k}\sum_{n=0}^{k-1}U_{n}s[n] +\dfrac{\alpha}{2k} $$

with $U \sim \mathcal{N}(0,1)$ and $\mathbf{s}^{T}\mathbf{s} = 1$.

I know that expectation is a linear operator (i.e. $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y]$). Variance is not linear, but it still lets the expression be decomposed to some extent, via $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y)$.
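As a sanity check on the linearity of expectation here, the representation of $Z$ above can be simulated directly. This is a minimal Monte Carlo sketch with hypothetical parameter values ($k=8$, $\sigma^2=2$, $\alpha=0.5$) and an arbitrary unit-norm signal $\mathbf{s}$, none of which come from the original problem:

```python
import numpy as np

rng = np.random.default_rng(0)
k, sigma2, alpha = 8, 2.0, 0.5   # hypothetical parameter values
s = np.ones(k) / np.sqrt(k)      # any signal with s^T s = 1

trials = 200_000
U = rng.standard_normal((trials, k))

G = sigma2 / (2 * k) * (U**2).sum(axis=1)      # first sum in Z
N = np.sqrt(alpha * sigma2) / k * (U @ s)      # second (signal) sum
E = alpha / (2 * k)                            # deterministic offset
Z = G - N + E

# Linearity: E[Z] = E[G] - E[N] + E = sigma^2/2 - 0 + alpha/(2k)
print(Z.mean(), sigma2 / 2 + alpha / (2 * k))
```

With these values the empirical mean lands close to $\sigma^2/2 + \alpha/(2k)$, illustrating that linearity of expectation holds regardless of whether the summands share a distribution.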

Do these identities still apply, however, when the summands follow different distributions? From some searching, I have seen that under an independence assumption one instead derives the joint PDF. I'm not sure whether the two distributions here are independent.

BEST ANSWER

What needs to be calculated is $E(GN)$. From the model, this reduces to evaluating terms of the form $c_{ij}E(U_i^2U_j)$, each of which is zero because all odd moments of the standard normal distribution vanish. Hence

$$ \operatorname{Var}(G+N) = E\big((G+N)^2\big) - \big(E(G+N)\big)^2 = E(G^2) + E(N^2) - \big(E(G+N)\big)^2 $$
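The claim that the cross term vanishes can be checked numerically: simulate $G$ and $N$ from the same draws of $U_n$ (so they are dependent but uncorrelated) and compare $\operatorname{Var}(G-N)$ with $\operatorname{Var}(G)+\operatorname{Var}(N)$. The parameter values and signal below are hypothetical, not from the original problem:

```python
import numpy as np

rng = np.random.default_rng(1)
k, sigma2, alpha = 8, 2.0, 0.5   # hypothetical parameter values
s = np.ones(k) / np.sqrt(k)      # any signal with s^T s = 1

trials = 500_000
U = rng.standard_normal((trials, k))
G = sigma2 / (2 * k) * (U**2).sum(axis=1)
N = np.sqrt(alpha * sigma2) / k * (U @ s)

# E[G N] expands into terms c_ij E[U_i^2 U_j], each zero by symmetry
# of the standard normal, so Cov(G, N) = 0 and the variances add.
cov_GN = np.cov(G, N)[0, 1]
print(cov_GN)  # close to 0
print(np.var(G - N), np.var(G) + np.var(N))
```

Note that zero covariance is all the decomposition needs; $G$ and $N$ are not independent (both are built from the same $U_n$), which is why the cross-moment argument, rather than an independence assumption, does the work here.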