Estimating component variance for a sum of random variables


Say I have two zero-mean, univariate, independent random variables $X$ and $Y$, and a third variable $Z = X + Y$. I can draw samples $z_i$, $i = 1, \dots, n$, from $Z$, and I know $Var(Y)$. How can I estimate $Var(X)$ from the measurements $z_i$?

A trivial estimator of $\Sigma = Var(X)$ would be $\hat{\Sigma} = \widehat{Var}(z) - Var(Y)$, where $\widehat{Var}(z)$ is the sample variance of the $z_i$. The problem is that $\hat{\Sigma}$ can come out negative for a given sample. Is there an estimator (possibly biased) that always yields a valid, i.e. non-negative, variance?

I have a related question on the stats SE site.

Best answer

For independent random variables, the variance of a sum is the sum of the variances: $Var(Z) = Var(X) + Var(Y)$. (This additivity is exactly why the variance, rather than the standard deviation, is the convenient quantity here.)

So take the samples $z_i$, compute their sample variance, and subtract $Var(Y)$ to get your estimate of $Var(X)$.
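A minimal sketch of this moment-matching estimate in Python, assuming hypothetical distributions for $X$ and $Y$ (the true variances here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X and Y zero-mean, independent, with Var(X) = 2 unknown
# to the estimator and Var(Y) = 1 known.
var_x_true = 2.0
var_y = 1.0
n = 10_000

# We only observe samples of Z = X + Y.
z = rng.normal(0.0, np.sqrt(var_x_true), n) + rng.normal(0.0, np.sqrt(var_y), n)

# Estimate Var(X) as (sample variance of z) - Var(Y).
var_x_hat = np.var(z, ddof=1) - var_y
print(var_x_hat)  # close to var_x_true for large n
```

With `n` this large the estimate lands near the true value; for small `n` the subtraction can dip below zero, which is the questioner's concern.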

As for your concern: $\hat \Sigma$ can only be negative if the sample variance of the $z_i$ falls below $Var(Y)$. At the population level this cannot happen, since independence gives $Var(Z) = Var(X) + Var(Y) \ge Var(Y)$. But the *sample* variance is itself a random quantity and can fall below $Var(Y)$ by chance, especially for small $n$. If that happens, keep sampling until you get a valid variance.
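If collecting more samples is not an option, one common (biased) fallback, which the question's "possibly biased" wording allows, is to truncate the estimate at zero. A sketch, with made-up numbers:

```python
import numpy as np

def truncated_var_x(z, var_y):
    """Estimate Var(X) from samples of Z = X + Y with known Var(Y),
    clipped at zero so the result is always a valid (non-negative)
    variance. The clipping makes the estimator biased upward near zero."""
    return max(0.0, np.var(z, ddof=1) - var_y)

# Tiny illustrative sample where the raw subtraction would go negative:
z_small = np.array([0.1, -0.1, 0.05, -0.05, 0.0])
print(truncated_var_x(z_small, var_y=1.0))  # 0.0, not a negative variance
```

The trade-off is bias for validity: whenever the raw estimate would be negative, the truncated estimator reports zero instead.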