Say I have two independent half-normal distributions (both derived from a normal with $\mu=0$ and standard deviation $\sigma$), one supported only on positive values and one only on negatives, so their pdfs look like:
$p(x, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi}\,\sigma} \exp\!\left(-\frac{x^2}{2\sigma^2}\right), \forall x>0$ and
$p(y, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi}\,\sigma} \exp\!\left(-\frac{y^2}{2\sigma^2}\right), \forall y<0$.
If I draw a sample from each and take the average $\frac{x+y}{2}$, I would expect the mean of this average to be zero. I would also expect its variance to be less than the variance of either individual distribution, because averaging a positive and a negative number would "squeeze" the final distribution.
I think the correct way to calculate it is with the following integral:
$\operatorname{Var}\!\left(\frac{x+y}{2}\right) = \frac{2}{\pi \sigma^2} \int^{\infty}_{0} \int^{0}_{-\infty} \frac{(x + y)^2}{4}\exp\!\left(-\frac{x^2}{2\sigma^2}\right) \exp\!\left(-\frac{y^2}{2\sigma^2}\right) dy\, dx$
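As a sanity check on the setup, the double integral can be evaluated numerically and compared against a direct simulation. This is just a sketch with an assumed value $\sigma = 1$; the sample size and seed are arbitrary:

```python
import numpy as np
from scipy import integrate

sigma = 1.0  # assumed value for the check

# Integrand without the constant prefactor 2 / (pi * sigma^2).
# dblquad expects func(y, x): x is the outer variable, y the inner one.
def integrand(y, x):
    return ((x + y) ** 2 / 4) * np.exp(-x**2 / (2 * sigma**2)) \
                              * np.exp(-y**2 / (2 * sigma**2))

# x runs over (0, inf), y over (-inf, 0)
val, err = integrate.dblquad(integrand, 0, np.inf, -np.inf, 0)
var_integral = (2 / (np.pi * sigma**2)) * val

# Monte Carlo cross-check: |N(0, sigma)| gives the positive half,
# -|N(0, sigma)| the negative half
rng = np.random.default_rng(0)
n = 1_000_000
x = np.abs(rng.normal(0, sigma, n))
y = -np.abs(rng.normal(0, sigma, n))
var_mc = np.var((x + y) / 2)

print(var_integral, var_mc)  # the two estimates should agree closely
```

If the integral is set up correctly, the quadrature result and the simulated variance should match to a few decimal places.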
But I am not sure if I am over-simplifying it. Does that logic seem correct or am I missing something?
Thank you.
Edited to mention independence and correct formulae mistakes.
Your approach is workable (note the prefactor should be $\tfrac{2}{\pi\sigma^2}$, not $\tfrac{1}{2\pi\sigma^2}$), but there's a much easier way, which @callculus pointed out. Since $X,\,Y$ are independent, $\operatorname{Var}(aX+bY)=a^2\operatorname{Var}X+b^2\operatorname{Var}Y=(a^2+b^2)\operatorname{Var}X$ when $\operatorname{Var}X=\operatorname{Var}Y$, so $\operatorname{Var}\tfrac{X+Y}{2}=\tfrac12\operatorname{Var}X=\sigma^2\left(\tfrac12-\tfrac{1}{\pi}\right)$. (I've used the half-normal variance $\operatorname{Var}X=\sigma^2\left(1-\tfrac{2}{\pi}\right)$ from here.)
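A quick simulation confirms both the half-normal variance and the final result. This is a sketch with an assumed $\sigma = 2$ and an arbitrary seed and sample size:

```python
import numpy as np

sigma = 2.0  # assumed value for the check
rng = np.random.default_rng(1)
n = 2_000_000

x = np.abs(rng.normal(0, sigma, n))   # positive half-normal samples
y = -np.abs(rng.normal(0, sigma, n))  # negative half-normal samples

# Half-normal variance: sigma^2 * (1 - 2/pi)
print(np.var(x), sigma**2 * (1 - 2 / np.pi))

# Var((X+Y)/2) = (1/4)(Var X + Var Y) = sigma^2 * (1/2 - 1/pi)
print(np.var((x + y) / 2), sigma**2 * (0.5 - 1 / np.pi))
```

Each printed pair should agree to a few decimal places, matching the closed form above.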