My question is: do you know any examples where $X$ and $Y$ are both normally distributed, but the two-dimensional vector $(X,Y)$ is not?
I found an example in a book, but I don't understand it. The example is:
Let $f_1$ and $f_2$ both be densities of bivariate (2D) normal distributions with zero means and unit variances (standard normal, but in 2D). $f_1$ has correlation coefficient $q_1$, $f_2$ has $q_2$, with $q_1\neq q_2$.
Then $f=(f_1+f_2)/2$ is the density of a 2D random vector that is not bivariate normal, but whose marginal densities are 1D standard normal.
I can't really prove that.
I need to evaluate this integral: $$\int_{-\infty}^{\infty}\left[\frac{1}{4\pi \sqrt{1-q_1^2}}\, e^{-\frac{x^2+y^2-2q_1xy}{2(1-q_1^2)}} +\frac{1}{4\pi \sqrt{1-q_2^2}}\, e^{-\frac{x^2+y^2-2q_2xy}{2(1-q_2^2)}}\right]dx$$
But I don't know how to handle this.
So my two questions are:
1) How to prove that example?
2) Does anybody know another example where $X$ and $Y$ are 1D normal, but $(X,Y)$ is not?
To understand the example, you should sketch or imagine the graph. The resulting distribution is a "mixture" of two zero-mean bivariate normals with different correlations (mixing random variables = taking a weighted average of densities). To verify that the marginals are Gaussian, you only need to realize that mixing and marginalization commute, by linearity of integration; i.e., if the joint density is given by
$$f_3(X,Y)=\alpha f_1(X,Y)+ (1-\alpha) f_2(X,Y)$$
then the marginals (call them $g_i$) are
$$g_3(X)=\int f_3(X,Y)\, dY = \alpha \int f_1(X,Y)\, dY+ (1-\alpha) \int f_2(X,Y)\, dY = \alpha g_1(X) + (1-\alpha) g_2(X)$$
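You can also check this interchange numerically instead of doing the integral by hand. A minimal sketch (the correlations $q_1=0.3$, $q_2=-0.6$ are just illustrative values): integrate the mixture density over $x$ on a grid and compare with the $N(0,1)$ density at a few values of $y$.

```python
import numpy as np

def bvn_pdf(x, y, q):
    # Bivariate normal density: zero means, unit variances, correlation q.
    return np.exp(-(x**2 + y**2 - 2*q*x*y) / (2*(1 - q**2))) / (2*np.pi*np.sqrt(1 - q**2))

q1, q2 = 0.3, -0.6                      # illustrative correlations, q1 != q2
x, dx = np.linspace(-12.0, 12.0, 24001, retstep=True)   # integration grid for x

for y in (-1.5, 0.0, 2.0):              # a few test points for the remaining variable
    mix = 0.5*bvn_pdf(x, y, q1) + 0.5*bvn_pdf(x, y, q2)  # mixture density f(x, y)
    marginal = np.sum(mix) * dx                          # ≈ ∫ f(x, y) dx
    target = np.exp(-y**2/2) / np.sqrt(2*np.pi)          # N(0, 1) density at y
    print(f"y = {y:+.1f}: marginal = {marginal:.6f}, N(0,1) = {target:.6f}")
```

The two printed values agree at every $y$, which is exactly the statement that the marginal in $y$ is standard normal (and by symmetry the same holds for $x$).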
Hence, the marginal of the mixture is the mixture of the marginals. Now, a mixture of 1D normals is not in general normal, but here $g_1(X)=g_2(X)=N(0,1)$, hence $g_3(X)$ is also $N(0,1)$. To see that $f_3$ itself is not bivariate normal, note that if it were, $X+Y$ would be normal; but here $X+Y$ is a mixture of $N(0,\,2+2q_1)$ and $N(0,\,2+2q_2)$ with different variances (since $q_1\neq q_2$), and such a mixture has positive excess kurtosis, so it cannot be normal.
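Both halves of the claim can be seen by simulation. A sketch (again with illustrative correlations, here $q_1=0.9$, $q_2=-0.9$): sample from the mixture, check that the marginals look like $N(0,1)$, and check that $X+Y$ has clearly positive excess kurtosis, which a jointly normal pair could never produce.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
q1, q2 = 0.9, -0.9                       # illustrative correlations, q1 != q2

def sample_bvn(q, size):
    # (X, Y) jointly normal: zero means, unit variances, correlation q.
    z1 = rng.standard_normal(size)
    z2 = rng.standard_normal(size)
    return z1, q*z1 + np.sqrt(1 - q**2)*z2

# Draw each point from component 1 or component 2 with probability 1/2.
pick = rng.random(n) < 0.5
x1, y1 = sample_bvn(q1, n)
x2, y2 = sample_bvn(q2, n)
x = np.where(pick, x1, x2)
y = np.where(pick, y1, y2)

# The marginals behave like N(0, 1): means ≈ 0, standard deviations ≈ 1.
print(x.mean(), x.std(), y.mean(), y.std())

# But if (X, Y) were jointly normal, X + Y would be normal and would
# have excess kurtosis 0; here it is clearly positive.
s = x + y
excess_kurtosis = np.mean((s - s.mean())**4) / s.var()**2 - 3
print(excess_kurtosis)
```

The large excess kurtosis of $X+Y$ reflects the mixture of the narrow ($2+2q_2=0.2$) and wide ($2+2q_1=3.8$) components; any linear combination of a genuinely Gaussian vector would be Gaussian, so this rules out joint normality.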