The question concerns calculating the expected squared norm of a random projection. We have a 2D subspace $T := \operatorname{span}\{U_1, U_2\}$, where $U_1$ is uniformly distributed over unit vectors in $\mathbb{S}^{d-1}$ and $U_2$ is uniformly distributed over unit vectors in $\mathbb{S}^{d-1}$ orthogonal to $U_1$. We also have $X \sim \mathcal{N}(0, \Sigma)$, a vector in $\mathbb{R}^d$, where $\Sigma = v_1 v_1^T + v_2 v_2^T$ for orthonormal vectors $v_1, v_2 \in \mathbb{S}^{d-1}$. What is $\mathbb{E}\|\Pi_T X\|_2^2$ (as a function of $d$), where $\Pi_T X$ is the orthogonal projection of $X$ onto $T$?
I know that the plane $W$ which maximizes $\mathbb{E}\|\Pi_W X\|_2^2$ is spanned by the top two eigenvectors of the covariance of $X$, which here is $\operatorname{span}\{v_1, v_2\}$. I also know that when you project a $d$-dimensional point $x$ onto a random 2-dimensional subspace $T$, the expected squared norm should be $2\|x\|_2^2/d$ (more generally, $k\|x\|_2^2/d$ for a random $k$-dimensional subspace). I just don't quite know how to show that.
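A quick Monte Carlo sketch of the setup (assuming NumPy; the variable names are my own). By symmetry $\mathbb{E}[\Pi_T] = \tfrac{2}{d}I$, and since $T$ is independent of $X$ this gives $\mathbb{E}\|\Pi_T X\|_2^2 = \tfrac{2}{d}\operatorname{tr}\Sigma = 4/d$, which the simulation can check:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
n_trials = 20_000

# Fixed orthonormal v1, v2 defining Sigma = v1 v1^T + v2 v2^T
v1 = np.zeros(d); v1[0] = 1.0
v2 = np.zeros(d); v2[1] = 1.0

total = 0.0
for _ in range(n_trials):
    # Random 2D subspace T: QR of a d x 2 Gaussian matrix yields an
    # orthonormal pair (U1, U2) with the stated uniform distribution
    # (the column signs are arbitrary, but the span is unaffected).
    Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
    # Sample X ~ N(0, Sigma): X = g1*v1 + g2*v2 with g1, g2 iid N(0, 1)
    g = rng.standard_normal(2)
    X = g[0] * v1 + g[1] * v2
    # ||Pi_T X||^2 = ||Q^T X||^2 since Q has orthonormal columns
    total += np.sum((Q.T @ X) ** 2)

estimate = total / n_trials
print(estimate)  # close to 4/d = 0.2 for d = 20
```

This is only a sanity check under the assumptions above, not a proof; it agrees with the claimed $k\|x\|_2^2/d$ heuristic with $k = 2$ and $\mathbb{E}\|X\|_2^2 = \operatorname{tr}\Sigma = 2$.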
Maybe this can help: http://www.cs.cmu.edu/~avrim/Papers/randomproj.pdf
I think it proves this in the general case.