Lemma 3.2.4 in the book *High-Dimensional Probability* states that for two independent isotropic random vectors $X, Y$ in $\mathbb{R}^n$ (isotropic means $EXX^T = I_n$), we have $E\langle X,Y\rangle^2 = n$, where $\langle X,Y\rangle = X^T Y$.
To prove this, we compute $E\big(E(\langle X,Y\rangle^2\mid Y)\big)$ using the law of total expectation: we get $E(\langle X,Y\rangle^2\mid Y) = \Vert Y\Vert_2^2$, and $E(\Vert Y\Vert_2^2) = n$.
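As a quick numerical sanity check, here is a minimal NumPy sketch, using independent standard Gaussian vectors as one convenient example of independent isotropic vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

# Independent standard Gaussian vectors are isotropic: E[X X^T] = I_n.
X = rng.standard_normal((trials, n))
Y = rng.standard_normal((trials, n))

# Monte Carlo estimate of E<X, Y>^2.
inner = np.einsum("ij,ij->i", X, Y)  # row-wise inner products <X_i, Y_i>
print(np.mean(inner**2))  # should be close to n = 5
```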
What I don't understand is where the assumption of independence is used. The law of total expectation doesn't require independence, so it seems to me the proof would go through without assuming that $X$ and $Y$ are independent.
Vershynin, Roman. *High-Dimensional Probability: An Introduction with Applications in Data Science*. Cambridge Series in Statistical and Probabilistic Mathematics 47. Cambridge: Cambridge University Press, 2018. xiv+284 pp. ISBN 978-1-108-41519-4 (hbk), 978-1-108-23159-6 (ebook). ZBL1430.60005.

We need independence for the step $E(\langle X,Y\rangle^2\mid Y) = \Vert Y\Vert_2^2$. In general, for measurable $\phi(x,y)$ and $g(y) = E(\phi(X,y))$, the identity $E(\phi(X,Y)\mid Y) = g(Y)$ holds when $X$ and $Y$ are independent: conditioning on $Y$ freezes it at a value $y$, and independence lets us average over $X$ with $y$ held fixed. Here $E(\langle X,y\rangle^2) = y^T E(XX^T)\, y = \Vert y\Vert_2^2$ by the isotropy of $X$, which gives the claimed conditional expectation. Without independence, this substitution is not valid.
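To see that the conclusion genuinely fails without independence, take the extreme dependent case $Y = X$: each vector is still isotropic, but for a standard Gaussian $X$ we get $E\langle X,Y\rangle^2 = E\Vert X\Vert_2^4 = n(n+2)$, not $n$ (since $\Vert X\Vert_2^2 \sim \chi^2_n$ has mean $n$ and variance $2n$). A minimal NumPy sketch of this counterexample:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

# Y = X: each vector is isotropic, but the pair is maximally dependent.
X = rng.standard_normal((trials, n))
inner = np.einsum("ij,ij->i", X, X)  # <X, X> = ||X||_2^2

# For standard Gaussian X, E ||X||_2^4 = n(n + 2), not n.
print(np.mean(inner**2))  # close to 35 = 5 * 7, far from n = 5
```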