I am reading the chapter on matrix completion in Algorithmic Aspects of Machine Learning by Ankur Moitra and have a question about subspace coherence, defined as
$\mu(U) = \frac{n}{r}\max_{1 \leq i\leq n }||P_U\mathbf{e}_i||_2^2 $,
where $U$ is an $r$-dimensional subspace of $\mathbb{R}^{n}$ and $P_U$ is the orthogonal projection onto $U$. I understand the bounds $1 \leq \mu(U) \leq \frac{n}{r}$, but I don't understand the sentence
> It is easy to see that if we choose $U$ uniformly at random, then $\mu(U) = \tilde{O}(1)$.
I would really appreciate it if someone could provide more insight or walk me through the reasoning behind this statement.
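For context, here is a quick numerical sanity check I tried (a sketch using NumPy; the construction of a uniformly random subspace via the QR factorization of a Gaussian matrix is my assumption, based on rotation invariance of the Gaussian). It does suggest $\mu(U)$ stays far below the worst-case bound $\frac{n}{r}$, on the order of $\log n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 1000, 10

# Sample a uniformly random r-dimensional subspace U of R^n:
# the column span of a Gaussian matrix is rotation-invariant,
# so an orthonormal basis Q of it is uniform over subspaces.
G = rng.standard_normal((n, r))
Q, _ = np.linalg.qr(G)  # columns of Q form an orthonormal basis of U

# Since P_U = Q Q^T, we have ||P_U e_i||^2 = ||Q^T e_i||^2,
# i.e. the squared norm of the i-th row of Q.
row_norms_sq = np.sum(Q**2, axis=1)
mu = (n / r) * row_norms_sq.max()

print(f"mu(U) = {mu:.2f},  log n = {np.log(n):.2f},  n/r = {n / r:.0f}")
```

but I don't see how to turn this observation into the $\tilde{O}(1)$ argument.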