Given a random matrix $X \in \mathbb R^{n \times n}$ with i.i.d. standard Gaussian entries, let $M = \frac{1}{n} X^T X$ and define $$h \equiv \left\lVert M - I_n \right\rVert_{\max} \equiv \max_{i,j} |M_{i,j} - \delta_{i,j}|.$$ I am attempting to show that $h$ converges to $0$ in probability as $n \to \infty$.
Numerically, in this notebook with numpy, I indeed observe that $h$ seems to converge to $0$ as $n$ grows.
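For reference, here is a minimal sketch of the kind of numerical check described above (one draw of $X$ per value of $n$; the sizes are illustrative):

```python
import numpy as np

# Compute h = max_{i,j} |M_{ij} - delta_{ij}| for a few growing n;
# h appears to shrink as n grows (single draw per n, for illustration).
rng = np.random.default_rng(0)
for n in [100, 400, 1600]:
    X = rng.standard_normal((n, n))
    M = X.T @ X / n
    h = np.max(np.abs(M - np.eye(n)))
    print(n, h)
```

On typical runs $h$ decreases roughly like $\sqrt{\log n / n}$, consistent with the claimed convergence.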
Now, to prove it, I know that there are some results regarding the local isotropic law, as in this paper.
So basically, I am wondering whether I could use the resolvent $R(z) = (M - zI_n)^{-1}$, for instance via the fact that $-z(zR(z) + I) \underset{|z| \to \infty}{\longrightarrow} M$. However, I believe the bounds provided in that paper are mostly useful in a neighbourhood of the spectrum of $M$.
On the other hand, I believe I could use the fact that for $i \neq j$, $\sqrt{n}\,M_{i,j}$ is approximately $\mathcal N(0,1)$ when $n$ is large, and try to bound the probability $P( \max_{i \neq j} |M_{i,j}| \leq \alpha)$ for some $\alpha$; but this is not so easy, as the entries $M_{i,j}, M_{k,l}$ are not independent.
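As a quick sanity check of that approximate normality (assuming $n = 500$ counts as "large enough" — the value is illustrative), the rescaled off-diagonal entries should have mean close to $0$ and standard deviation close to $1$:

```python
import numpy as np

# Rescaled off-diagonal entries sqrt(n) * M_{ij}, i != j, should look
# approximately standard normal for large n.
rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal((n, n))
M = X.T @ X / n
off = np.sqrt(n) * M[~np.eye(n, dtype=bool)]
print(off.mean(), off.std())  # both close to 0 and 1 respectively
```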
Would you know another method to solve this properly?
Thank you in advance!
Here is a sketch of an argument showing convergence in probability. Fix a small $\varepsilon>0$. First of all, by a union bound,
$$\mathsf{P}\left(\max_{i,j}\left|M_{ij}-\delta_{ij}\right|>\varepsilon\right)=\mathsf{P}\left(\exists i, j : \left|M_{ij}-\delta_{ij}\right|>\varepsilon\right)\le \sum_{i,j}\mathsf{P}\left(\left|M_{ij}-\delta_{ij}\right|>\varepsilon\right).$$
Hence, we only need a good upper bound on the tails of $M_{ij}=\frac{1}{n}\sum_{k=1}^n X_{ki} X_{kj}$. Fix $i\neq j$ -- the case $i=j$ is handled by a similar reasoning (centering $X_{ki}^2$ at its mean $1$). The random variables $(X_{ki} X_{kj})_k$ are independent and subexponential. Therefore, by Bernstein's inequality, there is a universal constant $K>0$ such that
\begin{align} \mathsf{P}\left(|M_{ij}|>\varepsilon\right)\le 2\exp(-Kn\varepsilon^2). \end{align}
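This exponential tail decay can be checked empirically for a single off-diagonal entry; below is a minimal Monte Carlo sketch, where $n$, $\varepsilon$, and the number of trials are illustrative choices, not tied to any specific constant $K$:

```python
import numpy as np

# M_{ij} = (1/n) sum_k X_{ki} X_{kj} for one fixed pair i != j is a mean
# of n independent products of standard normals; estimate its tail.
rng = np.random.default_rng(2)
n, eps, trials = 200, 0.2, 10_000
a = rng.standard_normal((trials, n))  # column i of X across trials
b = rng.standard_normal((trials, n))  # column j of X across trials
m_ij = (a * b).mean(axis=1)
print((np.abs(m_ij) > eps).mean())  # small, consistent with exp(-K n eps^2)
```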
Summing these inequalities over $i,j\in [n]$ gives a bound of order $n^2 e^{-Kn\varepsilon^2}$, which, provided that $\varepsilon>0$ is a small enough constant, goes to $0$ as $n\to\infty$, since the exponential decay beats the polynomial factor $n^2$.
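The final summation can be sketched numerically; here $K$ and $\varepsilon$ are illustrative placeholders, not the actual Bernstein constant:

```python
import numpy as np

# Union bound: n^2 * 2*exp(-K*n*eps^2). With placeholder values
# K = 0.1, eps = 0.1, the bound tends to 0: the exponential in n
# eventually dominates the n^2 factor.
K, eps = 0.1, 0.1
for n in [10**3, 10**4, 10**5]:
    bound = n**2 * 2 * np.exp(-K * n * eps**2)
    print(n, bound)
```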