I came across the following in a book:
where the step before this showed that $\textbf{K}_n(l)=\frac{l}{n}\pmb{\Lambda}^{1/2}\textbf{Y}_1\left(l\textbf{I}_n-\frac{1}{n}\textbf{X}_2^*\textbf{X}_2\right)^{-1}\textbf{Y}_1^*\pmb{\Lambda}^{1/2}$.
From what I understand, the assumptions made ensure that the matrix $l\textbf{I}_n-\frac{1}{n}\textbf{X}_2^*\textbf{X}_2$ is invertible, and therefore $\left(l\textbf{I}_n-\frac{1}{n}\textbf{X}_2^*\textbf{X}_2\right)^{-1}$ has bounded operator norm. The authors then go on to find the limit of $\textbf{K}_n(l)$ using the law of large numbers.
My questions are:
- Why is the operator norm of a singular matrix unbounded?
- What implication does the boundedness/unboundedness of the operator norm have for the law of large numbers in this case?
Edit: $\textbf{S}_{22}:=\frac{1}{n}\textbf{X}_2\textbf{X}_2^*$, which has the same non-zero eigenvalues as $\frac{1}{n}\textbf{X}_2^*\textbf{X}_2$.
In your context, it is important that the spectral norm of the inverse of the matrix $\ell I_n - \frac{1}{n} X_2^* X_2$ is bounded. If the matrix is singular, the inverse does not exist at all, so asking for its norm does not even make sense. (For another perspective: the spectral norm of a symmetric matrix is its largest eigenvalue in absolute value. Diagonalise the matrix and take the reciprocals of the eigenvalues on the diagonal; this defines the inverse whenever all eigenvalues are non-zero. If an eigenvalue is zero, its reciprocal is undefined, and as an eigenvalue tends to zero the corresponding reciprocal, and hence the spectral norm of the inverse, grows without bound.)
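A quick numerical sketch of this last point, with a small random matrix $S$ standing in for $\frac{1}{n}X_2^* X_2$ (the setup here is my own, just for illustration): as $\ell$ approaches the largest eigenvalue of $S$, the spectral norm of the resolvent $(\ell I - S)^{-1}$ blows up like the reciprocal of the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: S plays the role of (1/n) X_2^* X_2.
n = 5
X2 = rng.standard_normal((n, n))
S = (X2.T @ X2) / n

lam_max = np.linalg.eigvalsh(S).max()

# Spectral norm of the resolvent (l I - S)^{-1} as l approaches lam_max.
# Here it equals exactly 1/gap: the eigenvalues of the resolvent are
# 1/(l - lam_i), and the largest of these is 1/(l - lam_max) = 1/gap.
for gap in [1.0, 0.1, 0.01, 0.001]:
    l = lam_max + gap
    resolvent = np.linalg.inv(l * np.eye(n) - S)
    print(f"gap={gap:7.3f}  ||(lI - S)^-1|| = {np.linalg.norm(resolvent, 2):.3f}")
```

At `gap = 0` the matrix $\ell I - S$ is singular and `np.linalg.inv` would fail outright, which is the numerical counterpart of "the inverse does not exist".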
Boundedness of the operator norm of the matrix $(\ell I_n - \frac{1}{n} X_2^* X_2)^{-1}$ means that the random variable $\frac{1}{n} \mathrm{tr}(\ell I_n - \frac{1}{n} X_2^* X_2)^{-1}$ is also bounded. Indeed, since the inverse is symmetric, its trace is the sum of its eigenvalues, and each eigenvalue is at most the spectral norm in magnitude. So if $\lVert (\ell I_n - \frac{1}{n} X_2^* X_2)^{-1} \rVert \leq M$, then $\frac{1}{n} \mathrm{tr} (\ell I_n - \frac{1}{n} X_2^* X_2)^{-1} \leq \frac{1}{n} \sum_{i=1}^n M = M$. A bounded random variable necessarily has finite expectation, and so the strong law of large numbers can be applied to deduce that the normalized trace converges almost surely to its expectation.
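A hedged numerical illustration of the trace bound and the concentration it enables (again using an iid Gaussian $X_2$ as a stand-in for whatever distribution the book assumes, and $\ell = 6$ chosen safely above the spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_trace_resolvent(n, l=6.0):
    # Stand-in model: X2 has iid standard normal entries, S = (1/n) X2^T X2.
    X2 = rng.standard_normal((n, n))
    S = (X2.T @ X2) / n
    inv = np.linalg.inv(l * np.eye(n) - S)
    M = np.linalg.norm(inv, 2)   # spectral norm: bounds every eigenvalue of inv
    t = np.trace(inv) / n        # normalized trace: average of those eigenvalues
    return t, M

# The normalized trace never exceeds the spectral norm M, and it
# stabilizes as n grows -- the LLN/concentration effect used in the book:
for n in [50, 200, 800]:
    t, M = normalized_trace_resolvent(n)
    print(f"n={n:4d}  (1/n)tr = {t:.4f}  <=  M = {M:.4f}")
```

The bound $\frac{1}{n}\mathrm{tr} \leq M$ is just "the average of the eigenvalues is at most the largest one"; the stabilization of the printed values as $n$ grows is the almost-sure convergence the authors invoke.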