Suppose you have an $n\times p$ tall matrix $\mathbf{X}$, where $n \gg p$. I need a quick way to compute the diagonal entries of $(\mathbf{X}^\top \mathbf{X})^{-1}$ for some confidence intervals of regression coefficients. Since $\mathbf{X}^\top \mathbf{X}$ is positive definite given linearly independent columns, my initial thought was to do Cholesky decomposition, but I don't know where to take it from there. An iterative method is also fine.
Any help would be appreciated. Thanks!
Once you have a Cholesky decomposition $ X^T X = L L^T$ you have $$(X^T X)^{-1} = (L L^T)^{-1} = (L^{-1})^T L^{-1}$$
For a matrix $A,$ the $i$-th column of $A$ is given by $Ae_i,$ and the $i$-th diagonal entry of a square matrix $A$ is thus given by $e_i^T A e_i.$
Therefore, the $i$-th diagonal entry of $(X^T X)^{-1}$ is given by
$$ e_i^T (X^T X)^{-1} e_i = e_i^T (L^{-1})^T L^{-1} e_i =(L^{-1}e_i)^T L^{-1} e_i = \| L^{-1} e_i \|^2 $$
That is, the $i$-th diagonal entry of $(X^T X)^{-1}$ is the squared norm of the $i$-th column of $L^{-1}.$
Further, note that for numerical stability and performance you should compute $L^{-1} e_i$ by solving the triangular system $Lx_i = e_i$ via forward substitution (since $L$ is lower triangular), rather than forming $L^{-1}$ explicitly.
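To illustrate, here is a minimal NumPy/SciPy sketch of the recipe (the data `X` is just a random placeholder): Cholesky-factor $X^TX$, solve the triangular system for all $e_i$ at once, and take squared column norms. The result is checked against direct inversion.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical tall design matrix, n >> p
rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.standard_normal((n, p))

A = X.T @ X
L = np.linalg.cholesky(A)  # A = L L^T, L lower triangular

# Solve L Z = I by forward substitution; column i of Z is L^{-1} e_i
Z = solve_triangular(L, np.eye(p), lower=True)

# Diagonal of (X^T X)^{-1} = squared column norms of L^{-1}
diag = (Z**2).sum(axis=0)

# Sanity check against explicit inversion (fine for small p)
assert np.allclose(diag, np.diag(np.linalg.inv(A)))
```

This costs one Cholesky factorization ($O(p^3)$ after the $O(np^2)$ Gram product) plus $p$ triangular solves, and never forms the full inverse.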