In machine learning, we often come across "centering matrices" of the form:
$H_n := I_n - \frac{1}{n}1_n 1_n^{T}$, where $1_n:=[1,1,\dots,1]^{T}$ is the column vector of all ones. I'd like to calculate the Moore–Penrose pseudoinverse of $H_n$ (not numerically, but as an explicit formula or closed-form expression). Any help appreciated!
Let $e=(1,\dots,1)^T$, and let $v_2,\dots,v_n$ be an orthonormal basis for $\{e\}^\bot$. Then with $v_1 = {1 \over \sqrt{n}} e$, we have $H_n v_1 = 0$ and, for $k > 1$, $H_n v_k = v_k - \frac{1}{n} e\,(e^T v_k) = v_k$, since $e^T v_k = 0$. Let $V$ be the orthogonal matrix whose columns are the $v_k$; then $H_n = V \Lambda V^T$, where $\Lambda = \operatorname{diag}(0,1,\dots,1)$.
This is an SVD of $H_n$, and the pseudoinverse of a diagonal matrix inverts its nonzero entries, so $\Lambda^\dagger = \Lambda$. It follows that $H_n^\dagger = V \Lambda^\dagger V^T = V \Lambda V^T = H_n$.
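A quick numerical sanity check of this conclusion with NumPy (the choice $n = 5$ is arbitrary):

```python
import numpy as np

n = 5
# Centering matrix H_n = I_n - (1/n) 1_n 1_n^T
H = np.eye(n) - np.ones((n, n)) / n

# The claim: pinv(H_n) == H_n (H_n is symmetric and idempotent).
H_pinv = np.linalg.pinv(H)
print(np.allclose(H_pinv, H))  # True
```

The same check passes for any $n \ge 2$, consistent with $H_n^\dagger = H_n$.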