From probability, we have $$\Sigma_X = \frac{\left(HX\right)^T\left(HX\right)}{N}$$ where $\Sigma_X$ is the covariance matrix of a data matrix $X$. $X$ is constructed by stacking the data points as rows, so the first row is the first data point, the second row is the second data point, and so forth. $H$ is the centering matrix $I - \frac{\mathbf{1}\mathbf{1}^T}{N}$, where $\mathbf{1}$ is the $N \times 1$ vector of ones.
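To make the setup concrete, here is a minimal numerical sketch (the dimensions, the Gaussian data, and the comparison against `np.cov` with `bias=True`, i.e., division by $N$, are my own choices to match the formula above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3                    # N data points with d features (arbitrary)
X = rng.normal(size=(N, d))      # data matrix: one data point per row

H = np.eye(N) - np.ones((N, N)) / N    # centering matrix I - 11^T / N

Sigma_X = (H @ X).T @ (H @ X) / N      # covariance via the formula above

# np.cov with rowvar=False treats rows as observations; bias=True divides by N
assert np.allclose(Sigma_X, np.cov(X, rowvar=False, bias=True))
```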
Simplifying $\Sigma_X$, and noting that $H$ is symmetric ($H^T = H$), $$\frac{\left(HX\right)^T\left(HX\right)}{N} = \frac{X^TH^T H X }{N} = \frac{X^T H ^2 X}{N} = \frac{X^THX}{N}$$
This both makes sense and doesn't. At first glance, centering a matrix once and then centering it again does nothing the second time around: the matrix $H$ is idempotent. So when we lose the square above the $H$, I'm okay with that.
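Both properties used above are easy to verify numerically (a quick sketch, with $N$ chosen arbitrarily):

```python
import numpy as np

N = 500
H = np.eye(N) - np.ones((N, N)) / N    # centering matrix

assert np.allclose(H, H.T)     # H is symmetric, so H^T H = H^2
assert np.allclose(H @ H, H)   # H is idempotent, so H^2 = H
```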
However, digging a little deeper, it doesn't make sense that only one of my data matrices ends up centered, when I should have two centered data matrices to properly compute the covariance.
Going to the 1D case, it seems that $$E[(X - \mu_X)^2] \neq E[X(X - \mu_X)]$$
Here it doesn't make sense for only one copy of the random variable $X$ to be centered while the other one isn't.
Let $Y=X-\mathbb{E}\{X\}=X-\mu_X$ be the centered version of $X$. Your question is that you don't "see" why it is the case that $$\mathrm{Var}\{Y\}=\mathrm{Cov}\{X,Y\}=\mathrm{Var}\{X\}$$
You have realized by now that, with $H$ being idempotent, this is actually the case. Noting that $\mathbb{E}\{Y\}=\mu_Y=0$, we can also see that \begin{align} \mathrm{Var}\{Y\}&=\mathbb{E}\{(Y-\mu_Y)^2\}=\mathbb{E}\{Y^2\}=\mathbb{E}\{(X-\mu_X)^2\}=\mathrm{Var}\{X\} \end{align} and furthermore, \begin{align} \mathbb{E}\big[(X - \mu_X)^2\big] &= \mathbb{E}\big[X^2 - 2X\mu_X+\mu_X^2\big]\\ &= \mathbb{E}\big[X^2\big] - 2\mathbb{E}\big[X\big]\mu_X+\mu_X^2\\ &= \mathbb{E}\big[X^2\big] - 2\mu_X^2+\mu_X^2\\ &= \mathbb{E}\big[X^2\big] - \mu_X^2\\ &= \mathbb{E}\big[X^2\big]-\mathbb{E}\big[X\big]\mu_X\\ &= \mathbb{E}\big[X^2-X\mu_X\big]\\ &= \mathbb{E}\big[X(X - \mu_X)\big]\\ &= \mathbb{E}\big[XY\big]\\ &= \mathbb{E}\big[XY\big]-\mu_X\mu_Y \quad (\text{since } \mu_Y = 0)\\ &=\mathrm{Cov}\{X,Y\} \end{align}
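This chain of equalities, including the $\mathbb{E}[(X-\mu_X)^2] = \mathbb{E}[X(X-\mu_X)]$ identity from your 1D example, can be sanity-checked on a sample (a sketch; the sample size and distribution are arbitrary, and the sample mean stands in for $\mu_X$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=10_000)   # arbitrary non-centered sample
Y = X - X.mean()                                  # centered version of X

var_X  = np.mean((X - X.mean())**2)   # Var{X} = E[(X - mu_X)^2]
var_Y  = np.mean((Y - Y.mean())**2)   # Var{Y}
cov_XY = np.mean(X * Y)               # E[XY]; since mu_Y = 0, this is Cov{X, Y}

assert np.allclose(var_X, var_Y)      # Var{Y} = Var{X}
assert np.allclose(var_X, cov_XY)     # E[(X - mu_X)^2] = E[X(X - mu_X)]
```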
This can also be understood by the fact that $\mathrm{Cov}\{U,V\}=\mathrm{Cov}\{U+a,V+b\}$, with $a,b$ constants, for all random variables $U,V$. If we use this property with $U=X$, $a=0$, $V=X$ and $b=-\mu_X$ (so that $V+b=Y$), we obtain the same result again.
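The shift-invariance itself is also easy to illustrate numerically (a sketch; the particular $U$, $V$, $a$, $b$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(size=10_000)
V = 0.5 * U + rng.normal(size=10_000)   # some variable correlated with U
a, b = 3.0, -7.2                        # arbitrary constant shifts

cov_UV      = np.cov(U, V, bias=True)[0, 1]          # Cov{U, V}
cov_shifted = np.cov(U + a, V + b, bias=True)[0, 1]  # Cov{U + a, V + b}

assert np.allclose(cov_UV, cov_shifted)
```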
Why am I relating the result to a covariance? To attempt an intuitive explanation beyond these analytical proofs.
Recall that the covariance of $U$ and $V$ is a measure of the joint variability of these two variables, i.e., how they change with respect to one another. Shifting one or both of them by a constant term does not change the behavior of this (relative) joint variability.
Now, $\mathrm{Var}\{U\}=\mathrm{Cov}\{U,U\}$, i.e., variance is a measure of the variability of a random variable against itself, a dispersion with respect to its mean (in quadratic terms), but it is also a covariance. Hence, it does not change when we shift one or both copies of $U$ in $\mathrm{Cov}\{U,U\}$ by any constant, in particular by its mean.