In the book Matrix Analysis and Applied Linear Algebra, the author describes the coefficient of linear correlation as $$\frac{(x-\mu_xe)^T(y-\mu_ye)}{||x-\mu_xe||\cdot||y-\mu_ye||}$$ where $x,y\in \mathbb{R}^n, e=(1,1,...,1)^T, \mu_x=\frac{e^Tx}{n}, \mu_y=\frac{e^Ty}{n}$.
Then the author says the correlation coefficient is zero if and only if $x$ and $y$ are orthogonal. This seems incorrect to me unless $\mu_x=0$ and $\mu_y=0$. I looked through the book's errata online but did not find any mention of this. Am I missing something here?
If you take $x^T=(1,0)$ and $y^T=(0,1)$, you see that although they are orthogonal, their correlation coefficient is not zero (in fact it equals $-1$). So this is a counterexample to one direction of the claim you reported. What need to be orthogonal are the centered vectors $x-\mu_x e$ and $y-\mu_y e$.
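A quick numerical check of this counterexample, using the formula from the question (a sketch with NumPy; the variable names are my own):

```python
import numpy as np

# Counterexample: x and y are orthogonal, yet their correlation is not zero.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
n = len(x)
e = np.ones(n)

# x . y == 0, so the vectors themselves are orthogonal
dot = x @ y

# Centered vectors, as in the book's formula: x - mu_x e and y - mu_y e
xc = x - (e @ x / n) * e   # (0.5, -0.5)
yc = y - (e @ y / n) * e   # (-0.5, 0.5)

# Correlation coefficient per the formula in the question
r = (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(dot)  # 0.0 -- orthogonal
print(r)    # -1.0 -- the centered vectors are antiparallel, not orthogonal
```

So orthogonality of $x$ and $y$ gives no information about the correlation; only orthogonality of the centered vectors makes it zero.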
What is written above answers with reference to your formula, but keep in mind that in the usual definition of the (sample) correlation coefficient, the mean of $x$ is taken over all the observations of the random variable $x$ that you have, not over the components of a single vector.
See Wikipedia for more details: https://en.m.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient