Consider a random column vector $\mathbf{x}$, of dimension $m$. That is, it is a random vector, composed of $m$ random variables. The PDF of the random vector $\mathbf{x}$ is thus the joint-PDF of its $m$ random variable components.
Suppose one wishes to whiten the random variables of this vector: that is, to de-correlate them and then scale their variances to unity. One way to do this is to compute an $m \times m$ whitening matrix $\mathbf{V}$, and one way to compute such a matrix is:
$$ \mathbf{V} = \mathbf{E} \mathbf{D}^{-\frac{1}{2}} \mathbf{E}^{T} $$
where $\mathbf{E}$ is the matrix whose columns are the eigenvectors, and $\mathbf{D}$ the corresponding diagonal eigenvalue matrix, obtained from the eigendecomposition of $\mathbf{x}$'s covariance matrix, $\mathbf{R}_{xx} = \mathbf{E}\mathbf{D}\mathbf{E}^T$. Here $\mathbf{D}^{-\frac{1}{2}}$ denotes the inverse matrix square root. In fact, many other types of whitening matrices can be constructed.
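As a concrete illustration, here is a minimal NumPy sketch of this construction (the data-generation step and tolerances are my own assumptions, just to have something to whiten):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample data: n observations of an m-dimensional random vector x (rows = samples),
# mixed so the components are correlated.
n, m = 1000, 4
X = rng.standard_normal((n, m)) @ rng.standard_normal((m, m))

# Sample covariance matrix R_xx of the centered data.
Xc = X - X.mean(axis=0)
R = Xc.T @ Xc / (n - 1)

# Eigendecomposition R_xx = E D E^T (eigh, since R is symmetric).
d, E = np.linalg.eigh(R)

# Whitening matrix V = E D^{-1/2} E^T; the "inverse matrix square root"
# reduces to 1/sqrt of the eigenvalues, because D is diagonal.
V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T

# The whitened data should have (approximately) identity covariance.
Y = Xc @ V.T
C = Y.T @ Y / (n - 1)
print(np.allclose(C, np.eye(m)))  # True
```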
My question: to my knowledge, any construction of a whitening matrix involves an inverse operation somewhere. This is fine when the dimensionality is small, but I hesitate to use this method for larger values of $m$.
What other methods of computing a whitening matrix might exist that do not involve the computation of inverses?
Much obliged.
Let $X$ be the original random vector with covariance matrix $\Sigma$. Since $\Sigma$ is symmetric (Hermitian in the complex case), it admits an eigendecomposition with an orthonormal eigenbasis:
$\Sigma = E \Lambda E^T$
Now, $W = (E \Lambda^{1/2})^{-1}$ is a whitening matrix, since the covariance matrix of $WX$ is then $I$. The good thing about having an orthonormal basis is that $E^{-1} = E^T$, hence $W = \Lambda^{-1/2} E^{T}$. You therefore only need to invert the diagonal matrix $\Lambda$, whose inverse is simply the diagonal matrix of reciprocal entries. This is not computationally expensive at all.
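A short NumPy sketch of this point (the example covariance matrix here is made up for illustration): the only "inverse" in $W = \Lambda^{-1/2} E^T$ is an elementwise reciprocal square root of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a symmetric positive-definite covariance matrix Sigma.
m = 5
A = rng.standard_normal((m, m))
Sigma = A @ A.T + m * np.eye(m)

# Eigendecomposition Sigma = E Lambda E^T with orthonormal E.
lam, E = np.linalg.eigh(Sigma)

# W = Lambda^{-1/2} E^T: inverting the diagonal matrix is just 1/sqrt(lam)
# elementwise -- no general matrix inversion is performed.
W = np.diag(1.0 / np.sqrt(lam)) @ E.T

# Check: W Sigma W^T = I, i.e. WX would have identity covariance.
print(np.allclose(W @ Sigma @ W.T, np.eye(m)))  # True
```

Note that this $W$ differs from the $\mathbf{V} = \mathbf{E}\mathbf{D}^{-1/2}\mathbf{E}^T$ in the question only by the leading rotation $\mathbf{E}$; both whiten $X$.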