I'm in the process of reading this paper, and I came across a peculiar weighted norm (as the authors describe it) on page 7,
\begin{align} ||\mathbf{x}||_{W}^{2}=\mathbf{x}^{\text{T}}W^{\text{T}}W\mathbf{x} \end{align}
but I'm having trouble finding information about it by Googling and on this site. Has anyone come across it, or could someone point me toward a resource where I might learn more about it?
I'm only asking because it looks similar to the simpler Mahalanobis distance metric, and the authors use it in a similar way, i.e., to model the probability of observation noise.
I'm not sure if this is helpful, but note that $$\mathbf{x}^TW^TW\mathbf{x} = (W\mathbf{x})^TW\mathbf{x} \color{red}{=} (W\mathbf{x}) \cdot (W\mathbf{x}) = \|W\mathbf{x}\|^2,$$
where at the red equals sign we identify the $1\times 1$ matrix $(W\mathbf{x})^TW\mathbf{x}$ with its single scalar entry, i.e., the dot product. So $\|\cdot\|_W$ is just the ordinary Euclidean norm applied after the linear map $W$.
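A quick numerical sanity check of this identity (a sketch using NumPy, with an arbitrary random matrix $W$ and vector $\mathbf{x}$ standing in for whatever the paper uses):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))  # hypothetical weight matrix
x = rng.standard_normal(3)       # hypothetical vector

# Weighted squared norm as written in the paper: x^T W^T W x
weighted = x @ W.T @ W @ x

# Same quantity computed as the ordinary squared Euclidean norm of W x
plain = np.linalg.norm(W @ x) ** 2

# The two agree up to floating-point precision
print(np.isclose(weighted, plain))
```

Note this also shows why $W$ need not be square or invertible for $\|\mathbf{x}\|_W^2$ to be well defined: any $W$ with the right number of columns works, and $W^TW$ is always positive semidefinite.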