Apologies for the vague question; if I could be more specific, I could probably find the source myself. I read about this idea a few months ago. I believe it comes from a fairly recent thesis (a PhD thesis, perhaps), but I can't remember the author, the year, or the institution.
The main idea I remember is something like in the figure here:
The thesis essentially derives a new statistical testing method, and the figure shows an illustrative example for comparing two populations. The argument is roughly this: by finding the data's eigenspace (the diagonal line in the figure) and comparing the two populations (the scattered dots) along it, the estimated distributions have smaller variability, which naturally increases the statistical significance when we try to distinguish their means. (You can see that the two bell curves along the horizontal axis are broader than the ones after the transformation, which sit on the diagonal line.)
Hopefully my description is enough for somebody familiar with the field. I'm not demanding the exact source; it would be enough if anybody could point me towards anything similar, and I can check the relevant literature myself. But frankly, right now I don't even know which keywords to put into Google Scholar.
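In case it helps pin down what I mean, here is my own toy reconstruction of the figure, not anything from the thesis itself. I project the two clouds once onto the horizontal axis and once onto a fitted direction; as the fitted direction I use the Fisher discriminant direction $S_w^{-1}(\bar{y}-\bar{x})$, which is just my guess at what "finding the data eigenspace" might formalize:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the figure: two clouds whose means differ along the
# diagonal, with small within-group spread along that diagonal and
# large spread perpendicular to it.
n = 200
d = np.array([1.0, 1.0]) / np.sqrt(2)    # the diagonal line
p = np.array([-1.0, 1.0]) / np.sqrt(2)   # perpendicular direction
basis = np.vstack([d, p])                # rows are the basis vectors
noise = rng.normal(size=(2 * n, 2)) * [0.2, 1.5]
x = noise[:n] @ basis                    # population 1, mean at the origin
y = noise[n:] @ basis + d                # population 2, shifted along the diagonal

def t_stat(a, b):
    """Two-sample pooled-variance t statistic for 1-D samples."""
    na, nb = len(a), len(b)
    s2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(s2 * (1 / na + 1 / nb))

# Comparing the raw horizontal coordinates: broad, overlapping bell curves.
t_raw = t_stat(x[:, 0], y[:, 0])

# Comparing after projecting onto the Fisher discriminant direction
# S_w^{-1} (mean difference); in this toy setup it recovers the diagonal.
Sw = np.cov(np.vstack([x - x.mean(0), y - y.mean(0)]).T)
w = np.linalg.solve(Sw, y.mean(0) - x.mean(0))
w /= np.linalg.norm(w)
t_proj = t_stat(x @ w, y @ w)

print(f"|t| along the horizontal axis: {abs(t_raw):.1f}")
print(f"|t| along the fitted direction: {abs(t_proj):.1f}")
```

On this toy data the |t| along the fitted direction comes out much larger than along the horizontal axis, which is the effect I remember the figure illustrating. Maybe that at least narrows down the family of methods.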
