In an exercise they asked me: "Why could we use the following correction factor? $\text{varianceX} = \frac{n-1}{n}*\text{varianceY}$
What I said was, basically: because the unbiased sample variance has a factor of $\frac{n}{n-1}$, we could multiply it by $\frac{n-1}{n}$ to cancel Bessel's correction and recover the biased variance.
But what would be the utility of calculating a biased variance? Maybe I'm wrong about something...
Well, after reading a lot on Wikipedia, I think this is the answer:
When computing the expected value of the uncorrected sample variance, a factor of $\frac{n-1}{n}$ appears, so this estimator underestimates the population variance. That is why the sample variance is usually multiplied by the factor $\frac{n}{n-1}$, giving what is generally known as the unbiased (or corrected) sample variance.
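To spell out the step behind that factor (for i.i.d. observations $X_1,\dots,X_n$ with population variance $\sigma^2$ and sample mean $\bar X$):

$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(X_i-\bar X)^2\right] = \frac{n-1}{n}\,\sigma^2,$$

so multiplying the left-hand estimator by $\frac{n}{n-1}$ makes its expectation exactly $\sigma^2$.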
The problem is that correcting the bias produces a larger mean squared error (MSE), so one can instead choose a scaling factor that behaves better than the corrected sample variance. This always means scaling down, i.e. choosing an $a$ greater than $n-1$, such that:
$S^2_a = \frac{n-1}{a}S^2_{n-1}$
In my case $a = n$.
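The bias/MSE trade-off is easy to check numerically. Here is a small Monte Carlo sketch in Python (the sample size, true variance, and number of trials are arbitrary choices of mine): for normal data, the $a=n$ (biased) estimator has nonzero bias but a *smaller* MSE than the Bessel-corrected one.

```python
import numpy as np

rng = np.random.default_rng(0)

true_var = 4.0    # samples drawn from N(0, sigma^2) with sigma^2 = 4 (assumed for the demo)
n = 10            # small sample size, where the difference is visible
trials = 200_000  # Monte Carlo repetitions

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

# Unbiased (Bessel-corrected) estimator: divisor n - 1  (a = n - 1)
s2_unbiased = samples.var(axis=1, ddof=1)
# Biased estimator: divisor n, i.e. the unbiased one times (n-1)/n  (a = n)
s2_biased = samples.var(axis=1, ddof=0)

for name, est in [("divisor n-1", s2_unbiased), ("divisor n", s2_biased)]:
    bias = est.mean() - true_var
    mse = ((est - true_var) ** 2).mean()
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```

The biased estimator shows a bias near $-\sigma^2/n$ but a lower MSE, which is exactly the point above: trading a little bias for less overall error.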