I have been looking into the implications of this definition, and into why it was ever introduced at all.
The best answer I have found so far is that for a single data point the variance would be zero (which makes perfect sense to me), while its quasi-variance would be indeterminate.
Does anybody have more ideas about it?
Comment: The term quasi-variance is not widely used. According to what I can immediately find on Google, quasi-variance seems to refer to $\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ as contrasted with the "variance" $\frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2.$
In common usage, the first expression is usually referred to as the sample variance $S^2$ of data $X_1, \dots, X_n.$ There are various reasons for dividing by $n - 1$ instead of $n.$ One of them is that if $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2,$ then $E(S^2) = \sigma^2,$ the variance of the population from which the sample was randomly chosen; that is, $S^2$ is an unbiased estimator of $\sigma^2.$ If you have only a single observation, you have no basis for judging the dispersion of the population from which it was drawn.
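The contrast between the two divisors, including the single-observation case raised in the question, can be seen with a small Python sketch using the standard-library `statistics` module, whose `variance` uses the $n-1$ divisor and whose `pvariance` uses the $n$ divisor (the data values here are just an illustrative example):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean is 5.0

# Divide by n - 1: the "quasi-variance" / sample variance S^2
s2 = statistics.variance(data)    # 32 / 7 ≈ 4.571

# Divide by n: the population-style variance
v = statistics.pvariance(data)    # 32 / 8 = 4.0

# With a single observation, the n-divisor gives 0 ...
print(statistics.pvariance([5.0]))  # 0.0

# ... while the (n - 1)-divisor is a 0/0 form, and the library
# refuses to compute it at all:
try:
    statistics.variance([5.0])
except statistics.StatisticsError:
    print("variance requires at least two data points")
```

This mirrors the point above: from one observation there is simply no information about dispersion, and the $n-1$ formula makes that explicit by being undefined.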
Addendum: I defer to @L.V.Rao's mention of Firth's paper on 'quasivariances'. However, from the wording of the original Question I'm not sure this is what OP had in mind.