Traditional linear regression (a statistical technique for estimating the conditional mean) comes with a goodness-of-fit measure usually labeled $R^{2}$, the coefficient of determination. It equals the squared correlation between the fitted values and the actual values, $\rho^{2}(Y, \hat{Y})$.
A definition that can give values quite different from $\rho^{2}$ (presumably because it relies on the error sample mean being zero) is $$R^{2} = 1 - \frac{V(Y - \hat{Y})}{V(Y)}.$$ This has an intuitive interpretation as the proportion of explained variance, though it seems to lack a probabilistic basis.
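As a quick sanity check (a NumPy sketch on made-up data; the rescaled predictor stands in for any $\hat{Y}$ not produced by least squares), the two definitions coincide for an OLS fit with an intercept, but diverge as soon as $\hat{Y}$ is merely rescaled, because correlation is invariant to affine transformations of $\hat{Y}$ while the variance ratio is not:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000)
y = 2 * x + rng.normal(size=1_000)

# OLS fit with an intercept: the two definitions coincide.
b, a = np.polyfit(x, y, 1)                      # slope, intercept
y_hat = a + b * x
r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2      # squared correlation
r2_var = 1 - np.var(y - y_hat) / np.var(y)      # variance-ratio definition

# Rescale the predictions: the correlation is unchanged,
# but the variance-based R^2 collapses.
y_scaled = 2 * y_hat
r2_corr_scaled = np.corrcoef(y, y_scaled)[0, 1] ** 2
r2_var_scaled = 1 - np.var(y - y_scaled) / np.var(y)
```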
Flawed as $R^{2}$ already seems to be for linear regression, it is generally considered a big no-no for any non-linear model.
The question is: why use this odd $R^{2}$ measure at all? Why not just use the correlation $\rho(Y, \hat{Y})$? It has the same interpretation, with $0$ meaning a rubbish model and $1$ a perfect one; it can be applied regardless of how $\hat{Y}$ was generated (by a linear or non-linear model); and it is simple and probabilistically grounded (being a scaled covariance).
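To illustrate the claimed interpretation (a small sketch; the sine model and the noise predictor are contrived examples of my own), $\rho(Y, \hat{Y})$ can be computed for any predictor, linear or not:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)

good = np.sin(x)                 # predictions from a good non-linear model
rubbish = rng.normal(size=500)   # predictions unrelated to y

rho_good = np.corrcoef(y, good)[0, 1]         # close to 1
rho_rubbish = np.corrcoef(y, rubbish)[0, 1]   # close to 0
```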
I believe that the answer is mainly tradition. There have been numerous attempts to generalize the notion of $R^2$ to non-linear models, especially for the logistic model; there are at least $8$ different ways of defining $R^2$ in logistic regression. There is hardly any objective need for $R^2$ even in the linear model, and the measure is surely irrelevant for models where the orthogonal decomposition $SST = SS_{Res} + SS_{Reg}$ does not hold; nevertheless, the sheer popularity of this measure keeps driving this kind of research.
Regarding the measure itself, it is defined as $$ R ^ 2 = 1 - \frac{S ^2_{\epsilon|X}}{S ^ 2_Y}, $$
where the adjusted $R ^ 2$ uses the unbiased estimator of $ \sigma ^2 _{\epsilon |X}$ and the non-adjusted version the biased one. Anyway, asymptotically it does not matter, as both converge (in probability) to $$ 1 - \frac{\sigma ^2 _{\epsilon|X}}{\sigma ^2_Y} . $$
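The shrinking gap between the two versions is easy to see numerically. A sketch (the data-generating process and the function name `r2_pair` are my own toy choices) comparing a small and a large sample:

```python
import numpy as np

def r2_pair(n, p=3, seed=0):
    """Return (unadjusted, adjusted) R^2 for an OLS fit with p predictors.

    Toy data: y is the sum of p standard-normal predictors plus unit noise,
    so the population R^2 is p / (p + 1).
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    y = X.sum(axis=1) + rng.normal(size=n)
    Xd = np.column_stack([np.ones(n), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid.var() / y.var()                   # biased (plug-in) variances
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)    # unbiased variance estimates
    return r2, r2_adj

small = r2_pair(30)       # noticeable gap between the two versions
big = r2_pair(30_000)     # gap is negligible; both near 3/4
```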
Whether you like this parameter or not, it can be a useful measure of the proportion of "explained" variance, since $\sigma ^ 2_{\epsilon|X} / \sigma ^2_{Y}$ is the proportion of "unexplained" variance of $Y$: namely, the proportion of variance attributable to random noise or other unmodeled factors.
To sum it up: IMHO, there is no doubt about the problematic properties of $ R ^ 2 $, and your proposed measure is no worse. However, $ R ^ 2 $ is traditionally a very common and intuitive statistic, and with all its flaws, for linear models with an intercept term it has theoretical meaning and thus may still be a useful statistic for basic model assessment.