I know that the variance measures the dispersion of an estimator around its mean, i.e. $\sigma^2 = E[(X - \mu)^2]$ (the second central moment, i.e. the second moment about the mean).
But I'm not getting the meaning of the definition below:
The mean squared error measures the dispersion around the true value of the parameter being estimated. If the estimator is unbiased then both are identical.
I know that both the variance and the MSE are related to second moments, but I don't see the actual difference between them. Can anybody explain the basic difference between them in simple language?
The variance measures how far a set of numbers is spread out, whereas the MSE measures the average of the squared "errors", that is, the differences between the estimator and what is being estimated. The MSE of an estimator $\hat{\theta}$ of an unknown parameter $\theta$ is defined as $E[(\hat{\theta}-\theta)^2]$.
The MSE is the second moment (about the origin) of the error, which is why it incorporates both the variance of the estimator and its bias (the bias being $E(\hat{\theta})-\theta$):
$$\operatorname{MSE}(\hat{\theta}) = E[(\hat{\theta}-\theta)^2] = \operatorname{Var}(\hat{\theta}) + \bigl(E(\hat{\theta})-\theta\bigr)^2.$$
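You can check this decomposition numerically. A minimal Monte Carlo sketch (the distribution, sample size, and the deliberately biased "shrunk mean" estimator are all just illustrative choices, not anything from the question): the empirical variance measures spread around the estimator's own average, the empirical MSE measures spread around the true parameter, and the two differ by exactly the squared bias.

```python
import random
import statistics

random.seed(0)

# Illustrative setup: estimate the mean (theta = 1) of an Exponential(1)
# distribution with a deliberately biased estimator, sum(x)/(n+1), which
# shrinks the sample mean toward zero.
true_theta = 1.0
n = 20             # sample size per simulated experiment
trials = 100_000   # number of simulated experiments

estimates = []
for _ in range(trials):
    sample = [random.expovariate(1.0) for _ in range(n)]
    estimates.append(sum(sample) / (n + 1))  # biased estimator of the mean

mean_est = statistics.fmean(estimates)
# Variance: dispersion of the estimates around their OWN mean.
variance = statistics.fmean((e - mean_est) ** 2 for e in estimates)
# Bias: how far the estimator's mean sits from the true parameter.
bias = mean_est - true_theta
# MSE: dispersion of the estimates around the TRUE parameter.
mse = statistics.fmean((e - true_theta) ** 2 for e in estimates)

print(f"variance         = {variance:.5f}")
print(f"bias^2           = {bias**2:.5f}")
print(f"MSE              = {mse:.5f}")
print(f"variance + bias^2 = {variance + bias**2:.5f}")
```

For these empirical moments the identity holds exactly (up to floating-point error), so the last two printed numbers agree; with an unbiased estimator the bias term would vanish and variance and MSE would coincide, just as the quoted definition says.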
In other words, the variance measures only the dispersion of the estimator's values around their own mean, while the MSE measures how far those values are from the true value of the parameter. The MSE is a comparison of the estimator with the true parameter, as it were. That's the difference.
Edit: I'll use your example. Suppose we have a bull's-eye whose target is the mean of the estimator: the variance measures how far the arrows land from that target. Now suppose we have another bull's-eye, where this time the target is the true parameter: the MSE measures how far the arrows (estimates) land from this target. In the first case we measure only the dispersion of the estimator's values with respect to its own mean. In the second case we measure the error made when estimating the parameter, i.e. we compare the estimates with the true parameter (which is why we want estimators with both variance and MSE as small as possible). Don't confuse the mean of an estimator with the true value of the parameter.