I just started a statistical inference course and am studying a topic called consistency. I am not able to understand this line: "Consistency is a property of a sequence of estimators rather than one point estimator." If anyone could explain it with an example, that would be helpful.
Reasoning behind the statement "Consistency is a property of a sequence of estimators rather than one point estimator"
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
Typically we consider drawing independent samples from some fixed parametric distribution and try to estimate one of the parameters. As the sample size grows, we hope our estimator converges to the true value of the parameter. This is what consistency is about: a sequence of estimators $\widehat \theta_n$ for the parameter $\theta_0$ is consistent if $\widehat \theta_n \to \theta_0$ as $n \to \infty$. (Formally the convergence is in probability: for every $\varepsilon > 0$, $\mathbb{P}(|\widehat \theta_n - \theta_0| > \varepsilon) \to 0$.) Note that the definition involves the whole sequence $\widehat \theta_1, \widehat \theta_2, \ldots$; it says nothing about any single estimator in isolation, which is exactly what the quoted statement means.
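A quick simulation makes this concrete. The sketch below (a hypothetical setup: normal data with true mean $\mu = 2.5$) tracks the sample mean, one estimator for each sample size $n$, and shows it settling near the true value as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.5  # true parameter value (hypothetical choice for illustration)

# The sample mean X_bar_n is a *sequence* of estimators, one for each n.
# Consistency says X_bar_n converges to mu as n grows.
for n in (10, 1_000, 100_000):
    sample = rng.normal(loc=mu, scale=1.0, size=n)
    print(f"n = {n:>7}: sample mean = {sample.mean():.4f}")
```

Each printed value is one member of the sequence $\widehat \theta_n$; the spread around $\mu$ shrinks like $1/\sqrt{n}$.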
Consistency stands in contrast to unbiasedness, which says $\mathbb{E}[\widehat \theta_n]=\theta_0$ for each fixed $n$. In practice unbiasedness often implies consistency (because if a sequence of random variables with a fixed expected value converges to any deterministic value, it can only be that expected value), but this isn't necessarily the case, as one can see by considering estimators that simply discard most of the information they're given: for instance, $\widehat \theta_n = X_1$, which uses only the first observation, is unbiased for the mean but never concentrates around it as $n$ grows.
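The "discard the data" estimator is easy to check by simulation. This sketch repeatedly computes $\widehat \theta_n = X_1$ (which depends only on the first draw, so only $X_1$ needs to be simulated) and shows that its average is near $\mu$ (unbiased) while its variance does not shrink with $n$ (not consistent); the numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.5      # true mean (hypothetical)
reps = 10_000  # number of independent repetitions

for n in (10, 10_000):
    # theta_hat = X_1 ignores all but the first observation, so we
    # simulate only X_1 for each of the `reps` datasets of size n.
    estimates = rng.normal(loc=mu, scale=1.0, size=reps)
    print(f"n = {n:>6}: mean = {estimates.mean():.3f}, "
          f"var = {estimates.var():.3f}")
```

The mean of the estimates stays near $\mu$ for every $n$, but the variance stays near 1, so the distribution of $\widehat \theta_n$ never tightens: unbiased, yet inconsistent.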
Two classical examples showing that the two properties are distinct are the sample mean (which is both consistent and unbiased) and the sample standard deviation (which is consistent but in general biased).