So we're currently studying estimators, and we just proved the Cramér–Rao inequality and that when it holds with equality, the estimator we have is the unique MVUE.
All of this to me just sounds like fancy jargon. I don't understand:
the point of an estimator
why, given an estimator $\hat{\tau}$ for $\tau$, to prove it's unbiased we need to show that $E[\hat{\tau}] = \tau$
what the practical applications of an estimator are, and why knowing that it's biased, unbiased, or minimum-variance helps us
My professors basically just try to delve into the theory with no explanation of why all of this is important on a larger scale. Any help would be appreciated!
1) An estimator is basically an attempt to calculate the value of a parameter from the data. For example, an estimator of the mean is a function that uses the data to approximate the mean of a random variable. One such estimator is the sample mean, given by $$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i$$ where the $x_i$ are random variables (the observations) and $n$ is the number of them. Estimators are usually marked with a little hat over the parameter of interest, such as $\hat{\theta}$.
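To make this concrete, here's a small simulation sketch (the Normal distribution, seed, and sample size are arbitrary choices of mine, not from the coursework): we draw data from a distribution whose mean we pretend not to know, then apply the sample-mean estimator to it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw n observations from a distribution whose true mean is 5.0.
true_mean = 5.0
n = 10_000
x = rng.normal(loc=true_mean, scale=2.0, size=n)

# The sample-mean estimator: mu_hat = (1/n) * sum(x_i).
mu_hat = x.mean()
print(mu_hat)  # close to 5.0, but not exactly 5.0
```

The estimator never recovers the parameter exactly from a finite sample; the theory you're studying is about quantifying how good such approximations are.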
2) That is just the definition of unbiasedness. The formula asks whether the estimator $\hat{\theta}$ we created has expectation equal to the actual parameter $\theta$, i.e. whether "on average" (which is what expectation captures) our estimator equals the value of the actual parameter. This is quite a useful property to have, although sometimes biased estimators are preferred because they are stronger in other respects (for instance, a biased estimator can have a smaller mean squared error).
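You can see bias empirically with the classic example of the sample variance: dividing by $n$ gives a biased estimator (its expectation is $\frac{n-1}{n}\sigma^2$), while dividing by $n-1$ gives an unbiased one. A simulation sketch (distribution and repetition counts are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0   # variance of N(0, 2^2)
n = 5            # small samples make the bias visible
reps = 200_000   # many repeated samples to approximate the expectation

samples = rng.normal(0.0, 2.0, size=(reps, n))

# Biased estimator: divide by n (ddof=0). E[.] = (n-1)/n * sigma^2.
var_biased = samples.var(axis=1, ddof=0).mean()
# Unbiased estimator: divide by n - 1 (ddof=1, Bessel's correction).
var_unbiased = samples.var(axis=1, ddof=1).mean()

print(var_biased)    # roughly 3.2 = 4 * (n-1)/n, systematically too small
print(var_unbiased)  # roughly 4.0, matches the true variance on average
```

Averaging the estimator over many independent samples approximates its expectation, which is exactly what the definition $E[\hat{\tau}] = \tau$ is about.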
3) Estimation is practically one of the most important tasks in statistics: we need estimators to infer unknown quantities and predict values from data. As for why we care about biased, unbiased, and so on: we need some way to describe properties of estimators so that we can say when one estimator is better than another. The MVUE is an example of an estimator that is optimal with regard to a particular criterion (minimum variance among all unbiased estimators), and thus "best" in that criterion's sense.
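As an illustration of comparing estimators by a criterion, consider that the sample mean and the sample median are both unbiased estimators of the mean of a Normal distribution, but the sample mean has smaller variance (it is in fact the MVUE there). A sketch under those assumptions (the parameters below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 0.0
n = 25
reps = 100_000

samples = rng.normal(true_mean, 1.0, size=(reps, n))

# Two unbiased estimators of a Normal mean:
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Compare them by mean squared error around the true value.
mse_mean = np.mean((means - true_mean) ** 2)
mse_median = np.mean((medians - true_mean) ** 2)

print(mse_mean < mse_median)  # True: the sample mean is more efficient
```

Both estimators are "correct on average," yet one is demonstrably better by the variance criterion; that is the kind of comparison the Cramér–Rao bound and the MVUE concept formalize.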