I'm a master's student in mathematics taking my first actual probability and stats class, and I have the misfortune of having a professor who is sloppy (his homeworks are riddled with typos which sometimes change the question) and unresponsive.
One question on our exam review sheet is, I think, in this camp. I'm trying to figure out what the question is trying to ask. Because my background in stats is weak at best, I can't quite guess with confidence what this is supposed to be.
Let $X_1, X_2, \cdots, X_n$ be a random sample of size $n$. Show that $\sum x_i^2$ is a biased estimator of $\mu^2$.
The reason I'm confused is that it doesn't seem to make sense for $\sum x_i^2$ to be a reasonable estimator of $\mu^2$. You would expect it to be $\frac{1}{n}\sum x_i^2$, right?
If my thinking is correct, then the process would be to show that the expected value of the estimator is not equal to $\mu^2$, correct?
I am also unsure whether the $x_i$ in the sum refer to the sample values. I suspect they must, or the question wouldn't make much sense.
In which case my proof should go something like
$\mathbb{E}(\frac{1}{n} \sum x_i^2) = \frac{1}{n} \sum \mathbb{E}(x_i^2) = \frac{1}{n}\sum_i \sum_x x^2 p(x) = \frac{1}{n} \sum _i \mu^2 = \mu^2.$
Since we are supposed to show the estimator is biased, I guess this can't be right. But this also suggests to me that I don't really understand the sample mean very well.
If someone could help clarify my points of confusion, I would appreciate it.
$\mu = \sum_x x\, p(x)$, so $\mu^2 = \left(\sum_x x\, p(x)\right)^2 \neq \sum_x x^2 p(x)$ in general. So the third equality in your proof is wrong: $\sum_x x^2 p(x)$ is $\mathbb{E}(X_i^2)$, not $\mu^2$!
Instead, write
$\mathbb{E}(X_i^2)= \mathbb{E}(X_i)^2 +\left(\mathbb{E}(X_i^2) -\mathbb{E}(X_i)^2\right),$
where $\mu^2 = \mathbb{E}(X_i)^2$ and $\sigma^2 = \mathbb{E}(X_i^2) -\mathbb{E}(X_i)^2$ (thanks to the König–Huygens formula for the variance),
so $\mathbb{E}(X_i^2) = \mu^2 + \sigma^2$.
Since the start of your proof is right, we finish with $\mathbb{E}\left(\frac{1}{n} \sum X_i^{2}\right) = \frac{1}{n}\sum \mathbb{E}(X_i^2) = \mu^2 + \sigma^2 \neq \mu^2$ whenever $\sigma^2 > 0$, so $\frac{1}{n}\sum X_i^2$ is biased. The same computation gives $\mathbb{E}\left(\sum X_i^{2}\right) = n(\mu^2 + \sigma^2)$, so the estimator exactly as written in the exercise is biased as well.
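If you want to see the bias numerically, here is a quick simulation sketch. The distribution and the values $\mu = 2$, $\sigma = 3$, $n = 10$ are my own illustrative choices, not part of the exercise: the empirical mean of $\frac{1}{n}\sum x_i^2$ should land near $\mu^2 + \sigma^2 = 13$, far from $\mu^2 = 4$.

```python
import random
import statistics

random.seed(0)

# Illustrative (not from the exercise): X_i ~ Normal(mu, sigma^2),
# so mu^2 = 4 while E[(1/n) * sum(X_i^2)] = mu^2 + sigma^2 = 13.
mu, sigma, n = 2.0, 3.0, 10
trials = 200_000

def estimate():
    # Draw one sample of size n and compute (1/n) * sum(x_i^2).
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    return sum(x * x for x in xs) / n

# Average the estimator over many samples to approximate its expectation.
mean_est = statistics.fmean(estimate() for _ in range(trials))
print(mean_est)  # close to mu^2 + sigma^2 = 13, not mu^2 = 4
```

The average sits near $13$, matching the computation above: the estimator systematically overshoots $\mu^2$ by $\sigma^2$.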