Hi guys, I need some help with these two questions. I've gone over my lecture notes, but I'm still struggling to get my head around the topic. I hope I've formatted the questions properly using MathJax.
Question 1: It is suggested that the value $k\overline X$, where $\overline X = n^{-1}\sum_{i=1}^n X_i$, be used to estimate the parameter $\theta$ of a sequence $X_i$, $i = 1,\dots,n$, of i.i.d. random variables uniformly distributed on the interval $(0, \theta)$. For what value of $k$ is this an unbiased estimator?
Solution: I was not able to start this question.
Question 2: True or False: We may not always prefer an unbiased estimator for a parameter $\theta \in \Theta$, but it will generally be easier to find an optimal unbiased estimator, in terms of minimising mean squared error, than to find an optimal estimator from within the class of all possible estimators for $\theta$.
My Solution: From what I have understood from the lecture notes, my answer is True, since we prefer to minimise MSE.
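As a side note on the first clause of Question 2 (this simulation is my own sketch, not from the lectures): in the uniform model of Question 1 there is a *biased* estimator, $\frac{n+2}{n+1}X_{(n)}$ (a shrunken sample maximum), whose MSE is smaller than that of the unbiased $2\overline X$, which is one reason we may not always prefer an unbiased estimator.

```python
import random

# Monte Carlo comparison (illustrative sketch): MSE of the unbiased
# estimator 2*xbar versus the biased estimator ((n+2)/(n+1)) * max(X)
# when X_1, ..., X_n are i.i.d. Uniform(0, theta).
def compare_mse(theta=3.0, n=10, reps=50_000, seed=1):
    rng = random.Random(seed)
    se_unbiased = se_biased = 0.0
    for _ in range(reps):
        xs = [rng.uniform(0, theta) for _ in range(n)]
        est_u = 2 * sum(xs) / n                  # unbiased, MSE = theta^2/(3n)
        est_b = (n + 2) / (n + 1) * max(xs)      # biased, MSE = theta^2/(n+1)^2
        se_unbiased += (est_u - theta) ** 2
        se_biased += (est_b - theta) ** 2
    return se_unbiased / reps, se_biased / reps

mse_u, mse_b = compare_mse()
print(mse_u, mse_b)  # the biased estimator's MSE is markedly smaller
```

With $\theta = 3$ and $n = 10$ the theoretical values are $\theta^2/(3n) = 0.3$ for $2\overline X$ and $\theta^2/(n+1)^2 \approx 0.074$ for the shrunken maximum, so the simulation should show the biased estimator winning.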
Appreciate your help!
If $X \sim \mathsf{Unif}(0,\theta),$ then $E(X) = \theta/2.$ Then
$$E(2\bar X) = \frac 2n E\left(\sum_{i=1}^n X_i\right) = \frac 2n\sum_{i=1}^n E(X_i) = \frac 2n \left(\frac \theta 2 + \frac \theta 2 + \cdots + \frac \theta 2\right) = \frac 2 n\left(n\frac \theta 2\right) = \theta,$$ where the fourth member of the equation has $n$ terms inside the parentheses. Thus $E(2\bar X) = \theta$ and $\hat \theta = 2\bar X$ is an unbiased estimator of $\theta.$ Hence the required value is $k = 2.$
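The computation above is easy to sanity-check by simulation (a quick sketch of my own, not part of the derivation): draw many samples of size $n$ from $\mathsf{Unif}(0,\theta)$ and average the estimator $2\bar X$; the result should sit very close to $\theta$.

```python
import random

# Monte Carlo check of unbiasedness: the long-run average of the
# estimator 2*xbar over many Uniform(0, theta) samples should be
# approximately theta.
def simulate_mean_2xbar(theta=3.0, n=10, reps=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.uniform(0, theta) for _ in range(n)) / n
        total += 2 * xbar
    return total / reps

print(simulate_mean_2xbar())  # close to theta = 3
```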