Show that $\hat{\theta}_2 = Y_{(n)} - \frac{n}{n+1}$ is an unbiased estimator of $\theta$.


Let $Y_1, Y_2, \ldots, Y_n$ denote a random sample from the uniform distribution on the interval $(\theta, \theta + 1)$. Let $$ \hat{\theta}_2 = Y_{(n)} - \frac{n}{n+1}. $$

Show that $\hat{\theta}_2$ is an unbiased estimator of $\theta$.

I am told that the density of $Y_{(n)}$ is $g_n(y) = n(y-\theta)^{n-1}$, $\theta \le y \le \theta + 1$. My question is: how does one find this density?

Then, the second part of the solution states that $E(\hat{\theta}_2) = E(Y_{(n)}) - \frac{n}{n+1} = \theta$.

However, how does this follow from the density above?
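For context, the derivations in the answers below show that
$$
\text{E}\left[Y_{(n)}\right]=\theta+\frac{n}{n+1},
$$
so subtracting the constant $\frac{n}{n+1}$ leaves exactly $\theta$.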

Also, I am trying to compare the efficiency of $\hat{\theta}_1$ relative to $\hat{\theta}_2$. I am told that $\text{Var}[\hat{\theta}_2] = \text{Var}(Y_{(n)}) = \frac{n}{(n+2)(n+1)^2}$, but I am wondering how this variance is calculated.

$$E(Y_{(n)}^2) = \int_\theta^{\theta+1} ny^2(y-\theta)^{n-1}\,dy = n\left[\left.y^2\frac{(y-\theta)^n}{n} \right|_\theta^{\theta+1} - \frac{2}{n} \int_\theta^{\theta+1} y(y-\theta)^n \,dy\right]$$

$$= \left.(\theta+1)^2 - 2\left(y\frac{(y-\theta)^{n+1}}{n+1} \right|_\theta^{\theta+1} - \int_\theta^{\theta+1} \frac{(y-\theta)^{n+1}}{n+1} dy\right) = (\theta+1)^2 -2 \left(\frac{\theta+1}{n+1} - \left.\frac{(y-\theta)^{n+2}}{(n+1)(n+2)}\right|_\theta^{\theta+1}\right)$$

$$= (\theta+1)^2- \frac{2(\theta +1)}{n+1} + \frac{2}{(n+1)(n+2)}$$

I have checked the LaTeX for the middle two lines but could not figure out what went wrong, so please focus on my answer. If I compute $E(Y_{(n)}^2) - E(Y_{(n)})^2$ with WolframAlpha (see https://www.wolframalpha.com/input/?i=%28x%2B1%29%5E2-2%28%28x%2B1%29%2F%28n%2B1%29%29+%2B+1%2F%28%28n%2B1%29%28n%2B2%29%29+-+%28%28x%2B1%29+-+1%2F%28n%2B1%29%29%5E2), the result is different from the correct answer $\text{Var}[\hat{\theta}_2] = \text{Var}(Y_{(n)}) = \frac{n}{(n+2)(n+1)^2}$. Could anyone please check why?
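For reference, here is the subtraction carried through with the constant term written as $\frac{2}{(n+1)(n+2)}$ (note that the WolframAlpha input above uses $\frac{1}{(n+1)(n+2)}$ for that term instead, which is where the discrepancy arises):
$$
\begin{align}
\text{Var}(Y_{(n)}) &= E(Y_{(n)}^2) - E(Y_{(n)})^2 \\
&= (\theta+1)^2 - \frac{2(\theta+1)}{n+1} + \frac{2}{(n+1)(n+2)} - \left((\theta+1) - \frac{1}{n+1}\right)^2 \\
&= \frac{2}{(n+1)(n+2)} - \frac{1}{(n+1)^2} \\
&= \frac{2(n+1) - (n+2)}{(n+1)^2(n+2)} = \frac{n}{(n+2)(n+1)^2}.
\end{align}
$$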

Best answer:

We have $Y_i\sim\mathcal{U}(\theta,\theta+1)$, so for $\theta\le y\le\theta+1$ the CDF of $Y_i$ is
$$
G_{Y_i}(y)=\Pr[Y_i\le y]=\frac{y-\theta}{(\theta+1)-\theta}=y-\theta.
$$
Here, $Y_{(n)}$ is the $n$-th order statistic, that is, $Y_{(n)}=\max[Y_1,\cdots, Y_n]$. Note that $Y_{(n)}\le y$ is equivalent to $Y_i\le y$ for $i=1,2,\cdots,n$. Hence, for $\theta< y<\theta+1$, the fact that $Y_1,Y_2,\cdots, Y_n$ are i.i.d. implies
$$
G_{Y_{(n)}}(y)=\Pr[Y_{(n)}\le y]=\Pr[Y_1\le y,Y_2\le y,\cdots, Y_n\le y]=(\Pr[Y_i\le y])^n=\left(y-\theta\right)^{n}.
$$
The PDF of $Y_{(n)}$ is
$$
g_{Y_{(n)}}(y)=\frac{d}{dy}G_{Y_{(n)}}(y)=\frac{d}{dy}(y-\theta)^n=n(y-\theta)^{n-1}.
$$
The expected value of $Y_{(n)}$ is
$$
\begin{align}
\text{E}\left[Y_{(n)}\right]&=\int_{y=\theta}^{\theta+1}yg_{Y_{(n)}}(y)\ dy\\
&=\int_{y=\theta}^{\theta+1}yn(y-\theta)^{n-1}\ dy\\
&=n\int_{y=\theta}^{\theta+1}y(y-\theta)^{n-1}\ dy.
\end{align}
$$
The integral above can be solved using IBP (see the addendum below) or a reduction formula:
$$
\begin{align}
\text{E}\left[Y_{(n)}\right]&=n\left[\frac{y(y-\theta)^n}{n+1}+\frac{\theta(y-\theta)^n}{n(n+1)}\right]_{y=\theta}^{\theta+1}\\
&=\frac{n(\theta+1)}{n+1}+\frac{\theta}{n+1}\\
&=\theta+\frac{n}{n+1}.
\end{align}
$$


ADDENDUM:

Using IBP, let $u=y\Rightarrow du=dy$ and $dv=(y-\theta)^{n-1}\ dy\Rightarrow v=\dfrac{(y-\theta)^n}{n}$. Hence $$ \begin{align} n\int_{y=\theta}^{\theta+1}y(y-\theta)^{n-1}\ dy&=n\left[\left.\dfrac{y(y-\theta)^n}{n}\right|_{y=\theta}^{\theta+1}-\int_{y=\theta}^{\theta+1}\dfrac{(y-\theta)^n}{n} dy\right]\\ &=(\theta+1)-\int_{y=\theta}^{\theta+1}(y-\theta)^{n}\ dy\\ &=(\theta+1)-\left.\dfrac{(y-\theta)^{n+1}}{n+1}\right|_{y=\theta}^{\theta+1}\\ &=(\theta+1)-\dfrac{1}{n+1} \end{align} $$
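Equivalently, as a quick cross-check, substituting $u=y-\theta$ turns the integral into moments over $(0,1)$:
$$
n\int_{y=\theta}^{\theta+1}y(y-\theta)^{n-1}\,dy=n\int_{0}^{1}(u+\theta)\,u^{n-1}\,du=n\left(\frac{1}{n+1}+\frac{\theta}{n}\right)=\frac{n}{n+1}+\theta,
$$
which agrees with $(\theta+1)-\dfrac{1}{n+1}$.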

Variance can be evaluated as follows: $$ \text{Var}\left[\hat{\theta}_{2}\right]=\text{Var}\left[Y_{(n)}-\frac{n}{n+1}\right]=\text{Var}\left[Y_{(n)}\right]=\text{E}\left[Y_{(n)}^2\right]-\left(\text{E}\left[Y_{(n)}\right]\right)^2. $$ Since the term $\dfrac{n}{n+1}$ is a constant, it can be ignored: variance is invariant with respect to changes in a location parameter, i.e. adding a constant to all values of the variable leaves the variance unchanged.
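Carrying this through with the same substitution $u=y-\theta$ as above recovers the variance quoted in the question:
$$
\begin{align}
\text{E}\left[Y_{(n)}^2\right]&=n\int_{0}^{1}(u+\theta)^2u^{n-1}\,du=n\left(\frac{1}{n+2}+\frac{2\theta}{n+1}+\frac{\theta^2}{n}\right)=\frac{n}{n+2}+\frac{2n\theta}{n+1}+\theta^2,\\
\text{Var}\left[Y_{(n)}\right]&=\frac{n}{n+2}+\frac{2n\theta}{n+1}+\theta^2-\left(\theta+\frac{n}{n+1}\right)^2=\frac{n}{n+2}-\frac{n^2}{(n+1)^2}=\frac{n}{(n+2)(n+1)^2}.
\end{align}
$$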


Thus $$ \begin{align} \text{E}\left[\hat{\theta}_{2}\right]&=\text{E}\left[Y_{(n)}-\frac{n}{n+1}\right]\\ &=\text{E}\left[Y_{(n)}\right]-\frac{n}{n+1}\\ &=\theta+\frac{n}{n+1}-\frac{n}{n+1}\\ &=\theta. \end{align} $$ Since $\text{E}\left[\hat{\theta}_{2}\right]=\theta$, we conclude that $\hat{\theta}_{2}$ is an unbiased estimator of $\theta$.



$$\Large\color{blue}{\text{# }\mathbb{Q.E.D.}\text{ #}}$$

Another answer:

Note that $Y_{(n)}=\max\{Y_1,Y_2,...,Y_n\}$. Given i.i.d. observations, the CDF of $Y_{(n)}$ for $\theta\le y\le\theta+1$ is given by $$ G_{Y_{(n)}}(y)=\mathsf{P}(\max\{Y_1,Y_2,...,Y_n\}\le y)=[G_{Y}(y)]^n=(y-\theta)^n, $$ and, since $g_{Y}(y)=1$ on $(\theta,\theta+1)$, the density of $Y_{(n)}$ is $$ g_{Y_{(n)}}(y)=\frac{d}{dy}[G_{Y}(y)]^n=n[G_{Y}(y)]^{n-1}g_{Y}(y)=n(y-\theta)^{n-1}. $$

Thus, \begin{align} \mathsf{E}[\hat{\theta}_2]& = \int_\theta^{\theta+1}yn(y-\theta)^{n-1}dy-\frac{n}{n+1} \\ &=\frac{\theta(n+1)+n}{n+1}-\frac{n}{n+1}=\theta. \end{align}
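As a quick numerical sanity check (not part of either derivation), here is a minimal simulation sketch in Python with NumPy; the values $\theta=2$, $n=5$, and the number of replications are illustrative assumptions, not taken from the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000  # illustrative values (assumptions)

# reps independent samples of size n from Uniform(theta, theta + 1)
Y = rng.uniform(theta, theta + 1, size=(reps, n))

# theta_hat_2 = Y_(n) - n/(n+1), where Y_(n) is the sample maximum
theta_hat_2 = Y.max(axis=1) - n / (n + 1)

print("simulated mean: ", theta_hat_2.mean())           # should be close to theta = 2
print("simulated var:  ", theta_hat_2.var())            # should be close to n/((n+2)(n+1)^2)
print("theoretical var:", n / ((n + 2) * (n + 1) ** 2)) # = 5/252 ≈ 0.0198
```

For $n=5$ the theoretical variance is $\frac{5}{252}\approx 0.0198$, and both the simulated mean and variance should land close to their theoretical values.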