Find unbiased estimator for $\theta$ when PDF of X is $e^{\theta-X}$?


I know the MLE for $\theta$ is $\min\{X_i\}$, but I can't check whether it is unbiased because I don't know how to evaluate (with $U = \min\{X_i\}$) $$\int_\theta^\infty u\, f_U(u)\, du = \int_\theta^\infty u\, n e^{\theta-u}\left(e^{\theta-u}\right)^{n-1} du.$$

BEST ANSWER

First of all, the correct PDF should be specified: $$f_X(x) = e^{\theta-x} \mathbb 1(x > \theta) = \begin{cases} e^{\theta - x}, & x > \theta \\ 0, & x \le \theta. \end{cases}$$ This is a member of the location-scale family of exponential distributions with location parameter $\theta$ and scale parameter $1$; hence it has mean $\operatorname{E}[X] = \theta + 1$ and variance $\operatorname{Var}[X] = 1$.
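As a quick sanity check of these moments, here is a small Monte Carlo sketch (the choice $\theta = 2$ and the sample size are arbitrary): if $X = \theta + E$ with $E \sim \text{Exponential}(1)$, then $X$ has the PDF above, so the sample mean should be close to $\theta + 1$ and the sample variance close to $1$.

```python
import numpy as np

# Simulate X = theta + E, E ~ Exponential(1); theta = 2.0 is an arbitrary choice.
rng = np.random.default_rng(0)
theta = 2.0
x = theta + rng.exponential(scale=1.0, size=1_000_000)

print(x.mean())  # should be close to theta + 1 = 3.0
print(x.var())   # should be close to 1.0
```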

Assuming (correctly) that the MLE of a random IID sample $X_1, \ldots, X_n$ drawn from the above distribution is $$\hat \theta = \min X_i = X_{(1)},$$ we are then tasked to determine if $\hat\theta$ is unbiased; and if not, to find an unbiased estimator of $\theta$.

To this end, it is immediately obvious that $\hat\theta$ cannot be unbiased: the PDF guarantees that every observation, and hence $\min X_i$, is strictly greater than $\theta$, so its expectation is also strictly greater than $\theta$. To quantify the bias, we determine the density of the first order statistic; i.e., consider $$F_{X_{(1)}}(x) = \Pr[X_{(1)} \le x] = 1 - \Pr[X_{(1)} > x] = 1 - \Pr[X_1 > x, X_2 > x, \ldots, X_n > x],$$ since the minimum of the observations exceeds a fixed $x$ if and only if every observation does. For $x > \theta$ each observation has survival function $\Pr[X_i > x] = e^{\theta - x}$, so by independence $$F_{X_{(1)}}(x) = 1 - \prod_{i=1}^n \Pr[X_i > x] = 1 - e^{n(\theta - x)}, \qquad x > \theta,$$ and $F_{X_{(1)}}(x) = 0$ for $x \le \theta$. Thus the density is $$f_{X_{(1)}}(x) = ne^{n(\theta-x)} \mathbb 1(x > \theta),$$ and the expectation is $$\operatorname{E}[\hat\theta] = \operatorname{E}[X_{(1)}] = \int_{x=\theta}^\infty n x e^{n(\theta-x)} \, dx = \theta + \frac{1}{n} > \theta,$$ confirming our earlier reasoning. Having quantified the bias, we obtain an unbiased estimator by subtracting it: $$\tilde \theta = \hat \theta - \frac{1}{n},$$ and indeed $\operatorname{E}[\tilde \theta] = \operatorname{E}[\hat\theta] - \frac{1}{n} = \theta$ since $1/n$ is constant.
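The bias computation can also be checked numerically. The sketch below (with arbitrary choices $\theta = 2$, $n = 5$) repeatedly draws samples of size $n$, so the average of $\min X_i$ should sit near $\theta + 1/n$, while the average of $\min X_i - 1/n$ should sit near $\theta$.

```python
import numpy as np

# Monte Carlo check of the bias of the MLE; theta, n, reps are arbitrary choices.
rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5, 200_000

# Each row is one IID sample of size n from the shifted exponential.
samples = theta + rng.exponential(scale=1.0, size=(reps, n))
mins = samples.min(axis=1)  # the MLE, computed once per replication

print(mins.mean())           # should be close to theta + 1/n = 2.2
print((mins - 1/n).mean())   # should be close to theta = 2.0
```

Note that the minimum of $n$ IID $\text{Exponential}(1)$ variables is $\text{Exponential}(n)$, which is exactly why the correction term is $1/n$.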