Maximum Likelihood Estimator where $x \ge \theta$


Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables from a population with density $$f(x;\theta) = \begin{cases} 2\theta^2/x^3, & x \ge \theta \\ 0, & \text{elsewhere.}\end{cases}$$ What is the maximum likelihood estimator of $\theta$? Is it unbiased?

So far I have calculated the log-likelihood to be $n\log 2 + 2n\log\theta - 3\sum_{i=1}^n \log x_i$. Differentiating this and setting it to zero does not make sense to me.

Thanks for any assistance

Using proper notation would help make the computation more understandable.

The density can be expressed with the use of an indicator function: $$f(x \mid \theta) = \frac{2\theta^2}{x^3} \mathbb{1}(x \ge \theta),$$ where $$\mathbb{1}(x \ge \theta) = \begin{cases}1, & x \ge \theta \\ 0, & x < \theta. \end{cases}$$ Now the joint density of the sample $\boldsymbol x = (x_1, x_2, \ldots, x_n)$ is simply $$f(\boldsymbol x \mid \theta) = \prod_{i=1}^n f(x_i \mid \theta) = (2\theta^2)^n \prod_{i=1}^n x_i^{-3} \mathbb{1}(x_i \ge \theta).$$ But observe that the product $$\prod_{i=1}^n \mathbb{1}(x_i \ge \theta) = \mathbb{1}(\min_i x_i \ge \theta) = \mathbb{1}(x_{(1)} \ge \theta);$$ that is to say, the product of the indicator functions is $1$ if and only if each observation $x_i$ is at least $\theta$, or equivalently, the smallest $x_i$ is at least $\theta$; this is more compactly written as the first order statistic $x_{(1)} = \min_i x_i$.

Therefore, a likelihood function for $\theta$ given the sample $\boldsymbol x$ is given by $$L(\theta \mid \boldsymbol x) = \theta^{2n} \mathbb{1}(x_{(1)} \ge \theta).$$ We need not retain the factors $2^n$ or $\prod x_i^{-3}$, because these are constant with respect to $\theta$ for a given fixed set of observations and sample size $n$.

The only remaining thing to do is to determine, for a fixed set of observations $\boldsymbol x$, what value of $\theta$ maximizes $L$. Clearly, as $\theta$ increases, the factor $\theta^{2n}$ also increases (since $n$ is a positive integer). But its value is strictly limited by the requirement that $\theta$ must not exceed the smallest observation $x_{(1)}$; otherwise, the value of the indicator function is zero and the likelihood of having observed our sample is also zero.
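This reasoning can be checked numerically. The following sketch (assuming NumPy, with a made-up true parameter and sample size) evaluates the log-likelihood on a grid: it increases in $\theta$ right up to $x_{(1)}$ and is $-\infty$ beyond it, so the maximizer is $\hat\theta = x_{(1)} = \min_i x_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 8          # hypothetical true parameter and sample size

# Inverse-CDF sampling: F(x) = 1 - theta^2/x^2 for x >= theta,
# so x = theta / sqrt(1 - u) with u ~ Uniform(0, 1).
u = rng.random(n)
x = theta_true / np.sqrt(1.0 - u)

def log_likelihood(theta, x):
    """log L(theta | x) = n log 2 + 2n log theta - 3 sum log x_i,
    valid only for 0 < theta <= min x_i."""
    if theta <= 0 or theta > x.min():
        return -np.inf              # indicator 1(x_(1) >= theta) is zero here
    n = len(x)
    return n * np.log(2) + 2 * n * np.log(theta) - 3 * np.sum(np.log(x))

# Evaluate on a grid whose right endpoint is min(x); the log-likelihood is
# increasing on (0, min(x)], so the argmax is exactly that endpoint.
grid = np.linspace(0.1, x.min(), 1000)
theta_hat = grid[np.argmax([log_likelihood(t, x) for t in grid])]
```

Since the likelihood is increasing in $\theta$ wherever it is positive, no calculus is needed: the constraint, not a zero derivative, determines the maximizer.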

Now, suppose $\hat\theta$ is the maximum likelihood estimator based on the above reasoning. Determine the distribution of this estimator for a fixed but unknown true value of the parameter $\theta$, and compute its expected value as a function of $\theta$. If $\mathrm{E}[\hat\theta] = \theta$, then $\hat \theta$ is unbiased. But we can already intuitively sense that $\hat\theta$ is necessarily biased, because our estimator can never be an underestimate of the true value of the parameter from which the sample was drawn: for instance, if our observed sample was $\boldsymbol x = (3, 2, 3, 5, 10)$, we immediately know without doing any computation that $\theta \le \hat\theta = 2$. We can never obtain a $\hat\theta$ that is smaller than the true parameter $\theta$, so its expectation cannot equal $\theta$.
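To make the bias concrete: since $F(x) = 1 - \theta^2/x^2$ for $x \ge \theta$, the minimum satisfies $P(x_{(1)} > x) = (\theta^2/x^2)^n$, so $\hat\theta = x_{(1)}$ has density $2n\theta^{2n}/x^{2n+1}$ on $x \ge \theta$ and $\mathrm{E}[\hat\theta] = \frac{2n}{2n-1}\theta > \theta$. A Monte Carlo sketch (assuming NumPy, with hypothetical $\theta$, $n$, and replication count) agrees with this exact value:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5, 200_000   # hypothetical parameter, sample size, replications

# Inverse-CDF draws from f(x | theta) = 2 theta^2 / x^3 on x >= theta.
u = rng.random((reps, n))
samples = theta / np.sqrt(1.0 - u)

theta_hat = samples.min(axis=1)    # MLE for each replication: the first order statistic

empirical_mean = theta_hat.mean()
exact_mean = 2 * n * theta / (2 * n - 1)   # E[theta_hat] from the density of the minimum
bias = exact_mean - theta                  # strictly positive: theta_hat overestimates
```

As a side note, the exact expectation shows that rescaling by $\frac{2n-1}{2n}$ produces an unbiased estimator $\frac{2n-1}{2n}\,x_{(1)}$.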