Derive the maximum likelihood equation when only $I[X_1>5], I[X_2 > 5], ... , I[X_n > 5]$ are observed.


I'm facing a problem that I'm not quite sure how to interpret or solve.

Let $X_1, X_2,..., X_n$ be i.i.d. exp($\beta$) random variables.

Suppose that the only observed information I have is $I[X_1>5], I[X_2 > 5], ... , I[X_n > 5]$.

From this, how can I find a maximum likelihood estimator for $\beta$ ?

I know how to derive it in the regular case, by taking the derivative of the log-likelihood function, but I'm not sure how to interpret this setup, let alone solve for the estimator. Any help would be greatly appreciated!


Hint: With $a\equiv|\{i:x_i\leq 5\}|$ and $b\equiv n-a$, define $$ L(\beta)=\prod_{i:x_i\leq 5}[1-\exp(-5/\beta)]\prod_{i:x_i>5}\exp(-5/\beta)=[1-\exp(-5/\beta)]^a\exp(-5b/\beta) $$ and maximize $L(\beta)$. As usual, $l(\beta)\equiv\log[L(\beta)]$ is more convenient to work with.
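As a concrete sanity check (not part of the hint itself), here is a short Python sketch that simulates the indicators and maximizes $l(\beta)$ numerically with a golden-section search. The true mean, sample size, and search interval are illustrative choices; at the maximizer, $\exp(-5/\hat\beta)$ should match the observed fraction of indicators equal to 1.

```python
import math
import random

random.seed(0)
beta_true = 4.0          # illustrative true mean
n = 1000

# Simulate exp(beta_true) draws, then keep only the indicators I[X_i > 5].
xs = [random.expovariate(1.0 / beta_true) for _ in range(n)]
b = sum(x > 5 for x in xs)   # count of observations above 5
a = n - b                    # count at or below 5

def neg_log_lik(beta):
    # l(beta) = a*log(1 - exp(-5/beta)) - 5*b/beta; negate for minimization
    return -(a * math.log(1.0 - math.exp(-5.0 / beta)) - 5.0 * b / beta)

# Golden-section search on a bracketing interval (assumes 0 < b < n,
# so the likelihood has an interior maximum).
lo, hi = 1e-3, 100.0
phi = (math.sqrt(5) - 1) / 2
while hi - lo > 1e-8:
    m1 = hi - phi * (hi - lo)
    m2 = lo + phi * (hi - lo)
    if neg_log_lik(m1) < neg_log_lik(m2):
        hi = m2
    else:
        lo = m1
beta_hat = (lo + hi) / 2

# At the optimum, exp(-5/beta_hat) equals the observed fraction b/n.
print(beta_hat, math.exp(-5.0 / beta_hat), b / n)
```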


Given the use of $\beta$, I assume the parameterization of the exponential distribution in terms of mean.

As you should know, the MLE is the parameter that maximizes $P(\text{obs} | \text{parameter})$.

As the observations are i.i.d., we can simply multiply the probabilities of the individual observations together to get the overall likelihood function. As these are binary variables, this is actually a thinly disguised binomial distribution. The overall likelihood is then just $P \propto p^k (1-p)^{n-k}$. The CDF of the exponential distribution gives $p = P(X_i \gt 5 \mid \beta) = \exp(-5/\beta)$.

Choosing the $\beta$ that maximizes $P$ can be done by choosing the $p$ that does.

For $k$ successes (i.e. observations > 5) in $n$ trials, the MLE for a binomial is a standard exercise, with the result that $p = k/n$.
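For completeness, that standard exercise goes as follows: with $\ell(p) = k\log p + (n-k)\log(1-p)$,
$$ \ell'(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0 \;\Longrightarrow\; k(1-p) = (n-k)p \;\Longrightarrow\; \hat p = \frac{k}{n}. $$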

Letting $p = \exp(-5/\beta) = k/n$ gives $\beta = -5/\log(k/n)$.
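A minimal numeric illustration of the closed form, using hypothetical counts (any $0 < k < n$ works):

```python
import math

# Hypothetical data: k of n indicators equal 1, i.e. k observations exceed 5.
n, k = 100, 40
beta_hat = -5.0 / math.log(k / n)   # equivalently 5 / log(n/k)
print(round(beta_hat, 3))           # → 5.457
```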

If all observations are $\gt 5$ (i.e. $k=n$), this has no maximum over the real numbers; in a loose sense, $\beta=\infty$ would maximize the likelihood (and symmetrically, $k=0$ pushes $\beta$ toward $0$). A more sensible answer would require going beyond the MLE framework with regularization or a Bayesian approach.