MLE Estimator for parameter of piecewise uniform distribution


My question relates to this thread: How to find MLE of this piecewise pdf? In the answers submitted by @StubbornAtom and @Riccardo, doesn't the expression used require the pdf to be continuous? Is it a valid expression in this case?

Also, how can we find the variance of such an estimator? It seems it cannot be calculated via the Fisher information or the delta method, since the observations $x_i$ do not appear explicitly in the likelihood function.


You need to take a step back and think about sufficiency, before you can consider estimation.

Recall that, loosely speaking, a statistic is sufficient for a parameter $\theta$ if it does not discard any information about the parameter that is present in the sample. The joint density is $$f(\boldsymbol x) = \theta^{n_1} (1-\theta)^{n_2},$$ where $$n_1 = \sum_{i=1}^n \mathbb 1 (0 \le x_i \le 1), \\ n_2 = n - n_1 = \sum_{i=1}^n \mathbb 1 (1 < x_i \le 2).$$ So we note that the likelihood $\mathcal L(\theta \mid \boldsymbol x)$ is a function of the sample $\boldsymbol x$, as it must be. Just because we wrote it in terms of functions $n_1, n_2$ of $\boldsymbol x$, does not mean it no longer depends on the sample. This is your first misunderstanding.
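To see concretely that the likelihood depends on the sample only through the counts $n_1, n_2$, here is a small sketch (the sample values are hypothetical, chosen for illustration): two different samples with the same counts produce identical likelihood values.

```python
import numpy as np

def likelihood(theta, x):
    """L(theta | x) = theta^n1 * (1 - theta)^n2 for the piecewise-uniform density."""
    n1 = np.sum((x >= 0) & (x <= 1))  # observations in [0, 1]
    n2 = len(x) - n1                  # observations in (1, 2]
    return theta**n1 * (1 - theta)**n2

# Two different hypothetical samples, both with n1 = 2, n2 = 1:
xa = np.array([0.1, 0.9, 1.5])
xb = np.array([0.4, 0.6, 1.2])
print(likelihood(0.3, xa), likelihood(0.3, xb))  # identical values
```

The likelihood is still a function of the data; it simply depends on the data only through $(n_1, n_2)$, which is exactly what sufficiency formalizes.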

Then, by the factorization theorem, the joint density may be written as $$f(\boldsymbol x) = h(\boldsymbol x) g(T(\boldsymbol x) \mid \theta)$$ where $$T(\boldsymbol x) = n_1, \\ g(T \mid \theta) = \theta^T (1-\theta)^{n - T}, \\ h(\boldsymbol x) = 1.$$ Thus $n_1$ is a sufficient statistic for $\theta$. This means that any estimator for $\theta$ that does not discard information about $\theta$ can be written as a function of the sufficient statistic $n_1$, and this property is satisfied by the MLE $\hat \theta = n_1/n$.
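A quick sketch of the MLE computed from the sufficient statistic, cross-checked against a numerical maximization of the log-likelihood (the sample here is again hypothetical):

```python
import numpy as np

# Hypothetical sample from the piecewise-uniform density on [0, 2].
x = np.array([0.2, 0.7, 1.4, 0.9, 1.8, 0.3])
n = len(x)
n1 = int(np.sum((x >= 0) & (x <= 1)))  # sufficient statistic
theta_hat = n1 / n                      # closed-form MLE

# Cross-check: maximize l(theta) = n1*log(theta) + (n - n1)*log(1 - theta) on a grid.
grid = np.linspace(0.01, 0.99, 99)
loglik = n1 * np.log(grid) + (n - n1) * np.log(1 - grid)
theta_grid = grid[np.argmax(loglik)]   # should land near theta_hat
```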

Second, MLEs can be derived for discrete distributions, as well as families of distributions for which the parameter space is itself discrete. If you are not aware of this, you need to review statistical estimation, as this is a basic concept.

The variance of $\hat \theta$ is straightforward, as $n_1$ is binomially distributed with parameters $n$ and $\theta$. This follows from the fact that $\Pr[0 \le X_i \le 1] = \theta$ for each $i = 1, 2, \ldots, n$. So $$\operatorname{Var}[\hat \theta] = \frac{\operatorname{Var}[n_1]}{n^2} = \frac{\theta(1-\theta)}{n}.$$
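The binomial variance formula can be verified by Monte Carlo simulation. The sketch below (with an assumed $\theta = 0.3$, $n = 50$) draws each $X_i$ from $U[0,1]$ with probability $\theta$ and from $(1, 2]$ otherwise, then compares the empirical variance of $\hat\theta$ across replications to $\theta(1-\theta)/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.3, 50, 20000  # assumed parameter values for the check

# With probability theta, draw from U[0, 1]; otherwise from the shifted uniform on (1, 2].
in_first = rng.random((reps, n)) < theta
x = np.where(in_first, rng.random((reps, n)), 1 + rng.random((reps, n)))

# Apply the MLE to each replicated sample of size n.
theta_hat = ((x >= 0) & (x <= 1)).mean(axis=1)

print(theta_hat.var(), theta * (1 - theta) / n)  # empirical vs. theoretical variance
```

With 20,000 replications the two values agree closely, as expected from $n_1 \sim \mathrm{Binomial}(n, \theta)$.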