Maximize $\sum_{i=1}^n\ln\left(\frac{2x_i}{\theta}\mathbf{1}_{[0,\theta)}(x_i)+\frac{2(1-x_i)}{1-\theta}\mathbf{1}_{[\theta,1]}(x_i)\right)$


I'm trying to solve the following problem:

Consider a sample of $n$ i.i.d. observations drawn from a distribution characterized by the density function $$f_{\theta}(x)= \begin{cases}{\frac{2 x}{\theta}} & {\text { if } x \in [0, \theta)} \\ {\frac{2(1-x)}{1-\theta}} & {\text { if } x \in[\theta, 1]} \\ {0} & {\text { otherwise }}\end{cases}$$ where the parameter $\theta \in (0,1)$. Find the maximum likelihood estimator of $\theta$.


My attempt:

Let $X=(X_1, \ldots, X_n)$. Then the log-likelihood function is $$l(\theta;x) = \sum_{i=1}^n \ln \left ( \frac{2 x_i}{\theta} \mathbf{1}_{[0, \theta)} (x_i) + \frac{2(1- x_i)}{1-\theta} \mathbf{1}_{[\theta,1]} (x_i) \right)$$

Let $(X_{(1)}, \ldots, X_{(n)})$ be the order statistics. Then $$l(\theta;x) = \sum_{i=1}^n \ln \left ( \frac{2 x_{(i)}}{\theta} \mathbf{1}_{[0, \theta)} (x_{(i)}) + \frac{2(1- x_{(i)})}{1-\theta} \mathbf{1}_{[\theta,1]} (x_{(i)}) \right)$$

$\textbf{Case 1:}$ $\theta \in [0, x_{(1)}]$

$$l(\theta;x) = \sum_{i=1}^n \ln \left ( \frac{2(1- x_{(i)})}{1-\theta} \right) = n \ln 2 + \sum_{i=1}^n \ln (1- x_{(i)}) - n \ln (1-\theta)$$

It follows that $$\underset{\theta \in [0, x_{(1)}]}{\text{arg max}} \,\, l(\theta;x) = x_{(1)} \quad \text{and} \quad \max_{\theta \in [0, x_{(1)}]} l(\theta;x) = n \ln 2 + \sum_{i=1}^n \ln (1- x_{(i)}) - n \ln (1-x_{(1)})$$

$\textbf{Case 2:}$ $\theta \in [x_{(k)}, x_{(k+1)}]$ for some $k \in \{1, \ldots, n-1\}$

$$\begin{aligned} l(\theta;x) &= \sum_{i=1}^k \ln \left ( \frac{2 x_{(i)}}{\theta} \right) + \sum_{i=k+1}^n \ln \left (\frac{2(1- x_{(i)})}{1-\theta} \right) \\ &= n \ln2+ \sum_{i=1}^k \ln(x_{(i)} ) + \sum_{i=k+1}^n \ln( 1-x_{(i)} ) -k \ln \theta - (n-k) \ln (1 - \theta)\end{aligned}$$
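As a quick numerical sanity check of this closed form (a sketch only; the functions `loglik`, `loglik_case2` and the sample values are illustrative, not from the original problem), one can compare it against directly summing $\ln f_\theta(x_i)$:

```python
import math

def log_density(x, theta):
    """ln f_theta(x) for the piecewise density in the problem."""
    if 0 <= x < theta:
        return math.log(2 * x / theta)
    elif theta <= x <= 1:
        return math.log(2 * (1 - x) / (1 - theta))
    return float("-inf")

def loglik(xs, theta):
    """Direct log-likelihood: sum of ln f_theta(x_i)."""
    return sum(log_density(x, theta) for x in xs)

def loglik_case2(xs, theta):
    """Case 2 closed form, with k = number of sample points below theta."""
    xs = sorted(xs)
    n = len(xs)
    k = sum(1 for x in xs if x < theta)
    return (n * math.log(2)
            + sum(math.log(x) for x in xs[:k])
            + sum(math.log(1 - x) for x in xs[k:])
            - k * math.log(theta)
            - (n - k) * math.log(1 - theta))

xs = [0.12, 0.35, 0.58, 0.71, 0.93]   # illustrative sample
theta = 0.5                            # lies strictly between x_(2) and x_(3)
assert abs(loglik(xs, theta) - loglik_case2(xs, theta)) < 1e-9
```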

$\textbf{Case 3:}$ $\theta \in [x_{(n)}, 1]$

$$l(\theta;x) = \sum_{i=1}^n \ln \left ( \frac{2x_{(i)}}{\theta} \right) = n \ln 2 + \sum_{i=1}^n \ln (x_{(i)}) - n \ln (\theta)$$

It follows that $$\underset{\theta \in [x_{(n)},1]}{\text{arg max}} \,\, l(\theta;x) = x_{(n)} \quad \text{and} \quad \max_{\theta \in [x_{(n)}, 1]} l(\theta;x) = n \ln 2 + \sum_{i=1}^n \ln (x_{(i)}) - n \ln (x_{(n)})$$


My question:

In Case 2, the critical point is $\theta' = k/n$, but I don't know whether $k/n \in [x_{(k)}, x_{(k+1)}]$, so I'm unable to find the maximizer in this case. Even if I could, I'm still unable to compare the values of $l(\theta';x)$ across the three cases. How can I proceed to find $$\underset{\theta \in [0, 1]}{\operatorname{arg max}} l(\theta;x) \text{ ?}$$

Thank you so much!

Accepted answer:

$$ \ell(\theta) = \sum_{i\,=\,1}^{k(\theta)} \ln \left ( \frac{2 x_{(i)}}{\theta} \right) + \sum_{i\,=\,k(\theta)+1}^n \ln \left (\frac{2(1- x_{(i)})}{1-\theta} \right) $$ where $k(\theta) = {}$the number of the $x_i$ that are $<\theta.$

\begin{align} \ell(\theta) = {} & \sum_{i\,=\,1}^{k(\theta)} \ln \left ( \frac{2 x_{(i)}} \theta \right) + \sum_{i\,=\,k(\theta)+1}^n \ln \left (\frac{2(1- x_{(i)})}{1-\theta} \right) \\[10pt] = {} & n \ln2+ \sum_{i=1}^{k(\theta)} \ln(x_{(i)} ) + \sum_{i=k(\theta)+1}^n \ln( 1-x_{(i)} ) \\ & {} -k(\theta) \ln \theta - (n-k(\theta)) \ln (1 - \theta) \\[10pt] = {} & \text{constant} - k(\theta)\ln\theta - (n-k(\theta))\ln(1-\theta) \end{align} where one must remember that in this context "constant" means not depending on $\theta$ within each interval between consecutive order statistics (the constant changes as $\theta$ crosses a sample point).

Next we have $$ \ell\,'(\theta) = -\frac{k(\theta)}\theta + \frac{n-k(\theta)}{1-\theta}, $$ which holds on the interior of each interval between consecutive order statistics: there $k(\theta)$ is piecewise constant, so it is treated as a constant when differentiating. The formula does not hold at the sample points themselves, where $\ell$ is not differentiable.
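This derivative is easy to verify numerically with a central difference (a sketch; the sample values and the interval chosen are illustrative). Differentiating the constant-minus-logs expression gives $-k/\theta + (n-k)/(1-\theta)$ on the interior of an interval, and the finite-difference quotient of the full log-likelihood agrees:

```python
import math

def loglik(xs, theta):
    """Full log-likelihood, summing ln f_theta(x_i) directly."""
    total = 0.0
    for x in xs:
        if x < theta:
            total += math.log(2 * x / theta)
        else:
            total += math.log(2 * (1 - x) / (1 - theta))
    return total

def dloglik(xs, theta):
    """Analytic derivative on the interior of an interval between
    order statistics: -k/theta + (n - k)/(1 - theta)."""
    n = len(xs)
    k = sum(1 for x in xs if x < theta)
    return -k / theta + (n - k) / (1 - theta)

xs = [0.12, 0.35, 0.58, 0.71, 0.93]   # illustrative sample
theta, h = 0.45, 1e-6                  # 0.45 is interior to (x_(2), x_(3))
numeric = (loglik(xs, theta + h) - loglik(xs, theta - h)) / (2 * h)
assert abs(numeric - dloglik(xs, theta)) < 1e-4
```

Note that with this sample $k/n = 2/5$ on the interval $(0.35, 0.58)$, so the derivative is negative just below $0.4$ and positive just above it.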

Now something that may look odd at first: as a function of $\theta,$ assuming both $k$ and $n-k$ are positive, this derivative is negative if $\theta<k/n$ and positive if $\theta>k/n,$ rather than the other way around. Thus $\ell(\theta)$ gets bigger as you move away from $k/n$ in either direction: on each interval $\ell$ is convex, so the interior critical point $k/n$ (when it lies in the interval at all) is a local minimum, not a maximum. Since $\ell$ is continuous on $(0,1)$ and is maximized at an endpoint of every interval between consecutive order statistics (and is increasing on $(0, x_{(1)}]$ and decreasing on $[x_{(n)}, 1)$), its global maximum is attained at one of the sample points. So the MLE is found by evaluating $\ell$ at $x_{(1)}, \ldots, x_{(n)}$ and taking the largest value. This is one of those examples showing that the set-the-derivative-to-zero method doesn't always yield good results; Lucien Le Cam wrote an article titled "Maximum Likelihood: An Introduction" in which he exhibited a variety of such examples.

At any rate, this is not the sort of problem where you just unthinkingly seek a critical point.
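The resulting recipe can be sketched in a few lines (illustrative only; `mle` and the sample values are made up for demonstration, and the grid search is just a cross-check):

```python
import math

def loglik(xs, theta):
    """Log-likelihood for theta in (0, 1)."""
    total = 0.0
    for x in xs:
        if x < theta:
            total += math.log(2 * x / theta)
        else:
            total += math.log(2 * (1 - x) / (1 - theta))
    return total

def mle(xs):
    """Since the log-likelihood is convex between consecutive order
    statistics, its maximum over (0, 1) is attained at a sample point:
    evaluate at each x_i and take the best."""
    return max(xs, key=lambda t: loglik(xs, t))

xs = [0.12, 0.35, 0.58, 0.71, 0.93]   # illustrative sample
theta_hat = mle(xs)

# Cross-check against a dense grid search over (0, 1).
grid = [i / 10000 for i in range(1, 10000)]
theta_grid = max(grid, key=lambda t: loglik(xs, t))
assert abs(theta_hat - theta_grid) < 1e-3
```

The grid maximizer landing (up to grid resolution) on one of the sample points is exactly the behavior derived above: no interior critical point is ever the maximum.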