Constrained maximum likelihood for binomial parameters


Suppose I have independent, binomial random variables, $X\sim Bin(n,\theta_X)$ and $Y\sim Bin(n,\theta_Y)$, where $n$ is known and $0<\theta_X,\theta_Y<1$ are unknown. I would like to find the maximum likelihood estimates for $\theta_X$ and $\theta_Y$ based on the joint likelihood of $X$ and $Y$, $\mathcal{L}(\theta_X,\theta_Y|X,Y)$, subject to the constraint that $\theta_X + \theta_Y=c$, $c\in(0,2)$ is known.

Now the standard approach is via a Lagrange multiplier, $\lambda$. The objective function becomes: $$ \Lambda = \ln \mathcal{L}(\theta_X,\theta_Y|X,Y) + \lambda(\theta_X+\theta_Y-c) $$

Differentiating $\Lambda$ with respect to $\theta_X$, $\theta_Y$, and $\lambda$ and setting each result to zero yields three equations in three unknowns. However, even in this relatively simple case, the analytical solutions for $\theta_X$ and $\theta_Y$ (obtained using Mathematica), while they do exist, are quite unwieldy. I would eventually like to extend this to an arbitrary number of $\theta$'s subject to $\sum \theta_i = c$.

Is there a simpler approach? Am I resigned to using numerical methods to determine the constrained MLEs (assuming they exist)?
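For concreteness, here is the numerical fallback I have in mind (a minimal sketch using `scipy`; the function name and tolerances are my own choices). It sidesteps the Lagrange multiplier entirely by eliminating one parameter, $\theta_k = c - \sum_{i<k}\theta_i$, and maximizing the reduced log-likelihood directly, which also handles the general $\sum \theta_i = c$ case:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_binom_mle(xs, n, c):
    """Constrained MLE for k binomial proportions with sum(theta) = c.

    Eliminates the last parameter via theta_k = c - sum(theta_1..theta_{k-1})
    and maximizes the reduced joint log-likelihood numerically.
    Illustrative sketch only (constant terms of the likelihood are dropped).
    """
    xs = np.asarray(xs, dtype=float)
    k = len(xs)

    def neg_loglik(free):
        # Reassemble the full parameter vector from the k-1 free parameters.
        theta = np.append(free, c - free.sum())
        if np.any(theta <= 0) or np.any(theta >= 1):
            return np.inf  # outside the feasible region
        return -np.sum(xs * np.log(theta) + (n - xs) * np.log(1 - theta))

    x0 = np.full(k - 1, c / k)  # start at the symmetric point
    res = minimize(neg_loglik, x0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-12})
    return np.append(res.x, c - res.x.sum())
```

As a sanity check, when the unconstrained MLEs $x_i/n$ already sum to $c$, the constraint is inactive and this should recover them, e.g. $x=(2,4)$, $n=10$, $c=0.6$ should give $\hat\theta \approx (0.2, 0.4)$.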

EDIT

Here is the system of equations I used:

$$\begin{aligned} \frac{\partial\Lambda}{\partial\theta_X}&=\frac{x}{\theta_X}-\frac{n-x}{1-\theta_X}+\lambda =0\\ \frac{\partial\Lambda}{\partial\theta_Y}&=\frac{y}{\theta_Y}-\frac{n-y}{1-\theta_Y}+\lambda =0\\ \frac{\partial\Lambda}{\partial\lambda} &= \theta_X + \theta_Y - c =0 \end{aligned}$$
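In lieu of the unwieldy closed form, this system can be solved numerically; a sketch using `scipy.optimize.fsolve` (the function name and the starting point $(c/2, c/2, 0)$ are ad-hoc choices of mine):

```python
from scipy.optimize import fsolve

def solve_stationarity(x, y, n, c):
    """Numerically solve the three first-order conditions above
    for (theta_x, theta_y, lambda). Illustrative sketch only."""
    def eqs(v):
        tx, ty, lam = v
        return [x / tx - (n - x) / (1 - tx) + lam,   # d(Lambda)/d(theta_x) = 0
                y / ty - (n - y) / (1 - ty) + lam,   # d(Lambda)/d(theta_y) = 0
                tx + ty - c]                         # constraint
    return fsolve(eqs, [c / 2, c / 2, 0.0])
```

When the unconstrained MLEs already satisfy the constraint (e.g. $x=2$, $y=4$, $n=10$, $c=0.6$), the solution should be $(0.2, 0.4)$ with $\lambda = 0$, since the constraint is inactive there.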