Consider the following data from a sample of a binomial distribution $X$ with $n=2$ and unknown parameter $p$: $$P(X=0)=.175,\ \ P(X=1)=.45,\ \ \text{and} \ \ P(X=2)=.375.$$ My goal is to find the maximum likelihood estimate of $p.$
First, I found the likelihood function by multiplying the pmf values at each outcome, that is, $$L(p)=\binom{2}{0}p^0(1-p)^2 \cdot \binom{2}{1}p(1-p) \cdot \binom{2}{2}p^2(1-p)^0,$$ which simplifies to $$L(p)=2p^3(1-p)^3.$$ Taking the logarithm gives $$l(p)=\log L(p) = \log 2 + 3\log p + 3\log (1-p).$$ Then, differentiating with respect to $p$ and setting the result to $0$, we have $$\frac{\partial }{\partial p}l(p)= \frac{3}{p}-\frac{3}{1-p}=0,$$ which implies that the estimate of $p$ would be $\hat{p}=\frac{1}{2}.$ However, this is clearly not correct, since I did not even use the data provided in the calculation. Can anyone provide any help?
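(For reference, here is a quick numerical sketch, not part of the original derivation, that confirms the likelihood above really does peak at $p=\tfrac12$ — the algebra is fine; the issue is that this likelihood ignores the given probabilities. A simple grid search over $(0,1)$ is used for illustration.)

```python
# Sanity check: maximize L(p) = 2 p^3 (1-p)^3 over a fine grid.
def L(p):
    return 2 * p**3 * (1 - p)**3

grid = [i / 10000 for i in range(1, 10000)]  # avoid the endpoints 0 and 1
p_hat = max(grid, key=L)
print(p_hat)  # 0.5
```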
Thanks in advance!
Let me start by stating that the question is somewhat unusual, because maximum likelihood estimates are usually computed from data, which in this case would be counts for the three outcomes of the experiment.
In the usual case, with count data, your likelihood would be:
$$L(p\mid \text{data}) = \left( (1-p)^2 \right)^{n_0} \left( 2p(1-p) \right)^{n_1} \left( p^2 \right)^{n_2}$$
with $n_0,n_1,n_2$ being the counts for $X=0,X=1,X=2$.
Then the log-likelihood will be:
$$l(p\mid \text{data}) = 2 n_0 \log(1-p) + n_1\left(\log(2)+ \log(p) + \log(1-p) \right) + 2n_2 \log (p)$$
This is a sum of $\log(p)$ and $\log(1-p)$ terms, which yields a relatively simple closed form for $p$ when you set the derivative to $0$ and solve.
It should be: $$ \hat{p} = \frac{2n_2+n_1}{2n_0+2n_1+2n_2}$$
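As a sketch (with hypothetical counts $n_0=7$, $n_1=18$, $n_2=15$ chosen purely for illustration), one can check the closed form against a direct numerical maximization of the log-likelihood:

```python
import math

def p_hat_closed_form(n0, n1, n2):
    # MLE from the closed-form expression: (2 n2 + n1) / (2 (n0 + n1 + n2))
    return (2 * n2 + n1) / (2 * (n0 + n1 + n2))

def log_lik(p, n0, n1, n2):
    # Log-likelihood of the count data under Binomial(2, p)
    return (2 * n0 * math.log(1 - p)
            + n1 * (math.log(2) + math.log(p) + math.log(1 - p))
            + 2 * n2 * math.log(p))

# Hypothetical counts, for illustration only
n0, n1, n2 = 7, 18, 15
grid = [i / 10000 for i in range(1, 10000)]
p_numeric = max(grid, key=lambda p: log_lik(p, n0, n1, n2))
print(p_hat_closed_form(n0, n1, n2), p_numeric)
```

Both approaches agree, which supports the formula above.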
In your case, since you are given probabilities rather than counts, one can search for the maximum of the expected log-likelihood by replacing the counts with their probabilities and solving in the same way:
$$ \hat{p} = \frac{2P(X=2)+P(X=1)}{2P(X=0)+2P(X=1)+2P(X=2)} = 0.6,$$ which is the location of the maximum of $E(\log L)$.
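The same calculation in code (a small sketch using the probabilities from the question; the grid search over $E(\log L)$ is just a numerical cross-check):

```python
import math

probs = {0: 0.175, 1: 0.45, 2: 0.375}

def expected_loglik(p):
    # E[log L]: the count log-likelihood with counts replaced by probabilities
    return (2 * probs[0] * math.log(1 - p)
            + probs[1] * (math.log(2) + math.log(p) + math.log(1 - p))
            + 2 * probs[2] * math.log(p))

# Closed-form estimate (denominator is 2 * sum of probabilities = 2)
p_hat = (2 * probs[2] + probs[1]) / (2 * sum(probs.values()))

grid = [i / 10000 for i in range(1, 10000)]
p_numeric = max(grid, key=expected_loglik)
print(p_hat, p_numeric)  # both approximately 0.6
```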
BTW: Your starting equation is the likelihood for observing each outcome exactly once.