I am trying to derive the maximum likelihood estimator of the mixing parameters for a mixture of $m$ univariate Gaussian distributions, fit to $n$ observations.
The complete model density is:
$$ f(x) = \sum_{k=1}^m \lambda_k f_k(x) $$
where $f_k$ is the normal distribution with mean $\mu_k$ and variance $\sigma_k^2$.
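(Not essential to the question, but here is how I picture the model numerically; the parameter values are made up for illustration.)

```python
import math

def normal_pdf(x, mu, sigma2):
    """Density f_k(x) of N(mu, sigma2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def mixture_pdf(x, lams, mus, sigma2s):
    """Mixture density f(x) = sum_k lam_k * f_k(x)."""
    return sum(l * normal_pdf(x, m, s2) for l, m, s2 in zip(lams, mus, sigma2s))

# Two-component example with illustrative parameters
print(mixture_pdf(0.0, [0.3, 0.7], [0.0, 2.0], [1.0, 1.0]))
```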
The data likelihood is:
$$ L(\mu_1,\dots,\mu_m,\sigma_1^2,\dots,\sigma_m^2,\lambda_1,\dots,\lambda_m) = \prod_{i=1}^n\prod_{k=1}^m f_k(x_i)^{u_{ik}}\lambda_k^{u_{ik}} $$
where $u_{ik}$ is an indicator variable equal to $1$ if the $i$-th data point came from the $k$-th component and $0$ otherwise.
So far, I have derived the MLEs of $\mu_k$ and $\sigma_k^2$, but when I try to calculate the MLE of $\lambda_k$, I get an incorrect result.
First, I calculate the log-likelihood:
$$ \log L = \sum_{i=1}^n\sum_{k=1}^m \left[ u_{ik}\log f_k(x_i) + u_{ik}\log \lambda_k \right] $$
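As a sanity check on the log-likelihood, this is a direct numerical translation of it (the data, assignments, and parameters below are made up for illustration):

```python
import math

def normal_logpdf(x, mu, sigma2):
    """log f_k(x) for a N(mu, sigma2) component."""
    return -0.5 * math.log(2 * math.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2)

def complete_log_lik(xs, u, lams, mus, sigma2s):
    """log L = sum_i sum_k u_ik * [log f_k(x_i) + log lam_k]."""
    return sum(
        u[i][k] * (normal_logpdf(xs[i], mus[k], sigma2s[k]) + math.log(lams[k]))
        for i in range(len(xs))
        for k in range(len(lams))
    )

# Illustrative data with hard assignments: point 0 from component 1, points 1-2 from component 2
xs = [0.1, 1.9, 2.2]
u = [[1, 0], [0, 1], [0, 1]]
print(complete_log_lik(xs, u, [0.4, 0.6], [0.0, 2.0], [1.0, 1.0]))
```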
Then, I fix a particular "k" and calculate the derivative:
$$ \begin{aligned} \frac{d}{d\lambda_k} \log L &= \frac{d}{d\lambda_k} \sum_{i=1}^n \left[ u_{ik}\log f_k(x_i) + u_{ik}\log \lambda_k \right] \\ &= \frac{d}{d\lambda_k}\sum_{i=1}^n u_{ik}\log \lambda_k \\ &= \frac{1}{\lambda_k}\sum_{i=1}^n u_{ik} \end{aligned} $$
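I checked this algebra numerically with a central finite difference on the only term of $\log L$ that involves $\lambda_k$, namely $\left(\sum_i u_{ik}\right)\log\lambda_k$ (the values of $\lambda_k$ and $\sum_i u_{ik}$ below are arbitrary):

```python
import math

def lambda_term(lam_k, n_k):
    """The only part of log L involving lam_k: (sum_i u_ik) * log(lam_k)."""
    return n_k * math.log(lam_k)

# Central finite difference vs. the analytic derivative (1/lam_k) * sum_i u_ik
lam_k, n_k, h = 0.4, 7, 1e-6
numeric = (lambda_term(lam_k + h, n_k) - lambda_term(lam_k - h, n_k)) / (2 * h)
analytic = n_k / lam_k
print(numeric, analytic)  # the two values agree closely
```

So the derivative itself seems right; the problem must be in how I use it.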
Clearly, setting this derivative equal to zero gives no viable MLE: the derivative is strictly positive for every $\lambda_k > 0$ (unless $\sum_i u_{ik} = 0$), so it never vanishes, which I know is wrong. What is the correct way to derive the MLE of $\lambda_k$?