Maximum likelihood estimation for variance from the data


For the maximum likelihood, I have set $l(\theta) = \sum_{k=1}^{n} \ln p(x_{k}|\theta)$ and $\nabla_{\theta}l(\theta) = \sum_{k=1}^{n} \nabla_{\theta}\ln p(x_{k}|\theta)$

Suppose I have a log probability $\ln p(x_{k}|\theta) = -\frac{1}{2}\ln (2\pi\theta_{2}) - \frac{1}{2\theta_{2}}(x_{k}-\theta_{1})^2$

The partial derivative of the log probability w.r.t. $\theta_{2}$ is $-\frac{1}{2\theta_{2}} + \frac{(x_{k}-\theta_{1})^2}{2\theta_{2}^{2}}$

Inserting this partial derivative into the gradient of the log-likelihood defined above,

I come up with $-\sum_{k=1}^{n}\frac{1}{2\theta_{2}}+\sum_{k=1}^{n}\frac{(x_{k}-\theta_{1})^{2}}{2\theta_{2}^{2}} = 0$ for the variance, assuming all variables are scalar.

Can anyone please help me simplify the equation above into $\sigma^{2} = \frac{1}{n} \sum_{k=1}^{n}(x_{k}-\mu)^2$?

The mean is $\mu = \frac{1}{n} \sum_{k=1}^{n}x_{k}$
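The two formulas in question can be checked numerically. The following is a minimal sketch using NumPy on a hypothetical synthetic sample (the data and parameters here are illustrative, not from the original post); it verifies that the closed-form MLE of the variance coincides with the biased sample variance:

```python
import numpy as np

# Hypothetical synthetic sample; the identity holds for any data.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1000)

n = len(x)
mu_hat = x.sum() / n                      # MLE of the mean, theta_1
var_hat = ((x - mu_hat) ** 2).sum() / n   # MLE of the variance, theta_2

# The MLE of the variance is the *biased* sample variance (ddof=0).
assert np.isclose(var_hat, np.var(x, ddof=0))
```

Note the divisor is $n$, not $n-1$: the MLE is the biased estimator, unlike the usual unbiased sample variance.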

Considering your log probability, it is clear that your population is Gaussian,

$$X\sim N(\theta_1;\theta_2)$$

thus your likelihood is

$$L(\underline{\theta})\propto \theta_2^{-n/2}\text{exp}\left\{ -\frac{1}{2\theta_2}\sum_k(X_k-\theta_1)^2 \right\}$$

taking its log and differentiating w.r.t. $\theta_2$, you get the likelihood score:

$$-\frac{n}{2\theta_2}+\frac{\sum_k(X_k-\theta_1)^2}{2\theta_2^2}$$

that is, setting it $=0$ and solving for $\theta_2$,

$$\hat{\theta}_2=\frac{1}{n}\sum_k(X_k-\theta_1)^2$$
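Spelling out the intermediate algebra (multiply the score by $2\theta_2^2$, which is positive, so the root is unchanged):

$$-\frac{n}{2\theta_2}+\frac{\sum_k(X_k-\theta_1)^2}{2\theta_2^2}=0
\;\Longrightarrow\;
n\theta_2=\sum_k(X_k-\theta_1)^2
\;\Longrightarrow\;
\theta_2=\frac{1}{n}\sum_k(X_k-\theta_1)^2.$$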

If $\theta_1$ is not known, first apply the same reasoning to $\theta_1$ to find its estimate, $\hat{\theta}_1=\overline{X}_n$; substituting it into $\hat{\theta}_2$ gives exactly $\sigma^{2}=\frac{1}{n}\sum_k(X_k-\overline{X}_n)^2$.
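As a sanity check, the closed-form $\hat{\theta}_2$ can be compared against a direct numerical maximization of the log-likelihood. This is a minimal sketch with NumPy on hypothetical synthetic data (a grid search stands in for a proper optimizer, purely for illustration):

```python
import numpy as np

# Hypothetical synthetic sample for illustration.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=500)
n = len(x)

theta1_hat = x.mean()                         # sample mean, X-bar_n
theta2_hat = ((x - theta1_hat) ** 2).mean()   # closed-form MLE of the variance

def log_lik(theta2):
    # Gaussian log-likelihood with theta_1 fixed at its MLE.
    return (-0.5 * n * np.log(2 * np.pi * theta2)
            - ((x - theta1_hat) ** 2).sum() / (2 * theta2))

# The log-likelihood peaks at the closed-form estimate.
grid = np.linspace(0.5 * theta2_hat, 1.5 * theta2_hat, 1001)
assert np.isclose(grid[np.argmax(log_lik(grid))], theta2_hat, rtol=1e-3)
```

The grid is centered on $\hat{\theta}_2$, so the argmax landing there confirms that the score's root is indeed a maximum of the likelihood.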