How to find the maximum likelihood estimator when you have two random variables with two variances but one mean?


Can someone explain this answer to me? I would understand how to find the maximum likelihood estimate if there were only one random variable and one variance, but what happens here? I understand I have to set the derivative equal to zero to find the peak, but what's up with the min and max? Why take the arg of it? How did they get to equation (20), and why?

The assignment with the answer at the bottom

Best answer:

Because there are two independent random variables, their joint PDF is the product of the individual PDFs, which is where (19) comes from. Since the means are equal, each individual PDF in the joint has the same $\mu$.

Since the sample consists of only one point from the random vector, i.e. one sample from each of the two independent random variables, the likelihood function is the same as the joint PDF.
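The assignment itself isn't reproduced here, but a setup matching this description would be two independent Gaussians $X_1 \sim \mathcal{N}(\mu, \sigma_1^2)$ and $X_2 \sim \mathcal{N}(\mu, \sigma_2^2)$ (symbols assumed for illustration, not taken from the assignment). With one observation $x_1, x_2$ of each, the likelihood equals the joint PDF:

```latex
L(\mu)
= f(x_1, x_2;\, \mu)
= \frac{1}{\sqrt{2\pi\sigma_1^2}}
  \exp\!\left(-\frac{(x_1-\mu)^2}{2\sigma_1^2}\right)
  \cdot
  \frac{1}{\sqrt{2\pi\sigma_2^2}}
  \exp\!\left(-\frac{(x_2-\mu)^2}{2\sigma_2^2}\right)
```

Note that $L$ is viewed as a function of the parameter $\mu$, with $x_1, x_2$ held fixed at their observed values.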

Taking the arg max of the likelihood function is the definition of a maximum likelihood estimate. You are finding which value of the parameter would maximize the likelihood.

For equation (20), they are finding which value of $\mu$ maximizes the likelihood function. They ignore the constants multiplying the exponentials, since they don't have $\mu$ in them (which is why I called them constants, we are maximizing over $\mu$).

In their next step, they do two things:

- Because the exponential function is monotonically increasing, you can maximize/minimize it by maximizing/minimizing its argument, which is why they drop the exponential function.
- Maximizing a quantity is the same as minimizing the negative of that quantity, so they take the negative and switch from arg max to arg min.

That's how they get (20).
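Under the same assumed Gaussian setup sketched above (not the assignment's exact notation), the chain of steps just described would look like:

```latex
\hat{\mu}
= \arg\max_{\mu}\, L(\mu)
= \arg\max_{\mu}\, \exp\!\left(-\frac{(x_1-\mu)^2}{2\sigma_1^2}
                               -\frac{(x_2-\mu)^2}{2\sigma_2^2}\right)
= \arg\min_{\mu}\, \left(\frac{(x_1-\mu)^2}{2\sigma_1^2}
                        +\frac{(x_2-\mu)^2}{2\sigma_2^2}\right)
```

The first equality drops the constants in front of the exponentials, the second uses monotonicity of $\exp$, and the third flips the sign and swaps arg max for arg min.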

From that point, as you have said, they find the min by setting the derivative equal to zero.
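Setting the derivative of that quadratic to zero gives a closed form: the inverse-variance-weighted average of the two observations. A minimal numerical sketch, using made-up sample values and variances (none of these numbers come from the assignment), checks the closed form against a brute-force grid search:

```python
import numpy as np

# Hypothetical observations and variances (illustrative only).
x1, x2 = 2.0, 5.0
s1sq, s2sq = 1.0, 4.0

# Closed-form minimizer of (x1-mu)^2/(2*s1sq) + (x2-mu)^2/(2*s2sq),
# obtained by setting the derivative in mu to zero:
# (mu - x1)/s1sq + (mu - x2)/s2sq = 0
# => mu = (x1/s1sq + x2/s2sq) / (1/s1sq + 1/s2sq)
mu_hat = (x1 / s1sq + x2 / s2sq) / (1 / s1sq + 1 / s2sq)

# Numerical check: evaluate the objective on a fine grid and confirm
# the grid minimizer agrees with the closed form.
mus = np.linspace(0.0, 7.0, 70001)
obj = (x1 - mus) ** 2 / (2 * s1sq) + (x2 - mus) ** 2 / (2 * s2sq)
mu_grid = mus[np.argmin(obj)]

print(mu_hat)                         # 2.6
print(abs(mu_grid - mu_hat) < 1e-3)   # True
```

The weighting makes intuitive sense: the observation with the smaller variance is more reliable, so it pulls the estimate harder.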