Why exponentiation when summing log likelihoods?


I have a problem with a derivation I found.

Imagine I have a probability density function for a true set of $n$ point correspondences $\mathbf{D}$ between two 2D coordinate systems and their associated geometric relation $\mathbf{M}$, in the Gaussian form

$\text{Pr}(\mathbf{D}|\mathbf{M}) = \prod_{i=1}^{n} \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{n} \exp\!\left(-\frac{e_i}{2\sigma^2}\right)$,

where $e$ denotes a residual that can be computed.
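To illustrate numerically what puzzles me, here is a minimal Python sketch; the residuals $e_i$ and $\sigma$ are made-up values, not from the derivation:

```python
import math

# Made-up residuals e_i and noise level sigma, purely for illustration.
e = [0.4, 1.1, 0.7]
sigma = 0.5
n = len(e)
norm = 1.0 / math.sqrt(2.0 * math.pi * sigma**2)

# What I expected for n i.i.d. Gaussians: one normalization factor per term.
lik_iid = math.prod(norm * math.exp(-ei / (2.0 * sigma**2)) for ei in e)

# The likelihood as written above: norm**n inside every factor of the product.
lik_written = math.prod(norm**n * math.exp(-ei / (2.0 * sigma**2)) for ei in e)

# The two versions differ by a factor of norm**(n * (n - 1)).
print(lik_iid, lik_written)
```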

Now I combine this term with a uniform distribution for outliers, so that the pdf for the residual $e$ of a single point match can be written as:

$\text{Pr}(e) = \gamma \cdot \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{e}{2\sigma^2}\right) + (1-\gamma)\cdot\frac{1}{v}$,

where $\gamma$ and $v$ are two parameters that can be computed.
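As code, this per-correspondence mixture density might look like the following sketch; the parameter values in the call are illustrative assumptions, since in practice $\gamma$, $v$, and $\sigma$ would be computed:

```python
import math

def mixture_pdf(e, sigma, gamma, v):
    # Gaussian inlier term mixed with a uniform outlier term 1/v,
    # i.e. the per-point pdf Pr(e) above.
    gauss = (1.0 / math.sqrt(2.0 * math.pi * sigma**2)) * math.exp(-e / (2.0 * sigma**2))
    return gamma * gauss + (1.0 - gamma) / v

# Illustrative values only.
print(mixture_pdf(0.4, sigma=0.5, gamma=0.8, v=10.0))
```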

Now I use the negative log likelihood $-L$ as a cost function over all point correspondences, resulting in

$-L = - \sum_{i=1}^{n} \log\!\left( \gamma \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{n} \exp\!\left(-\frac{e_i}{2\sigma^2}\right) + (1-\gamma)\cdot\frac{1}{v} \right)$.
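For comparison, here is a sketch of this cost function with and without the exponent $n$ on the normalization factor; all numeric values below are made up:

```python
import math

def neg_log_likelihood(residuals, sigma, gamma, v, with_exponent):
    # Sum of -log of the mixture pdf over all correspondences.
    # with_exponent=True reproduces the (1/sqrt(2*pi*sigma^2))**n factor
    # from the derivation; False uses the plain per-point normalization.
    n = len(residuals)
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma**2)
    k = n if with_exponent else 1
    return -sum(
        math.log(gamma * norm**k * math.exp(-e / (2.0 * sigma**2)) + (1.0 - gamma) / v)
        for e in residuals
    )

# With norm < 1 the exponent shrinks the Gaussian term, so the two costs differ.
print(neg_log_likelihood([0.4, 1.1], 0.5, 0.8, 10.0, with_exponent=False))
print(neg_log_likelihood([0.4, 1.1], 0.5, 0.8, 10.0, with_exponent=True))
```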

My question is now: why does the exponent $n$ occur both in $\text{Pr}(\mathbf{D}|\mathbf{M})$ and in the summed $-L$? Could it be that there is actually a mistake in $\text{Pr}(\mathbf{D}|\mathbf{M})$?

Because I thought that for $n$ independent and identically normally distributed variables there should be no exponent $n$ on the normalization factor inside the product for $\text{Pr}(\mathbf{D}|\mathbf{M})$, since the product over $i$ already multiplies $n$ such factors.

And how can I prove that $\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{n}$ is valid in the computation of $-L$?