Convergence of the Expectation-Maximization algorithm


Studying the Expectation-Maximization algorithm, I noticed that I couldn't find any proof that the parameters actually converge, nor that the limit is a local extremum of the likelihood (or even just a stationary point, where the gradient vanishes). The proofs I found argue that the likelihood increases at each step, and since it is a monotonically increasing sequence bounded above, it must converge. But my two questions remain unanswered:

  1. Why does the convergence of the values of the likelihood function imply the convergence of the parameters?
  2. Assuming the parameters converge, why do they converge to a local extremum of the likelihood function?
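To make the monotonicity claim concrete (this is only a numerical illustration, not a proof), here is a sketch of EM for a hypothetical two-component 1D Gaussian mixture, checking that the observed log-likelihood never decreases across iterations even while the parameter iterates are still moving:

```python
import numpy as np

# Toy illustration of EM's monotone likelihood property (not a proof).
# Data: a hypothetical mixture of two 1D Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

# Arbitrary initial parameters: mixing weights, means, standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def component_densities(x, pi, mu, sigma):
    # pi_k * N(x_i | mu_k, sigma_k^2), shape (n, 2)
    z = (x[:, None] - mu) / sigma
    return pi * np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))

lls = []
for _ in range(50):
    # E-step: posterior responsibilities r_{ik}
    comp = component_densities(x, pi, mu, sigma)
    r = comp / comp.sum(axis=1, keepdims=True)

    # M-step: closed-form updates for a Gaussian mixture
    n_k = r.sum(axis=0)
    pi = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

    # Observed-data log-likelihood after the update
    lls.append(np.log(component_densities(x, pi, mu, sigma).sum(axis=1)).sum())

# The log-likelihood sequence is monotone nondecreasing (up to rounding),
# which is exactly the bounded-monotone argument from the proofs I found.
assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
```

Note that this only shows the *likelihood values* converging; it says nothing by itself about whether the parameter iterates `(pi, mu, sigma)` converge, which is precisely what my questions are about.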

I would love to see a proof; even a good reference would do. Thanks.