In his book *Mathematical Statistics*, Wilks uses, several times, an argument that is a bit obscure to me (I am referring to page 411 of the book).
Basically, we have a sequence of roots of the maximum likelihood equation such that $\hat \theta_n \to \theta_0$ almost surely, and then he does a Taylor expansion of the form $$\sum \log f(x_i \mid \theta_0) = \sum \log f(x_i \mid \hat \theta_n) + \frac{(\theta_0 - \hat \theta_n)^2}{2} \sum \frac{d^2}{d\theta^2} \log f(x_i \mid \theta^*),$$ with $|\theta_0 - \theta^*| < |\theta_0 - \hat \theta_n|$, where the first-order term of the expansion disappears since $\hat \theta_n$ is a root of the likelihood equation. Then he states:
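To spell out the step being skipped (this is a sketch as I understand it, not Wilks's exact notation): the full second-order expansion of the log-likelihood about $\hat \theta_n$, evaluated at $\theta_0$, is

$$\sum \log f(x_i \mid \theta_0) = \sum \log f(x_i \mid \hat \theta_n) + (\theta_0 - \hat \theta_n) \sum \frac{d}{d\theta} \log f(x_i \mid \hat \theta_n) + \frac{(\theta_0 - \hat \theta_n)^2}{2} \sum \frac{d^2}{d\theta^2} \log f(x_i \mid \theta^*),$$

and the middle term is zero whenever $\hat \theta_n$ is a root of the likelihood equation $\sum \frac{d}{d\theta} \log f(x_i \mid \theta) = 0$, which gives the displayed equality.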
For every $\epsilon > 0$ there exists $n_{\epsilon}$ such that for all $n \ge n_{\epsilon}$, $P[\text{the above equality holds}] > 1 - \epsilon$.
I think this should follow from the almost sure convergence, but how can I prove it?