I'm proving that the MLE is asymptotically normal, roughly following the steps in this question. My problem now is to prove that
$$Z_n' = \frac 1 n \frac{d^2}{d\theta^2} \log \ell(\theta^*\mid x) \text{ converges to } I(\theta_0)$$
Here $\theta^*$ is a point lying between the true value $\theta_0$ of the parameter and the solution $\hat \theta = \hat \theta(X_1,\ldots,X_n)$ of the likelihood equation, and $\ell$ is the likelihood function.
I clearly have that $\theta^* \to \theta_0$ almost surely. Since most texts cite Slutsky's theorem, I would like to apply what my notes call the Slutsky–Fréchet theorem, namely:
If $X_n \to X$ in probability and $g$ is continuous, then $g(X_n) \to g(X)$ in probability.
So I could take $g(\theta) = \frac{d^2}{d\theta^2} \log \ell(\theta\mid x)$; in fact, to guarantee a unique solution of the likelihood equation, I impose that this function is uniformly continuous in $\theta$. The problem is that $g$ then depends on $n$, so I cannot obtain the desired limit $I(\theta_0)$.
How do I solve this?
My thoughts
Apparently, this is explained further in Wilks's Mathematical Statistics, Section 4.3.8. Would someone care to give a comprehensive treatment of my situation?
Some things in that proof surprise me as well. What are $g_l, g_u$? If $g$ is not continuous, does it still need to have a least upper bound and a greatest lower bound? Also, how can a stochastic process have several variables (I read on Wikipedia that its random variables must be defined on the same probability space)?
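(Not part of the question itself, but to make the target concrete:) the claimed convergence is easy to check numerically. Here is a minimal sketch, assuming a hypothetical Poisson($\theta$) model, where $\log f(x\mid\theta) = x\log\theta - \theta - \log x!$, so $\frac{d^2}{d\theta^2}\log f(x\mid\theta) = -x/\theta^2$ and $I(\theta) = 1/\theta$. Evaluating the averaged second derivative at the MLE $\hat\theta = \bar x$ (which, like $\theta^*$, converges a.s. to $\theta_0$) lands near $-I(\theta_0)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0                 # true parameter; I(theta0) = 1/theta0 = 0.5
n = 100_000
x = rng.poisson(theta0, size=n)

theta_hat = x.mean()         # Poisson MLE; converges a.s. to theta0

# (1/n) * sum_i d^2/dtheta^2 log f(X_i | theta), evaluated at theta_hat
zn = np.mean(-x / theta_hat**2)

print(zn)                    # approximately -I(theta0) = -0.5
```

Note the sign: the averaged second derivative converges to $-I(\theta_0)$, which is the form the answer below arrives at as well.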

Switching up the notation a little here, so that $\theta_0$ and $\theta^*$ are substituted in only after we take the derivative.
Write out $Z_n$ as: \begin{equation} Z_n = \frac{1}{n}\sum_{i=1}^n \left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \end{equation}
For now, let's treat $\theta^*$ as a constant; because the $X_i$ are sampled from the true $\theta_0$, this does not affect the argument until the end.
Using the law of large numbers we have:
\begin{equation} \frac{1}{n}\sum_{i=1}^n \left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \overset{a.s.}{\rightarrow} E_{\theta_0}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*}\right] \end{equation}
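As a quick numerical sanity check of this step (my own illustration, again assuming a hypothetical Poisson($\theta$) model, for which $\frac{d^2}{d\theta^2}\log f(x\mid\theta) = -x/\theta^2$): with $\theta^*$ held fixed, the sample average settles at the expectation $E_{\theta_0}[-X/(\theta^*)^2] = -\theta_0/(\theta^*)^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, theta_star = 2.0, 1.5       # true parameter and a fixed evaluation point
n = 200_000
x = rng.poisson(theta0, size=n)

# (1/n) * sum_i d^2/dtheta^2 log f(X_i | theta), evaluated at the fixed theta_star
avg = np.mean(-x / theta_star**2)
expected = -theta0 / theta_star**2  # E_{theta0}[-X / theta_star^2]

print(avg, expected)
```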
Using the dominated convergence property of conditional expectation, we can rearrange the expectation to condition on a random $\theta^*$:
\begin{equation} E_{\theta_0\mid\theta^*}\!\left[\frac{1}{n}\sum_{i=1}^n \left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \,\middle|\, \theta^*\right] \overset{a.s.}{\rightarrow} E_{\theta_0\mid\theta^*}\!\left[ E_{\theta_0\mid\theta^*}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \,\middle|\, \theta^*\right] \,\middle|\, \theta^*\right] \end{equation}
Noting that:
\begin{equation} E_{\theta_0\mid\theta^*}\!\left[ E_{\theta_0\mid\theta^*}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \,\middle|\, \theta^*\right] \,\middle|\, \theta^*\right] = E_{\theta_0\mid\theta^*}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \,\middle|\, \theta^*\right] \end{equation}
is a random function of $\theta^*$, and is almost surely continuous in $\theta^*$.
Using your fact that $\theta^*\overset{a.s.}{\rightarrow}\theta_0$, by the continuous mapping theorem:
\begin{equation} E_{\theta_0\mid\theta^*}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*} \,\middle|\, \theta^*\right] \overset{a.s.}{\rightarrow} E_{\theta_0\mid\theta_0}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta_0} \,\middle|\, \theta_0\right] \end{equation}
Given that the convergence is a.s., we are conditioning on a set of probability one, and can remove the conditioning to get:
\begin{equation} E_{\theta_0\mid\theta_0}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta_0} \,\middle|\, \theta_0\right] = E_{\theta_0}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta_0}\right] \end{equation}
which is, up to a minus sign, the Fisher information $I(\theta_0)$.
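For a concrete instance (my own example, assuming a Poisson($\theta$) model with $\log f(x\mid\theta) = x\log\theta - \theta - \log x!$), this limiting expectation can be computed explicitly:
\begin{equation} E_{\theta_0}\!\left[\left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*}\right] = E_{\theta_0}\!\left[-\frac{X_i}{(\theta^*)^2}\right] = -\frac{\theta_0}{(\theta^*)^2}, \end{equation}
which is continuous in $\theta^*$ and, at $\theta^* = \theta_0$, equals $-1/\theta_0 = -I(\theta_0)$: the Fisher information up to sign.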
Edit: I still need to show that the dominated convergence theorem applies.
Let $X_n = \frac{1}{n}\sum_{i=1}^n \left.\frac{d^2}{d\theta^2}\log f(X_i\mid\theta)\right|_{\theta^*}$.
We need to find $M$ such that $\|X_n\|_{L_1}\leq \|M\|_{L_1}$ almost surely under $P_{\theta_0}$. Take $(x',\theta')$ to be a point attaining $\sup_{x,\theta^*} \left\|\left.\frac{d^2}{d\theta^2}\log f(x\mid\theta)\right|_{\theta^*}\right\|_{L_1}$ (assuming this supremum is finite and attained).
Then, setting $M = \left.\frac{d^2}{d\theta^2}\log f(x'\mid\theta)\right|_{\theta'}$, we have:
\begin{equation} \|X_n\|_{L_1} \leq \frac{1}{n}\sum_{i=1}^n \|M\|_{L_1} = \frac{n}{n}\|M\|_{L_1} = \|M\|_{L_1} \end{equation}
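A caveat (my own sketch): this global supremum need not be finite in general. Since $\theta^* \overset{a.s.}{\rightarrow} \theta_0$, it suffices to dominate on a compact neighborhood of $\theta_0$; in the hypothetical Poisson($\theta$) model this is explicit: for all $\theta \geq c > 0$,
\begin{equation} \left|\frac{d^2}{d\theta^2}\log f(x\mid\theta)\right| = \frac{x}{\theta^2} \leq \frac{x}{c^2} =: M(x), \end{equation}
and $E_{\theta_0}[M(X_i)] = \theta_0/c^2 < \infty$, so $M$ is an integrable dominating function on that neighborhood.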