Let $X \sim \mathrm{Gamma}(\alpha, \theta),$ where $$f(x) = \frac {x^{\alpha - 1} e^{-\frac x \theta}} {\theta^{\alpha}\Gamma(\alpha)}.$$ The log-likelihood function can be shown to be $$l(\alpha, \theta) = -n\alpha\ln\theta - n\ln[\Gamma(\alpha)] + (\alpha - 1)\sum^n_{i = 1} \ln x_i - \theta^{-1}\sum^n_{i = 1} x_i.$$
Now, suppose we reparameterise $X$ by introducing $$\mu = \alpha\theta$$ and the log-likelihood function can be shown to be $$l(\alpha, \mu) = n\alpha\ln\alpha -n\alpha\ln\mu - n\ln[\Gamma(\alpha)] + (\alpha - 1)\sum^n_{i = 1} \ln x_i - \alpha\mu^{-1}\sum^n_{i = 1} x_i.$$
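As a quick sanity check on the reparameterised log-likelihood (not part of the question itself), the substitution $\theta = \mu/\alpha$ can be verified symbolically. Here `S` and `T` are symbols I introduce to stand for the sufficient statistics $\sum x_i$ and $\sum \ln x_i$:

```python
import sympy as sp

alpha, theta, mu, n, S, T = sp.symbols('alpha theta mu n S T', positive=True)

# Original log-likelihood, with S = sum(x_i) and T = sum(ln x_i).
l_orig = (-n*alpha*sp.log(theta) - n*sp.log(sp.gamma(alpha))
          + (alpha - 1)*T - S/theta)

# Reparameterised log-likelihood in (alpha, mu).
l_new = (n*alpha*sp.log(alpha) - n*alpha*sp.log(mu)
         - n*sp.log(sp.gamma(alpha)) + (alpha - 1)*T - alpha*S/mu)

# Substituting theta = mu/alpha into the original should recover l_new.
diff = sp.expand_log(l_orig.subs(theta, mu/alpha) - l_new)
print(sp.simplify(diff))  # 0
```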
I am interested in deriving the Fisher information matrix for the reparameterised Gamma distribution which was given to me as $$\begin{aligned} I(\alpha, \mu) & = - \begin{bmatrix} \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\alpha^2} \right) & \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\alpha\partial\mu} \right) \\ \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\alpha\partial\mu} \right) & \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2} \right) \end{bmatrix} \\ & = \begin{bmatrix} n\alpha^{-1}\left(\alpha\frac {d\psi(\alpha)} {d\alpha} - 1\right) & 0 \\ 0 & n\alpha\mu^{-2} \end{bmatrix}, \end{aligned}$$ where $\psi(\alpha)$ is the digamma function.
In particular, I am having issues deriving $$I_{(2, 2)}(\alpha, \mu) = n\alpha\mu^{-2}.$$ The relevant first- and second-order derivatives of the reparameterised log-likelihood are straightforward to obtain: $$\frac {\partial l(\alpha, \mu)} {\partial\mu} = -n\alpha\mu^{-1} + \alpha\mu^{-2}\sum^n_{i = 1} x_i$$ and $$\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2} = n\alpha\mu^{-2} - 2\alpha\mu^{-3}\sum^n_{i = 1} x_i.$$
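These two derivatives can also be checked symbolically; the sketch below (my own verification, not from the original derivation) again uses `S` and `T` as stand-ins for $\sum x_i$ and $\sum \ln x_i$:

```python
import sympy as sp

alpha, mu, n, S, T = sp.symbols('alpha mu n S T', positive=True)

# Reparameterised log-likelihood with S = sum(x_i), T = sum(ln x_i).
l = (n*alpha*sp.log(alpha) - n*alpha*sp.log(mu) - n*sp.log(sp.gamma(alpha))
     + (alpha - 1)*T - alpha*S/mu)

d1 = sp.diff(l, mu)       # first derivative w.r.t. mu
d2 = sp.diff(l, mu, 2)    # second derivative w.r.t. mu

# Both differences simplify to zero, confirming the expressions above.
print(sp.simplify(d1 - (-n*alpha/mu + alpha*S/mu**2)))      # 0
print(sp.simplify(d2 - (n*alpha/mu**2 - 2*alpha*S/mu**3)))  # 0
```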
However, I am unsure how taking the expectation of the last expression above gives $-n\alpha\mu^{-2}$, which (after the sign change in the definition of $I$) would reconcile with the result I need.
Any intuitive explanations will be greatly appreciated!
I am answering my own question, since I figured it out and no one else was able to assist; it turned out to be rather simple!
In particular, after deriving $\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2}$, take expectations on both sides. The key observation is that $\mathbb{E}(x_i) = \alpha\theta = \mu$, so $\mathbb{E}\left(\sum^n_{i = 1} x_i\right) = n\mu$. Then $$\begin{aligned} \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2} \right) & = \mathbb{E}\left(n\alpha\mu^{-2} - 2\alpha\mu^{-3}\sum^n_{i = 1} x_i \right) \\ & = n\alpha\mu^{-2} - 2\alpha\mu^{-3}\mathbb{E}\left(\sum^n_{i = 1} x_i \right) \\ & = n\alpha\mu^{-2} - 2\alpha\mu^{-3}n\mu \\ & = -n\alpha\mu^{-2}, \end{aligned}$$ and negating this expectation gives $I_{(2, 2)}(\alpha, \mu) = n\alpha\mu^{-2}$, the desired result.
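The expectation can also be confirmed by simulation. The sketch below (my own check, with arbitrarily chosen test values for $\alpha$, $\mu$, and $n$) draws many Gamma samples, evaluates the second derivative at each, and compares the Monte Carlo average against $-n\alpha\mu^{-2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, mu, n = 2.5, 4.0, 50   # arbitrary test values (assumption, for illustration)
theta = mu / alpha            # scale under the original parameterisation
reps = 200_000

# Simulate S = sum(x_i) for many independent samples of size n.
S = rng.gamma(shape=alpha, scale=theta, size=(reps, n)).sum(axis=1)

# Second derivative of the log-likelihood evaluated at each simulated S.
d2 = n*alpha/mu**2 - 2*alpha*S/mu**3

# The Monte Carlo mean should be close to -n*alpha/mu**2.
print(np.mean(d2), -n*alpha/mu**2)
```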