For i.i.d. random variables $X_1, \dots, X_n$ whose distribution is determined by two parameters, I have found the MLEs. They are $\hat{\mu} = \sum_{i=1}^n x_i/n$, and $$ \hat{\lambda} = \frac{n}{2} \left(\sum_{i=1}^n \frac{(x_i - \hat{\mu})^2}{2\,\hat{\mu}^2 x_i}\right)^{-1},$$ where we plug in the MLE $\hat{\mu}$.
It is hinted that $E[X] = \mu$, $E[1/X] = 1/\mu + 1/\lambda$, $\operatorname{Var}(X) = \mu^3/\lambda$, $\operatorname{Var}(1/X) = \frac{1}{\mu \lambda} + \frac{2}{\lambda^2}$, and $\operatorname{Cov}(X, 1/X) = -\mu/\lambda$.
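(These hinted moments match the inverse Gaussian (Wald) distribution, so, under that assumption, they can be sanity-checked by simulation; the parameter values $\mu = 2$, $\lambda = 3$ below are arbitrary choices for illustration.)

```python
import numpy as np

# Assumption: the underlying law is inverse Gaussian (Wald), which is
# consistent with the hinted moments.  numpy's wald(mean, scale) uses
# exactly the (mu, lambda) parameterization.
rng = np.random.default_rng(0)
mu, lam = 2.0, 3.0
x = rng.wald(mu, lam, size=1_000_000)

print(np.mean(x))              # near mu = 2
print(np.mean(1 / x))          # near 1/mu + 1/lam = 5/6
print(np.var(x))               # near mu**3 / lam = 8/3
print(np.var(1 / x))           # near 1/(mu*lam) + 2/lam**2 = 7/18
print(np.cov(x, 1 / x)[0, 1])  # near -mu/lam = -2/3
```

Note that the covariance hint is actually distribution-free given the first two: $\operatorname{Cov}(X, 1/X) = E[X \cdot 1/X] - E[X]\,E[1/X] = 1 - \mu(1/\mu + 1/\lambda) = -\mu/\lambda$.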
How do I show that these estimators converge in probability to the true parameters? For $\hat{\mu}$, I believe it follows easily from Chebyshev's inequality, since $\hat{\mu}$ has mean equal to the true $\mu$ and a variance that vanishes as $n \to \infty$. To use the same strategy for $\hat{\lambda}$, I would need the mean and variance of that ugly-looking expression above, and I don't see how to compute them. Is this really the way to go, or is there another strategy?
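For intuition, the claimed consistency is easy to check empirically. The sketch below assumes the underlying law is inverse Gaussian (Wald), which matches the hinted moments, and uses arbitrary true values $\mu = 2$, $\lambda = 3$; `lam_hat` implements the $\hat{\lambda}$ formula from the question after cancelling the two factors of $2$.

```python
import numpy as np

# Assumption: X ~ inverse Gaussian(mu, lam); numpy's wald(mean, scale)
# uses this parameterization directly.
rng = np.random.default_rng(1)
mu, lam = 2.0, 3.0

def mle(x):
    mu_hat = x.mean()
    # (n/2) * (sum((x_i - mu_hat)^2 / (2 mu_hat^2 x_i)))^{-1}
    # simplifies to n / sum((x_i - mu_hat)^2 / (mu_hat^2 x_i)):
    lam_hat = len(x) / np.sum((x - mu_hat) ** 2 / (mu_hat**2 * x))
    return mu_hat, lam_hat

for n in (100, 10_000, 1_000_000):
    mu_hat, lam_hat = mle(rng.wald(mu, lam, size=n))
    print(n, mu_hat, lam_hat)  # both estimates approach (2, 3) as n grows
```

This is only a plausibility check, of course, not the requested proof.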
I don't think I have enough of your problem to answer the question directly, but the sections on M-estimators in Huber's Robust Statistics or Keener's Theoretical Statistics might help.
For example, Keener characterizes the maximum likelihood estimator as a zero of the derivative of the log-likelihood, and carries out the proofs via a Taylor expansion around that point.
(They use the mean value theorem form of Taylor's theorem, which holds with exact equality: $f(y) = f(x) + (y - x) f'(c)$ for some $c \in [x, y]$. Then $c$ is sandwiched between $x$ and $y$, so if $y \rightarrow x$ in probability, then $c \rightarrow x$ in probability as well.)
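In one dimension, the argument runs roughly as follows. This is only a heuristic sketch, not Keener's exact statement; in particular, the regularity conditions needed to control $\psi_n'$ near $\theta_0$ are swept under the rug.

```latex
% Let \psi_n(\theta) = \frac{1}{n} \sum_{i=1}^n
%   \frac{\partial}{\partial\theta} \log f(X_i; \theta)
% be the averaged score, so the MLE solves \psi_n(\hat\theta_n) = 0.
% Expanding around the true \theta_0 via the mean value theorem:
0 = \psi_n(\hat\theta_n)
  = \psi_n(\theta_0) + (\hat\theta_n - \theta_0)\, \psi_n'(c_n),
\qquad c_n \text{ between } \theta_0 \text{ and } \hat\theta_n.
% Hence \hat\theta_n - \theta_0 = -\psi_n(\theta_0) / \psi_n'(c_n).
% By the LLN, \psi_n(\theta_0) \to E_{\theta_0}[\partial_\theta \log f] = 0,
% and under suitable regularity \psi_n'(c_n) stays bounded away from 0
% (near -I(\theta_0)), giving \hat\theta_n \to \theta_0 in probability.
```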
Hope that helps!