Let's consider an i.i.d. sample from the distribution $$f(x, \theta) = \theta(\theta+1)x^{\theta-1}(1-x)\, 1_{(0, 1)}(x), \qquad \theta > 0.$$
Let's derive the asymptotic variance of the MME (method-of-moments) estimator.
Since $E_\theta X = \frac{\theta}{\theta+2}$, matching $\bar X = E_\theta X$ and solving for $\theta$ gives the MME $$\hat{\theta} = \frac{2\bar X}{1 - \bar X}$$
Let's define the function $g(x) := \frac{2x}{1-x}$; then $g'(x) = \frac{2}{(1-x)^2}$.
We know from the delta method that $\sqrt{n}(g(\bar X) - g(EX)) \xrightarrow{d} N(0, Var(X)\, g'(EX)^2)$, since $\sqrt n (\bar X - EX) \xrightarrow{d} N(0, Var X)$.
So the asymptotic variance of MME is given by:
$$VarX\cdot (g'(EX))^2 = \frac{\theta(\theta+2)^2}{2(\theta +3)}$$
I'm skipping the calculation of $Var(X) = E[X^2] - (E[X])^2$, since it is nothing more than computing two integrals; the result is $Var(X) = \frac{2\theta}{(\theta+2)^2(\theta+3)}$.
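The skipped integrals are easy to verify symbolically. Note that $f$ is exactly the $\mathrm{Beta}(\theta, 2)$ density. Here is a quick sketch with sympy (variable names are my own), checking both $Var(X)$ and the delta-method variance:

```python
# Symbolic check of the moments and the delta-method asymptotic variance
# for f(x, theta) = theta*(theta+1)*x^(theta-1)*(1-x) on (0, 1).
import sympy as sp

x, th = sp.symbols("x theta", positive=True)
f = th * (th + 1) * x**(th - 1) * (1 - x)

EX = sp.integrate(x * f, (x, 0, 1), conds="none").simplify()      # theta/(theta+2)
EX2 = sp.integrate(x**2 * f, (x, 0, 1), conds="none").simplify()
VarX = sp.simplify(EX2 - EX**2)                # 2*theta/((theta+2)^2*(theta+3))

# Delta method with g(x) = 2x/(1-x), the map from E[X] back to theta:
g = 2 * x / (1 - x)
gprime = sp.diff(g, x)
avar = sp.simplify(VarX * gprime.subs(x, EX) ** 2)
print(EX, VarX, avar)                          # avar = theta*(theta+2)^2/(2*(theta+3))
```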
What I want to do is calculate the asymptotic relative efficiency of two estimators; it is defined as
$$e(\theta, T_1, T_2) = \frac{\sigma_2^2(\theta)}{\sigma_1^2(\theta)}$$
where $\sigma_i^2(\theta) := \lim_{n \rightarrow \infty} n Var T_i$. So in my case $\sigma_{\text{MME}}^2(\theta) = \lim_{n \rightarrow \infty} n Var(\hat{\theta}) = \infty$. So for any other estimator $T_2$ with $\sigma_2^2(\theta)$ finite, the asymptotic relative efficiency will equal $e(\theta, T_1, T_2) = 0$. Do I understand this correctly? How should I interpret this result?
EDIT
Thank you very much for the answer, which motivated one more question about something I don't understand. I want to do exactly the same thing for the maximum likelihood estimator (i.e., derive its asymptotic normality):
$$L(X, \theta) = \theta^n (\theta+1)^n \Big(\prod_{i=1}^n x_i\Big)^{\theta-1} \prod_{i=1}^n (1-x_i) \prod_{i=1}^n 1_{(0,1)}(x_i)$$ so, on the set where all $x_i \in (0,1)$, $$\ln L(X, \theta) = n\ln\theta + n\ln(\theta+1) + (\theta - 1)\sum_{i=1}^n \ln x_i + \sum_{i=1}^n \ln(1-x_i)$$ $$\frac{\partial \ln L(X, \theta)}{\partial \theta} = \frac{n}{\theta} + \frac{n}{\theta+1} + \sum_{i=1}^n \ln x_i$$ $$\frac{\partial^2\ln L(X, \theta)}{\partial \theta^2} = -\frac{n}{\theta^2} - \frac{n}{(\theta + 1)^2}$$
The Fisher information is given as $$I_n(\theta) = -E\left[-\frac{n}{\theta^2} - \frac{n}{(\theta + 1)^2}\right] = \frac{n}{\theta^2} + \frac{n}{(\theta + 1)^2} = \frac{n(\theta+1)^2 + n\theta^2}{\theta^2(\theta+1)^2}$$
The variance of the MLE is then approximately $\frac{1}{I_n(\theta)} = \frac{\theta^2(\theta+1)^2}{n(\theta+1)^2 + n\theta^2}$.
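As a sanity check on $I_n(\theta)$, one can verify the information equality $E[(\partial_\theta \ln f(X,\theta))^2] = \frac{1}{\theta^2} + \frac{1}{(\theta+1)^2}$ at a concrete value, say $\theta = 2$ (an arbitrary choice of mine), where the right-hand side is $\frac14 + \frac19 = \frac{13}{36}$:

```python
# Verify E[score^2] = 1/theta^2 + 1/(theta+1)^2 at theta = 2,
# where the single-observation score is 1/theta + 1/(theta+1) + log(x).
import sympy as sp

x, th = sp.symbols("x theta", positive=True)
logf = sp.log(th) + sp.log(th + 1) + (th - 1) * sp.log(x) + sp.log(1 - x)
score = sp.diff(logf, th)                    # 1/theta + 1/(theta+1) + log(x)

f2 = (th * (th + 1) * x**(th - 1) * (1 - x)).subs(th, 2)   # 6*x*(1-x)
info1 = sp.integrate(score.subs(th, 2) ** 2 * f2, (x, 0, 1))
print(sp.simplify(info1))                    # expect 13/36
```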
And now, with this definition, $\sigma_{\text{MLE}}^2(\theta) = Var(\sqrt{n}\,T_{\text{MLE}}) = \frac{\theta^2(\theta+1)^2}{n(\theta+1)^2 + n\theta^2}$. And here I have the same problem: the variance depends on $n$, so the asymptotic relative efficiency will always be $\infty$. Am I thinking correctly?
You've used the delta method to show that $\sqrt{n}(\hat{\theta} - \theta)$ has asymptotic variance $\frac{\theta(\theta+2)^2}{2(\theta+3)}$; thus, this quantity is $\lim_{n \to \infty} n \text{Var}(\hat{\theta}) = \lim_{n \to \infty} \text{Var}(\sqrt{n}\, \hat{\theta})$, which is finite, not $\infty$. The same cancellation answers your edit: $Var(\sqrt{n}\, T_{\text{MLE}}) = n Var(T_{\text{MLE}}) \approx \frac{n}{I_n(\theta)} = \frac{\theta^2(\theta+1)^2}{(\theta+1)^2 + \theta^2}$, in which the $n$'s cancel, so $\sigma_{\text{MLE}}^2(\theta)$ is finite as well.
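A quick Monte Carlo sketch of this finite limit (my own check, not part of the argument above): since $f(x,\theta)$ is the $\mathrm{Beta}(\theta, 2)$ density, we can sample from it directly with NumPy. Here $\theta = 2$ and $n = 200$ are arbitrary choices, and the predicted limit is $\theta(\theta+2)^2/(2(\theta+3)) = 3.2$.

```python
# Monte Carlo check that n * Var(theta_hat) approaches a finite limit.
# f(x, theta) is the Beta(theta, 2) density, so we sample it directly;
# theta = 2 and n = 200 are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 200, 20_000

# reps independent samples of size n, reduced to their sample means
xbar = rng.beta(theta, 2, size=(reps, n)).mean(axis=1)
theta_hat = 2 * xbar / (1 - xbar)     # MME: solve xbar = theta/(theta+2)

empirical = n * theta_hat.var()
predicted = theta * (theta + 2) ** 2 / (2 * (theta + 3))  # = 3.2 here
print(empirical, predicted)
```

With these settings the empirical value lands close to 3.2 and stays bounded as $n$ grows, rather than diverging.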