Let $\theta>1$ be an unknown parameter and let $X_1,\ldots,X_n$ be a random sample (i.e. i.i.d.) from the density $f_\theta$, where $$f_{\theta}(x)=x\theta^{-\frac{x^2}{2}}\log(\theta)\mathbb{1}_{(0, \infty)}(x).$$ We are given that $\mathbb{E}_{\theta}[X_1^2]=\frac{2}{\log(\theta)}$ and $\mathbb{E}_{\theta}[X_1^4]=\frac{8}{(\log \theta)^2}$.
First, I was asked to compute the maximum likelihood estimator of $\theta$, call it $\hat{\theta}_n$. This isn't hard; I got $$\hat{\theta}_n=e^{\frac{2n}{\sum_{i=1}^n X_i^2}}.$$ What I don't know is how to prove whether this estimator is efficient, i.e. whether the Cramer-Rao bound is attained. To do so I need to find the expected value and the variance of my estimator. But how would I do this? I don't know the distribution of $\sum_{i=1}^n X_i^2$, so I can't even get started using the so-called law of the unconscious statistician. So, how is this supposed to be done?
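(For what it's worth, a quick numerical sanity check does agree with the closed form above. This sketch assumes an example value $\theta=4$; since the CDF of $X$ works out to $1-\theta^{-x^2/2}$, samples can be drawn by inverse transform.)

```python
import math, random

random.seed(0)
theta_true = 4.0          # assumed example value, theta > 1
a = math.log(theta_true)  # a = ln(theta)

# X has CDF F(x) = 1 - theta^(-x^2/2) = 1 - exp(-(a/2) x^2) for x > 0,
# so inverse-transform sampling gives X = sqrt(-2 ln(1 - U) / a).
xs = [math.sqrt(-2.0 * math.log(1.0 - random.random()) / a) for _ in range(500)]
s = sum(x * x for x in xs)
n = len(xs)

theta_hat = math.exp(2.0 * n / s)   # the closed-form MLE above

# Log-likelihood, dropping the theta-free term sum(log x_i)
def loglik(t):
    return n * math.log(math.log(t)) - 0.5 * s * math.log(t)

# A crude grid search over theta lands next to the closed form.
grid = [1.001 + 0.001 * k for k in range(20000)]
theta_grid = max(grid, key=loglik)
print(theta_hat, theta_grid)
```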
Note that this is a regular one-parameter exponential family of distributions. Writing $\ln\theta = a$ (say), it is easy to see that $X^2$ has an exponential distribution with rate $a/2$ when $X$ has the pdf $f_{\theta}$; consequently $\sum_{i=1}^n X_i^2$ has a Gamma distribution with shape $n$ and rate $a/2$, which answers your question about its distribution.
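A quick simulation confirms this (assuming an example value $\theta=4$; the sampler uses the CDF $F(x)=1-\theta^{-x^2/2}$, which follows from the pdf):

```python
import math, random

random.seed(0)
theta = 4.0           # assumed example value, theta > 1
a = math.log(theta)   # a = ln(theta)

# CDF of X is F(x) = 1 - exp(-(a/2) x^2) for x > 0, so by inverse transform
# X = sqrt(-2 ln(1 - U) / a) with U ~ Uniform(0, 1).
def sample_x():
    return math.sqrt(-2.0 * math.log(1.0 - random.random()) / a)

n = 200_000
x2 = [sample_x() ** 2 for _ in range(n)]
mean_x2 = sum(x2) / n
mean_x4 = sum(v * v for v in x2) / n

# X^2 ~ Exponential(rate a/2): mean 2/a and second moment 8/a^2, matching
# the stated moments E[X^2] = 2/ln(theta) and E[X^4] = 8/(ln theta)^2.
print(mean_x2, 2.0 / a)
print(mean_x4, 8.0 / a ** 2)
```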
The likelihood for $x_1,\ldots,x_n>0$ is
$$L(\theta)=\left(\prod_{i=1}^n x_i\right)\theta^{-\sum_{i=1}^n x_i^2/2}(\ln \theta)^n \quad,\,\theta>1$$
So the log-likelihood is
$$\ell(\theta)=n\ln(\ln\theta)-\left(\frac12\sum_{i=1}^n x_i^2\right)\ln\theta + \sum_{i=1}^n \ln x_i$$
The score function is therefore
$$\ell'(\theta)=\frac{n}{\theta\ln\theta}-\frac1{2\theta}\sum_{i=1}^n x_i^2$$
Equivalently,
$$\ell'(\theta)=-\frac{n}{\theta}\left[\frac1{2n}\sum_{i=1}^n x_i^2 - \frac1{\ln\theta}\right] $$
The last equation is of the form $\ell'(\theta)=k(\theta)\left(T(\boldsymbol x)-g(\theta)\right)$, which is exactly the equality condition in the Cramer-Rao inequality. It shows that only $g(\theta)=\frac1{\ln \theta}$, and affine functions of it, admit unbiased estimators whose variance attains the Cramer-Rao bound; here $T(\boldsymbol x)=\frac1{2n}\sum_{i=1}^n x_i^2$ is the attaining estimator of $\frac1{\ln\theta}$. Since $\theta$ itself is not an affine function of $\frac1{\ln\theta}$, by your definition of efficiency there does not exist any efficient estimator of $\theta$.
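For reference, the Fisher information itself can be computed as the variance of the score, using the moments given in the question:
$$I_n(\theta)=\operatorname{Var}_\theta\left(\ell'(\theta)\right)=\frac{1}{4\theta^2}\operatorname{Var}_\theta\left(\sum_{i=1}^n X_i^2\right)=\frac{n}{4\theta^2}\left(\frac{8}{(\ln\theta)^2}-\frac{4}{(\ln\theta)^2}\right)=\frac{n}{(\theta\ln\theta)^2},$$
so the Cramer-Rao bound for estimating $\theta$ is $\frac{(\theta\ln\theta)^2}{n}$.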
However, in this setup the maximum likelihood estimator is known to be asymptotically efficient. That is to say, the large-sample variance of your MLE does attain the Cramer-Rao bound, i.e. the inverse of the Fisher information.
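You can see this numerically with a small Monte Carlo sketch (again assuming an example value $\theta=4$; the Gamma distribution of $\sum_i X_i^2$ lets us draw the sufficient statistic directly instead of simulating individual observations):

```python
import math, random

random.seed(1)
theta = 4.0              # assumed example value, theta > 1
a = math.log(theta)      # a = ln(theta)
n, m = 2000, 4000        # sample size per replication, number of replications

# Each X_i^2 ~ Exponential(rate a/2), so S = sum_i X_i^2 ~ Gamma(shape n, scale 2/a).
estimates = []
for _ in range(m):
    s = random.gammavariate(n, 2.0 / a)
    estimates.append(math.exp(2.0 * n / s))   # the MLE from the question

mean_hat = sum(estimates) / m
var_hat = sum((t - mean_hat) ** 2 for t in estimates) / (m - 1)

# Fisher information per observation is I_1(theta) = 1/(theta * ln theta)^2,
# so the Cramer-Rao bound for n observations is (theta * ln theta)^2 / n.
cr_bound = (theta * a) ** 2 / n
print(var_hat, cr_bound)   # the two should be close for large n
```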