Let $X$ be a random variable with distribution $\operatorname{Unif}(0, \theta)$. Draw an i.i.d. sample $X_1, X_2, \ldots, X_n$ and compute the method-of-moments estimator of $\theta$. Compare it with $T_n = \min_i \{ X_i \} + \max_i \{ X_i \}$.
Attempt: Matching the first moment ($\operatorname E(X) = \theta/2$), we easily find $$\theta' = \frac {2(X_1 + \cdots + X_n)}{n}. $$ I understood the broad question to be "compare the variances of the two estimators," either for finite $n$ or at least asymptotically, but I can't compute $V(T_n)$. Is it possible? I can say $V(T_n) > V(\max_i \{ X_i \})$ because $\min_i \{ X_i \}$ and $\max_i \{ X_i \}$ should be positively correlated, and I'm able to find the distribution of $\max_i \{ X_i \}$ and compute its variance.
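For what it's worth, the variance of $\theta'$ itself is easy: $$\operatorname{Var}(\theta') = \frac{4}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i) = \frac 4 n \cdot \frac{\theta^2}{12} = \frac{\theta^2}{3n}.$$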
Thanks!

The distribution of either $\max$ or $\min$ is easy; the distribution of $\max+\min$ is more involved.
$$ \max\{X_1,\ldots,X_n\} \le x \text{ if and only if } (X_1\le x\ \&\ \cdots\ \&\ X_n\le x), $$ and by independence, the probability of that is the $n$th power of the probability of $X_1\le x.$ Thus it is $(x/\theta)^n$ for $0\le x\le\theta.$ The density is the derivative of that with respect to $x.$ The density of $\min$ is found similarly. But $\max$ and $\min$ are positively correlated, so the distribution of their sum does not follow at once from the two marginal distributions.
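For reference, differentiating that c.d.f. gives $$ f_{\max}(x) = \frac{n x^{n-1}}{\theta^n}, \qquad 0\le x\le\theta, $$ from which $\operatorname E(\max) = \frac n{n+1}\,\theta$ and $\operatorname{Var}(\max) = \frac{n\theta^2}{(n+1)^2(n+2)}$ (the variance the asker says they can compute).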
We have $f_{\min}(x) = \dfrac n {\theta^n} (\theta-x)^{n-1}.$
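In detail: $\Pr(\min > x) = \Pr(X_1 > x)^n = \left(\dfrac{\theta-x}{\theta}\right)^n$ for $0\le x\le\theta,$ so $F_{\min}(x) = 1 - (\theta-x)^n/\theta^n,$ and differentiating gives the density above.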
Let $I= \text{the index $i$ in } \{1,\ldots,n\} \text{ for which } X_i= \min.$ Given $\min$ and $I,$ the other $n-1$ observations are i.i.d. $\operatorname{Unif}(\min,\theta),$ so \begin{align} \Pr(\max+\min\le x) & = \operatorname E(\Pr(\max+\min\le x \mid \min, I)) \\[10pt] & = \operatorname E(\Pr(\max\le x-\min\mid \min,I)) \\[10pt] & = \operatorname E\left( \left( \frac{(x - \min)-\min}{\theta - \min}\right)^{n-1} \right) \\[10pt] & = \int \left( \frac{x-2u}{\theta-u} \right)^{n-1} \frac n {\theta^n} (\theta-u)^{n-1} \, du \\[10pt] & = \frac n {\theta^n} \int (x-2u)^{n-1} \, du, \end{align} where the integral runs over those $u\in(0,\theta)$ for which $0\le x-2u\le \theta-u$ (elsewhere the conditional probability is $0$ or $1$ rather than the ratio above); for $0\le x\le\theta$ this means $0\le u\le x/2.$ The result is a function of $x$ and $\theta$ alone, and differentiating it with respect to $x$ gives the density of $\max+\min.$
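Carrying out that last integral for $0\le x\le\theta$ (the case $\theta\le x\le 2\theta$ then follows because $\max+\min$ is symmetrically distributed about $\theta$): $$ \Pr(\max+\min\le x) = \frac n{\theta^n}\int_0^{x/2} (x-2u)^{n-1}\,du = \frac{x^n}{2\theta^n}, \qquad 0\le x\le\theta, $$ and $\Pr(\max+\min\le x) = 1 - \dfrac{(2\theta-x)^n}{2\theta^n}$ for $\theta\le x\le 2\theta.$ From the resulting density one finds $$ \operatorname E(T_n) = \theta, \qquad \operatorname{Var}(T_n) = \frac{2\theta^2}{(n+1)(n+2)}, $$ so $T_n$ is unbiased with variance of order $1/n^2.$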
The best unbiased estimator of $\theta$ in these circumstances is $\dfrac{n+1} n \max\{X_1,\ldots,X_n\}.$ Since the conditional distribution of $\min$ given $\max$ does not depend on $\theta,$ bringing $\min$ into the estimation process after $\max$ is already there just adds noise and makes the variance bigger.
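To answer the variance-comparison question explicitly: all three estimators are unbiased, and $$ \operatorname{Var}(\theta') = \frac{\theta^2}{3n}, \qquad \operatorname{Var}(T_n) = \frac{2\theta^2}{(n+1)(n+2)}, \qquad \operatorname{Var}\left( \frac{n+1}{n}\max \right) = \frac{\theta^2}{n(n+2)}. $$ So for $n\ge 3$ the estimator $T_n$ beats the method-of-moments estimator (for $n\le 2$ the two coincide), and for $n\ge 2$ it is in turn beaten by $\frac{n+1} n \max,$ consistent with the argument above.

A quick Monte Carlo sanity check of these formulas (a minimal sketch using NumPy; $\theta = 1$ and $n = 10$ are arbitrary choices):

```python
import numpy as np

# Compare the three estimators of theta by simulation.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 10, 200_000
x = rng.uniform(0, theta, size=(reps, n))

mom   = 2 * x.mean(axis=1)              # theta' = 2 * sample mean
t_n   = x.min(axis=1) + x.max(axis=1)   # T_n = min + max
umvue = (n + 1) / n * x.max(axis=1)     # (n+1)/n * max

print(mom.var(),   theta**2 / (3 * n))                  # both approx 0.0333
print(t_n.var(),   2 * theta**2 / ((n + 1) * (n + 2)))  # both approx 0.0152
print(umvue.var(), theta**2 / (n * (n + 2)))            # both approx 0.0083
```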