Let $X_1,\dots,X_n$ be an independent random sample from a Uniform$[0,\theta]$ distribution, $\theta \in (0, \infty)$, with probability density function
$f(x;\theta) =
\begin{cases}
1/\theta, & 0 \le x \le \theta \\
0, & \text{otherwise}
\end{cases}$ and
$X_{(n)}=\max(X_1,\dots,X_n)$ and $X_{(1)}=\min(X_1,\dots,X_n)$.
I know that $X_{(1)}$ has the same distribution as $\theta - X_{(n)}$
and that $\hat{\theta}=X_{(1)}+X_{(n)}$ is an unbiased estimator of $\theta$.
I want to show that $Var(X_{(1)}+X_{(n)}) \le 4Var(X_{(n)})$.
My work: I know that $Var(X_{(n)})= \frac{\theta^2n}{(n+1)^2(n+2)}$.
Next:
$$\begin{aligned}
Var(X_{(1)}+X_{(n)}) &= E[(X_{(1)}+X_{(n)})^2]-\left(E[X_{(1)}+X_{(n)}]\right)^2 \\
&= E[X_{(1)}^2]+2E[X_{(1)}X_{(n)}]+E[X_{(n)}^2]-\theta^2 \\
&= E[X_{(1)}^2]+2E[(\theta-X_{(n)})X_{(n)}]+E[X_{(n)}^2]-\theta^2 \\
&= E[X_{(1)}^2]+2\theta E[X_{(n)}]-E[X_{(n)}^2]-\theta^2.
\end{aligned}$$
I also calculated that $E[X_{(1)}^2]= \frac{2\theta^2}{(n+1)(n+2)}$ and $E[X_{(n)}^2]=\frac{n\theta^2}{n+2}$.
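These second moments can be double-checked symbolically from the standard marginal pdfs of the min and max, $f_{X_{(1)}}(x)=n(\theta-x)^{n-1}/\theta^n$ and $f_{X_{(n)}}(x)=nx^{n-1}/\theta^n$ on $[0,\theta]$ (a quick SymPy sketch, checking a few concrete values of $n$):

```python
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)

for n in range(1, 8):
    # standard marginal pdfs of the min and max of n iid Uniform[0, theta]
    f_min = n * (theta - x)**(n - 1) / theta**n
    f_max = n * x**(n - 1) / theta**n

    E_min2 = sp.integrate(x**2 * f_min, (x, 0, theta))
    E_max2 = sp.integrate(x**2 * f_max, (x, 0, theta))

    # E[X_(1)^2] = 2 theta^2 / ((n+1)(n+2)),  E[X_(n)^2] = n theta^2 / (n+2)
    assert sp.simplify(E_min2 - 2 * theta**2 / ((n + 1) * (n + 2))) == 0
    assert sp.simplify(E_max2 - sp.Rational(n, n + 2) * theta**2) == 0
```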
I see no other way than a direct calculation of the variance.
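A quick Monte Carlo sanity check supports the inequality (a sketch; the choices $\theta = 3$, $n = 5$, and the number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 5, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))
lo = samples.min(axis=1)   # X_(1) in each replication
hi = samples.max(axis=1)   # X_(n) in each replication

var_sum = np.var(lo + hi)  # empirical Var(X_(1) + X_(n))
var_max = np.var(hi)       # empirical Var(X_(n))
print(var_sum, 4 * var_max)
assert var_sum <= 4 * var_max  # the inequality to be shown, empirically
```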
The third equality in your calculation of the variance of the sum of order statistics is wrong. Yes, $X_{(1)}$ has the same distribution as $\theta - X_{(n)}$, but $X_{(1)}X_{(n)}$ does not have the same distribution as $(\theta-X_{(n)})X_{(n)}$: equality in distribution of the marginals does not carry over to products, so you cannot substitute inside the expectation $E[X_{(1)}X_{(n)}]$.
You can calculate either $\mathbb E[(X_{(1)}+X_{(n)})^2]$ or $\mathbb E[X_{(1)}X_{(n)}]$ using the pdf of the joint distribution of $X_{(1)}$ and $X_{(n)}$ (for $n \ge 2$): $$ f_{X_{(1)},X_{(n)}}(x,y) = \frac{n(n-1)(y-x)^{n-2}}{\theta^n}\cdot \mathbb 1_{\{0\leq x\leq y\leq \theta\}}. $$ The resulting variance is $$Var(X_{(1)}+X_{(n)})=\frac{2\theta^2}{(n+1)(n+2)} = \frac{2(n+1)\theta^2}{(n+1)^2(n+2)} \le \frac{4n\theta^2}{(n+1)^2(n+2)} = 4\,Var(X_{(n)}),$$ where the inequality $2(n+1) \le 4n$ holds for every $n \ge 1$.
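The double integral against this joint pdf can be carried out symbolically; a small SymPy sketch checking the resulting variance for a few concrete $n$:

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta', positive=True)

for n in range(2, 7):
    # joint pdf of (X_(1), X_(n)) on the region 0 <= x <= y <= theta
    f = n * (n - 1) * (y - x)**(n - 2) / theta**n

    # integrate over x first (0..y), then y (0..theta)
    E_sum  = sp.integrate((x + y) * f, (x, 0, y), (y, 0, theta))
    E_sum2 = sp.integrate((x + y)**2 * f, (x, 0, y), (y, 0, theta))
    var_sum = sp.simplify(E_sum2 - E_sum**2)

    assert sp.simplify(E_sum - theta) == 0   # unbiasedness: E[X_(1)+X_(n)] = theta
    assert sp.simplify(var_sum - 2 * theta**2 / ((n + 1) * (n + 2))) == 0
```

The loop starts at $n = 2$ because for $n = 1$ the joint pdf above degenerates ($X_{(1)} = X_{(n)}$).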