I have some trouble with the MLE of this distribution.
Let $X_1, \dots, X_n$ be i.i.d. random variables with distribution $\mathcal{U}[\theta, 1]$, where $\theta < 1$.
$L(\theta) = \prod\limits_{i=1}^n \frac{1}{1-\theta} \, \mathbf{1}_{\{\theta \le X_i \le 1\}}$
$\implies L(\theta) = \begin{cases} \dfrac{1}{(1-\theta)^n} & \text{if } \max\{X_i\} \le 1 \ \text{ and } \ \min\{X_i\} \ge \theta \\ 0 & \text{otherwise} \end{cases}$
Now here is my problem. I cannot use $\min\{X_i\}$ as my estimator, because that is a minimum of $L(\theta)$ if $\max\{X_i\} < 1$. So I thought of using $\max\{X_i\}$ instead, but when I find the density of this new random variable and compute its MSE (to check consistency), the MSE does not go to $0$.
After some calculations I end up with
$\text{MSE} = \frac{-n(2\theta -1)}{(n+2)(1-\theta)^n} + \theta^2\left( 1 + \frac{n\theta^n}{(n+2)(1-\theta)^n} \right)$
Where am I going wrong?
Since $\theta$ is the lower endpoint of the uniform distribution from which the observations are drawn, it seems natural to estimate $\theta$ using the minimum order statistic $X_{(1)} = \min_i X_i$. The likelihood is $$\mathcal L(\theta \mid \boldsymbol x) = (1-\theta)^{-n} \mathbb 1(\theta \le x_{(1)} \le x_{(n)} \le 1).$$ Remember, $\mathcal L$ is fixed with respect to the sample $\boldsymbol x$ and is a function of $\theta$. You are correct that for positive integers $n$ and $\theta < 1$, the factor $(1-\theta)^{-n}$ is increasing as a function of $\theta$; but this is precisely the point of the indicator function: $\mathcal L$ increases only until $\theta$ exceeds $x_{(1)}$, at which point the likelihood drops to $0$, because it is impossible for $\theta$ to exceed the minimum observed value. Consequently, the likelihood is maximized by choosing $\theta$ as large as the sample allows, namely $$\hat \theta = x_{(1)}.$$
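A quick numerical sanity check (not part of the proof) illustrates this: evaluate the likelihood on a grid of candidate $\theta$ values and confirm the maximizer sits at the sample minimum. The values $\theta = 0.3$ and $n = 50$ below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.3  # hypothetical true lower endpoint, for illustration only
x = rng.uniform(theta_true, 1.0, size=50)

def likelihood(theta, x):
    """L(theta) = (1 - theta)^(-n) if theta <= min(x) and max(x) <= 1, else 0."""
    if theta <= x.min() and x.max() <= 1.0:
        return (1.0 - theta) ** (-len(x))
    return 0.0

# Evaluate on a fine grid of candidate theta values in [0, 1).
grid = np.linspace(0.0, 0.999, 10_000)
vals = np.array([likelihood(t, x) for t in grid])
theta_hat = grid[vals.argmax()]

print(theta_hat, x.min())  # the grid maximizer sits at (just below) x_(1)
```

The grid maximizer agrees with $x_{(1)}$ up to the grid resolution, matching the argument above.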
As for the consistency of $\hat \theta$, we again start from intuition: it stands to reason that if the $X_i$ are uniformly distributed on $[\theta, 1]$, then the more observations you take, the more likely the sample minimum is to be close to $\theta$. In fact, you can make this precise by computing the CDF of $X_{(1)}$: for $\theta \le x \le 1$, $$F_{X_{(1)}}(x) = 1 - \Pr[X_{(1)} > x] = 1 - \Pr\left[\bigcap_{i=1}^n \{X_i > x\} \right] = 1 - \prod_{i=1}^n \Pr[X_i > x] = 1 - \left(\frac{1-x}{1-\theta}\right)^n.$$ This implies that for any $0 < \epsilon < 1-\theta$, $$\lim_{n \to \infty} \Pr[X_{(1)} > \theta + \epsilon] = \lim_{n \to \infty} \left(\frac{1 - \theta - \epsilon}{1 - \theta}\right)^n = \lim_{n \to \infty} \left(1 - \frac{\epsilon}{1-\theta}\right)^n = 0.$$
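The limit above can also be seen by simulation: the empirical frequency of $\{X_{(1)} > \theta + \epsilon\}$ should match $\left(1 - \frac{\epsilon}{1-\theta}\right)^n$ and vanish as $n$ grows. The values $\theta = 0.3$ and $\epsilon = 0.05$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, eps = 0.3, 0.05  # illustrative values, not from the question

results = {}
for n in (10, 100, 1000):
    # Empirical Pr[X_(1) > theta + eps] over 5000 replications of size n.
    mins = rng.uniform(theta_true, 1.0, size=(5000, n)).min(axis=1)
    p_hat = (mins > theta_true + eps).mean()
    # Theoretical tail probability from the CDF of X_(1).
    p_theory = (1 - eps / (1 - theta_true)) ** n
    results[n] = (p_hat, p_theory)
    print(n, p_hat, p_theory)
```

The empirical probabilities track the theoretical ones and shrink toward $0$, which is exactly convergence in probability of $X_{(1)}$ to $\theta$.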