Given $f(x;\theta_1,\theta_2)=\frac{1}{\theta_2-\theta_1}$ when $\theta_1 \leq x \leq \theta_2$ and $0$ otherwise.
How would I find the MLE? I know you're supposed to take the $\log$ of the likelihood function and then the derivative, but I honestly have no idea where to start. It looks like a foreign language to me... Can anyone help me with this?
So I think I start by taking the likelihood function, which is $L(x;\theta_1,\theta_2) = -n\ln(\theta_2) + n\ln(\theta_1)$. Is that right?
Suppose we have a sample $\boldsymbol x = (x_1, \ldots, x_n)$ of independent and identically distributed observations from the distribution $$f(x \mid \theta_1, \theta_2) = \frac{1}{\theta_2 - \theta_1} \mathbb 1 (\theta_1 \le x \le \theta_2).$$ Then the joint distribution of the sample given the parameters is $$f(\boldsymbol x \mid \theta_1, \theta_2) = \prod_{i=1}^n f(x_i \mid \theta_1, \theta_2) = (\theta_2 - \theta_1)^{-n} \mathbb 1(\theta_1 \le x_{(1)} \le x_{(n)} \le \theta_2),$$ where $x_{(1)} = \min_i x_i$ is the first order statistic, and $x_{(n)} = \max_i x_i$ is the last order statistic. Thus the joint likelihood of $\theta_1, \theta_2$ given the sample is $$\mathcal L(\theta_1, \theta_2 \mid \boldsymbol x) = (\theta_2 - \theta_1)^{-n} \mathbb 1(\theta_1 \le x_{(1)} \le x_{(n)} \le \theta_2),$$ and the log-likelihood is $$\ell(\theta_1, \theta_2 \mid \boldsymbol x) = -n \log(\theta_2 - \theta_1),$$ valid on the region where the indicator equals $1$; elsewhere the likelihood is $0$ and the log-likelihood is $-\infty$.

The first thing to note is that $\boldsymbol T(\boldsymbol x) = (x_{(1)}, x_{(n)})$ is a sufficient statistic for $\theta_1, \theta_2$, so our estimator should be based on this statistic. Second, we note that our choices of $\theta_1$ and $\theta_2$ are constrained to satisfy $\theta_1 \le x_{(1)}$ and $\theta_2 \ge x_{(n)}$.

Under these constraints, we search for critical points of the log-likelihood: $$\frac{\partial \ell}{\partial \theta_1} = \frac{n}{\theta_2 - \theta_1},$$ and it is clear that there are no critical points for $\theta_1$, as the derivative is strictly positive for all valid $\theta_1 < \theta_2$ and $n \in \mathbb Z^+$. Since the derivative is strictly positive, $\ell$ with respect to $\theta_1$ is maximized when $\theta_1$ is chosen to be as large as possible. Similarly, $$\frac{\partial \ell}{\partial \theta_2} = -\frac{n}{\theta_2 - \theta_1} < 0,$$ so $\ell$ is a decreasing function of $\theta_2$, thus is maximized when $\theta_2$ is as small as possible.
We conclude that our MLE must be $$(\hat \theta_1, \hat \theta_2) = (x_{(1)}, x_{(n)}).$$
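As a quick numerical sanity check, the argument above says that the log-likelihood is larger for the tight interval $(x_{(1)}, x_{(n)})$ than for any wider admissible interval. A minimal sketch (the sample and the true endpoints here are assumptions for illustration, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 5.0, size=20)   # hypothetical sample from U(2, 5)
x_min, x_max = x.min(), x.max()

def log_likelihood(t1, t2, x):
    """Log-likelihood of U(t1, t2); -inf if any observation falls outside [t1, t2]."""
    if t1 > x.min() or t2 < x.max():
        return -np.inf
    return -len(x) * np.log(t2 - t1)

# MLE: the tightest admissible interval, (x_(1), x_(n)).
ll_mle = log_likelihood(x_min, x_max, x)

# Any strictly wider admissible interval has a smaller log-likelihood,
# and widening further decreases it again.
ll_wide = log_likelihood(x_min - 0.1, x_max + 0.1, x)
ll_wider = log_likelihood(x_min - 0.5, x_max + 0.5, x)
assert ll_mle > ll_wide > ll_wider
```

The `assert` passes because $-n\log(\theta_2 - \theta_1)$ is monotone decreasing in the interval width, exactly as the derivative argument shows.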
All of this, of course, should make intuitive sense: suppose I generated the sample $$\{3.48275, 4.80187, 4.18071, 4.63442, 2.99332, 5.21372, 4.48195, 3.34479, 3.46628, 4.3052, 4.62014, 3.70395, 2.71891, 3.66302, 2.24082, 3.61132, 4.25884, 5.00934, 4.03281, 4.83299\}. $$ The minimum order statistic is $x_{(1)} = 2.24082$, and the maximum is $x_{(20)} = 5.21372$. These are your maximum likelihood estimates for the endpoints of the uniform distribution from which the sample was generated. In fact, the true parameters I used to generate the sample were $$\theta_1 = 2.2357, \quad \theta_2 = 5.2537.$$ So you can see that it's not too far off.
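Computing the estimates from that sample is just a min and a max; a short sketch:

```python
# The sample quoted above.
sample = [3.48275, 4.80187, 4.18071, 4.63442, 2.99332, 5.21372, 4.48195,
          3.34479, 3.46628, 4.3052, 4.62014, 3.70395, 2.71891, 3.66302,
          2.24082, 3.61132, 4.25884, 5.00934, 4.03281, 4.83299]

# MLE of the endpoints: the extreme order statistics.
theta1_hat, theta2_hat = min(sample), max(sample)
print(theta1_hat, theta2_hat)  # 2.24082 5.21372
```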
Are these estimators biased or unbiased? If biased, can we construct an unbiased estimator, and what is the asymptotic bias?
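One way to start exploring that question is a small Monte Carlo sketch: since $x_{(1)} \ge \theta_1$ and $x_{(n)} \le \theta_2$ always hold, the sample minimum should overshoot $\theta_1$ and the sample maximum undershoot $\theta_2$ on average. The true endpoints and sample size below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
theta1, theta2, n, reps = 2.0, 5.0, 20, 100_000

# reps independent samples of size n from U(theta1, theta2).
samples = rng.uniform(theta1, theta2, size=(reps, n))
mean_min = samples.min(axis=1).mean()
mean_max = samples.max(axis=1).mean()

# x_(1) can never fall below theta1, so its average sits above theta1;
# symmetrically, x_(n) averages below theta2: both estimators are biased.
# (Theory: E[x_(1)] = theta1 + (theta2 - theta1) / (n + 1).)
assert theta1 < mean_min and mean_max < theta2
```

This only demonstrates the bias empirically; deriving its exact size, and a corrected unbiased estimator, uses the distributions of the order statistics.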