Find the maximum likelihood estimator for $\theta$ and $\delta$ considering a random sample of size $n$ from a population with distribution $$f(x)=\frac{1}{\theta}e^\frac{-(x-\delta)}{\theta},\quad x>\delta$$
I know $L(\theta,\delta)=\prod_{i=1}^n \frac 1 \theta e^\frac{-(x_i-\delta)} \theta$. When I simplify, I get $$L(\theta,\delta)=\frac 1 {\theta^n}e^{-\frac 1 \theta \sum_{i=1}^n x_i}e^{\frac{n\delta} \theta}$$ Taking the natural log of this function, I get $$\ell(\theta,\delta)=-n\ln\theta - \frac 1 \theta \sum_{i=1}^n x_i+\frac{n\delta} \theta$$
Now, taking the partial derivatives for $\ell$ with respect to $\theta$ and $\delta$, I have the following:
$$\frac{\partial\ell}{\partial\theta}=-\frac n \theta +\frac 1 {\theta^2}\sum_{i=1}^n x_i-\frac{n\delta}{\theta^2}$$
$$\frac{\partial\ell}{\partial\delta}=\frac n \theta$$ Setting both of these equal to zero and manipulating, I found $\hat\delta=\bar x$ and $\hat\theta=0$, but this does not make sense, because an estimator cannot be $0$. I have seen other examples where similar issues arise, but I'm not sure how I should look at this in order to actually figure out the value of $\theta$ that maximizes the function.
If you look at your formula for $\log L(\theta,\delta)$, you'll see that $\log L$ is increasing in $\delta$: the bigger the $\delta$, the more likely-looking things seem to be. But the range restriction on $x$ means that $\hat \delta$ must be less than all your observations, so $\hat \delta=\min_i x_i$. Now look at $\partial/\partial \theta \log L$ and solve for $\hat \theta$.
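For completeness, here is one way the remaining step can be carried out (a sketch, using the derivative already computed in the question):
$$\frac{\partial\ell}{\partial\theta}=-\frac n \theta +\frac 1 {\theta^2}\sum_{i=1}^n x_i-\frac{n\delta}{\theta^2}=0 \;\Longrightarrow\; n\theta=\sum_{i=1}^n x_i-n\delta \;\Longrightarrow\; \hat\theta=\bar x-\hat\delta=\bar x-\min_i x_i.$$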
In a nutshell: the restriction $x>\delta$ means the formula you state for $\log L$ is not valid for all values of the parameters, but your derivative calculation ignored this fact.
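A quick numerical sanity check of these estimators (a sketch using NumPy; the parameter values and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, delta = 2.0, 5.0  # true parameters, chosen for illustration
n = 100_000

# Shifted exponential: x = delta + Exp(theta), so every draw satisfies x > delta
x = delta + rng.exponential(scale=theta, size=n)

# MLEs discussed above: delta_hat is the sample minimum,
# theta_hat comes from solving the theta score equation at delta_hat
delta_hat = x.min()
theta_hat = x.mean() - delta_hat

print(f"delta_hat = {delta_hat:.4f}, theta_hat = {theta_hat:.4f}")
```

Note that `delta_hat` always lands slightly above the true $\delta$ (the minimum of a sample from $x>\delta$ can never reach $\delta$ itself), but it converges at rate $1/n$, faster than the usual $1/\sqrt n$.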