Given random sample $X_1, X_2, ..., X_n$ with the distribution $$f(x|\theta) = \left \{ \begin{aligned} e^{-(x-\theta)}, \ \ 0 < x < \theta \\ 0, \text{ otherwise.} \end{aligned} \right. $$
where $\theta \in (-\infty, \infty).$ Show that the estimator $\theta_1 = \min\left\{X_1, X_2, ..., X_n \right\} $ is unbiased.
So, I don't even know where to start. Should I randomly generate the minimum of a large amount of simulated data and then compare it to the minimum of a sample consisting of some arbitrary number of variables? I need to find a solution for this problem in R and I cannot even imagine how to do it.
I assume that you mean $X_1, X_2, ..., X_n$ with the distribution
$$f(x|\theta) = \left \{ \begin{aligned} e^{-(x-\theta)}, \ \ \theta < x \\ 0, \text{ otherwise.} \end{aligned} \right. $$ since in that case $f$ is a density function.
Define $Y=\min (X_1,\cdots, X_n)$.
By independence, and since $P(X_i>y)=e^{-(y-\theta)}$ for $y>\theta$,
$$F_Y(y)=1-P(Y>y)=1-P(X_1>y,\cdots, X_n>y)=1-\prod_{i=1}^{n}P(X_i>y)=1-e^{-n(y-\theta)}$$ so
$$f_Y(y)=ne^{-n(y-\theta)} \hspace{1cm} \theta < y$$
so $E(Y)=\theta + \frac{1}{n}$.
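This value follows by direct integration of the density above, substituting $z=y-\theta$ (i.e. $Y-\theta \sim \operatorname{Exp}(n)$):
$$E(Y)=\int_\theta^\infty y\, n e^{-n(y-\theta)}\,dy = \int_0^\infty (\theta+z)\, n e^{-nz}\,dz = \theta + \frac{1}{n}.$$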
Since $E(Y)=\theta+\frac{1}{n}\neq\theta$, $Y$ is biased. A simple simulation in R confirms this (as you asked).
We repeat the following step $k$ times:

1) draw an independent random sample $(X_1,\cdots, X_n)\sim f_X$;

2) calculate $Y_i=\min(X_1,\cdots, X_n)$.

This gives $k$ values $Y_1,\dots,Y_k$. Calculate their mean: it will be close to $\theta+\frac{1}{n}$, not $\theta$, and the bias is visible.
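A minimal R sketch of the steps above. The particular values $\theta = 2$, $n = 5$, and $k = 10^5$ are arbitrary choices for illustration; since $X_i - \theta \sim \operatorname{Exp}(1)$, we can simulate $X_i$ as $\theta + \texttt{rexp}(1)$:

```r
set.seed(1)
theta <- 2     # true parameter (arbitrary choice)
n     <- 5     # sample size per replication
k     <- 1e5   # number of replications

# Each replication: draw n shifted exponentials and take the minimum
Y <- replicate(k, min(theta + rexp(n)))

mean(Y)            # close to theta + 1/n = 2.2, not theta
mean(Y) - theta    # estimated bias, close to 1/n = 0.2
```

The gap between `mean(Y)` and `theta` stays near $\frac{1}{n}$ as $k$ grows, which is exactly the bias computed analytically.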