Let $X_1,X_2,\ldots,X_n$ be a random sample from $f_{\theta}(x)=e^{-(x-\theta)}$, $x>\theta$, where $\theta>0$. Show that for any unbiased estimator $T$ of $\theta$, $V(T) \ge \frac{\theta^2}{e^{\theta}n^2}$.
Since the support depends on the parameter, the Cramér-Rao lower bound does not apply here, so I was thinking of applying the Chapman-Robbins bound. But the expression does not come out as expected. Can anyone help?
I obtained that the Chapman-Robbins bound does not depend on $\theta$ at all. For $\Delta>0$ the likelihood ratio of a single observation is $\frac{f_{\theta+\Delta}(x)}{f_\theta(x)}=e^{\Delta}\mathbf{1}\{x>\theta+\Delta\}$, so $$ E_\theta\!\left[\left(\frac{f_{\theta+\Delta}(X)}{f_\theta(X)}\right)^{2}\right]=e^{2\Delta}\,P_\theta(X>\theta+\Delta)=e^{2\Delta}e^{-\Delta}=e^{\Delta}, $$ and for the whole sample of size $n$ the $\chi^2$-divergence equals $e^{n\Delta}-1$. Hence the Chapman-Robbins bound is $$ V(T)\geq \sup_{\Delta>0} \frac{\Delta^2}{e^{n\Delta}-1}\approx \frac{0.64761}{n^2}. $$ Note that, substituting $x=n\Delta$, $$ \sup_{\Delta>0} \frac{n^2\Delta^2}{e^{n\Delta}-1} = \sup_{x>0} \frac{x^2}{e^x-1}\geq \sup_{x>0} \frac{x^2}{e^x} \geq \frac{\theta^2}{e^\theta},$$ which implies that $$ V(T)\geq \sup_{\Delta>0} \frac{\Delta^2}{e^{n\Delta}-1} \geq \frac{1}{n^2}\frac{\theta^2}{e^\theta}. $$
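The numerical constant above can be checked directly: a minimal sketch (the golden-section search and its bracket $[0.1,5]$ are my choices, not part of the argument) that locates $\sup_{x>0} x^2/(e^x-1)$:

```python
import math

# Maximize g(x) = x^2 / (e^x - 1) over x > 0; its supremum is the
# constant C in the Chapman-Robbins bound V(T) >= C / n^2.
def g(x):
    return x * x / math.expm1(x)  # expm1 avoids cancellation near 0

# Golden-section search on [0.1, 5], where g is unimodal.
phi = (math.sqrt(5) - 1) / 2
a, b = 0.1, 5.0
while b - a > 1e-10:
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    if g(c) < g(d):   # maximum lies in [c, b]
        a = c
    else:             # maximum lies in [a, d]
        b = d
x_star = (a + b) / 2
C = g(x_star)
print(x_star, C)  # x* ≈ 1.5936, C ≈ 0.6476
```

The maximizer solves $e^x(2-x)=2$, i.e. $x^\ast\approx 1.5936$, giving $C\approx 0.64761$ as stated.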
Nevertheless, although this inequality is true, it is extremely strange. The parameter $\theta$ is a shift parameter, so it is natural to expect a lower bound for the variance that does not depend on $\theta$. For example, for $T(X)=X_{(1)}-\frac1n$ we have $E(T)=\theta$ and $V(T)=\frac1{n^2}$, since $X_{(1)}-\theta$ is exponential with rate $n$; this variance is of the same order as the Chapman-Robbins bound. It is not clear why the bound should be weakened and a dependence on the parameter artificially introduced.
I suspect that there exists some other inequality on the variance of unbiased estimators for non-regular families which leads directly to the stated bound without weakening the Chapman-Robbins bound in this way.