Exercise.
Let $X_1,X_2,\ldots,X_n$ ($n\ge 2$) be a sample from the exponential distribution $\mathrm{Exp}(\frac{1}{\theta})$, where $\theta=\mathbb{E}X_1$. Show that $\bar{X}$ is the UMVUE of $\theta$.
Solution.
Let $\mathcal{U}_{0}$ be the class of all unbiased estimators $\varphi$ of $0$, that is, $\mathcal{U}_{0}:=\left\{\varphi:\mathbb{E}_\theta\varphi=0,\ \mathbb{E}_\theta\varphi^{2}<\infty \text{ for all } \theta\in \Theta \right\},$ where $\Theta=\left( 0,\infty \right).$ For every $\varphi\in\mathcal{U}_0$, $$\mathbb{E}_\theta\varphi=\int_{0}^{\infty}\cdots\int_{0}^{\infty}\varphi(x_1,x_2,\cdots,x_{n})\cdot\prod_{i=1}^{n}\left\{ \frac{1}{\theta} \exp\left(-\frac{x_i}{\theta} \right)\right\}dx_1dx_2\cdots dx_n=0.$$ ${\color{Red}{\text{Differentiating the above equation with respect to }}}{\color{Red}\theta}$, and using $\mathbb{E}_\theta\varphi=0$ once more to drop the term coming from differentiating the factor $\theta^{-n}$, we have $$\int_{0}^{\infty}\cdots\int_{0}^{\infty}\frac{n\bar{x}}{\theta^2}\cdot\varphi(x_1,x_2,\cdots,x_{n})\cdot \exp\left(-\frac{x_1+x_2+\cdots+x_n}{\theta}\right)dx_1dx_2\cdots dx_n=0.$$ Since $\mathbb{E}_\theta\varphi=0$, this gives $$\text{Cov}_{\theta}\left (\bar{X}, \varphi\right )=\mathbb{E}_\theta \left ( \bar{X}\cdot\varphi \right )=0,\quad\forall \varphi\in \mathcal{U}_{0},\ \forall \theta\in \Theta. $$ Because $\bar{X}$ is unbiased for $\theta$ with finite variance and is uncorrelated with every unbiased estimator of $0$, the covariance characterization of UMVUEs shows that $\bar{X}$ is the UMVUE of $\theta.$
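As a quick numerical sanity check (not part of the proof), we can verify the key identity $\mathbb{E}_\theta(\bar{X}\cdot\varphi)=0$ by Monte Carlo for one concrete member of $\mathcal{U}_0$, namely $\varphi(\mathbf{X})=X_1-X_2$, which is an unbiased estimator of $0$ with finite variance. The values $\theta=2$ and $n=5$ are arbitrary choices for the demo:

```python
import random

random.seed(0)

theta = 2.0     # true mean of the exponential (arbitrary choice for the demo)
n = 5           # sample size (arbitrary, n >= 2)
trials = 200_000

# phi(X) = X_1 - X_2 is an unbiased estimator of 0 with finite variance,
# so the argument above predicts E_theta[X_bar * phi] = Cov(X_bar, phi) = 0.
acc = 0.0
for _ in range(trials):
    # expovariate takes the rate 1/theta, so each X_i has mean theta
    xs = [random.expovariate(1.0 / theta) for _ in range(n)]
    xbar = sum(xs) / n
    phi = xs[0] - xs[1]
    acc += xbar * phi

estimate = acc / trials
print(estimate)  # should be close to 0
```

The Monte Carlo average hovers near $0$, consistent with $\text{Cov}_\theta(\bar X,\varphi)=0$.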
Why can we directly exchange the integral and the differentiation? This step needs a rigorous justification. Is there an integrable $Y(x_1,x_2,\cdots,x_n)$ such that $$\left|\frac{n\bar{x}}{\theta^2}\cdot\varphi(x_1,x_2,\cdots,x_{n})\cdot \exp\left(-\frac{x_1+x_2+\cdots+x_n}{\theta}\right) \right|\le Y $$ almost surely for all $\theta\in \Theta=\left ( 0,\infty \right )?$
Indeed, we only need $\varphi$ to be integrable. We have an expression of the form $$ \int_{[0,\infty )^n} \varphi (\mathbf{x})\theta ^{-n}e^{-\frac{\mathbf{x}\cdot \mathbf{1}}{\theta }}\,d \mathbf{x}=\theta ^{-n}\int_{[0,\infty )^n} \varphi (\mathbf{x})e^{-\frac{\mathbf{x}\cdot \mathbf{1}}{\theta }}\,d \mathbf{x}\tag1 $$ for $\mathbf{x}:=(x_1,\ldots,x_n )$ and $\mathbf{1}:=(1,\ldots ,1)$. Also, without loss of generality, we can assume that $\varphi \geqslant 0$ (split $\varphi=\varphi^+-\varphi^-$) and that $\mu(d\mathbf{x}):=\varphi (\mathbf{x})\,d \mathbf{x}$ is a probability measure. Therefore, by Tonelli's theorem and pushing $\mu$ forward under $\mathbf{x}\mapsto \mathbf{x}\cdot\mathbf{1}$, the question reduces to showing that $$ \frac{\partial}{\partial \theta }\int_{[0,\infty )} e^{-x/\theta }\mu(d x)=\int_{[0,\infty )} \frac{\partial}{\partial \theta }e^{-x/\theta }\mu(dx)\tag2 $$ Now let $f(x,\theta):=e^{-x/\theta}$ be the integrand on the LHS of (2). For any neighborhood $U:=(\theta _0-\epsilon ,\theta _0+\epsilon)$ of $\theta _0$, where $\epsilon \in(0,\theta _0/2)$, the mean value theorem gives $$ \left| \frac{f(x,\theta_0 +h)-f(x,\theta_0 )}{h} \right|\leqslant \sup_{\theta \in U}\left|\frac{\partial}{\partial \theta }f(x,\theta)\right|\leqslant g(x)\tag3 $$ for $g(x):=(\theta _0-\epsilon )^{-2} xe^{-x/(\theta _0+\epsilon )}$ and all $0<|h|\leqslant \epsilon $. As $g$ does not depend on $\theta$ and is $\mu$-integrable ($g$ is bounded on $[0,\infty)$ and $\mu$ is a finite measure), we can apply the dominated convergence theorem to conclude that (2) holds.∎
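The exchange in (2) can also be checked numerically for a toy finite measure $\mu$: compare a central finite difference of $\theta \mapsto \int e^{-x/\theta}\,\mu(dx)$ with the integral of $\frac{\partial}{\partial\theta}e^{-x/\theta} = \frac{x}{\theta^2}e^{-x/\theta}$. The atoms, weights, and $\theta_0$ below are arbitrary choices for the demo:

```python
import math

# A toy probability measure mu with three atoms (arbitrary choices)
xs = [0.5, 1.0, 2.0]
ws = [0.2, 0.5, 0.3]

def F(theta):
    # LHS integrand of (2): integral of e^{-x/theta} d mu(x)
    return sum(w * math.exp(-x / theta) for w, x in zip(ws, xs))

def dF_inside(theta):
    # RHS of (2): integral of (x/theta^2) e^{-x/theta} d mu(x)
    return sum(w * (x / theta**2) * math.exp(-x / theta) for w, x in zip(ws, xs))

theta0, h = 1.5, 1e-6
numeric = (F(theta0 + h) - F(theta0 - h)) / (2 * h)  # central difference
error = abs(numeric - dF_inside(theta0))
print(error)  # tiny: derivative and integral commute here
```

The two sides of (2) agree to within finite-difference error, as the dominated convergence argument guarantees.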