Let $X_1,X_2,\ldots,X_n$ be iid random variables with parameter $\theta$.
Suppose $T_1=t_1(S)$ and $T_2=t_2(S)$ are two unbiased estimators of $\tau(\theta)$, where $S=S(X_1,X_2,\ldots,X_n)$ is a complete sufficient statistic.
The text then states that $E(T_1-T_2)=0$ before going on to prove a theorem. This seems very reasonable, but how would you show it?
Using the law of iterated expectations, we see that the LHS ${}= E\big(E(T_1-T_2\mid S)\big)$.
Is the argument just that, for the expected difference to be nonzero, $T_1$ or $T_2$ would have to be consistently different from the other? If $T_1$ consistently estimated lower or higher than $T_2$, then either $T_1$ or $T_2$ would have to be biased, which contradicts the assumption. Is that the reasoning?
Since $T_1$ and $T_2$ are both unbiased for $\tau(\theta)$ we have \begin{align} E_\theta(T_1) = \tau(\theta) \qquad\text{and}\qquad E_\theta(T_2) = \tau(\theta). \end{align} It then follows that $$ E_\theta(T_1-T_2) = E_\theta(T_1) - E_\theta(T_2) = 0. $$
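A quick Monte Carlo sanity check can make this concrete. As a hypothetical example (not from the original question), take iid $\mathrm{Normal}(\theta,1)$ data: both the sample mean $T_1$ and the first observation $T_2=X_1$ are unbiased for $\theta$, so the simulated average of $T_1-T_2$ should be close to $0$:

```python
import random

random.seed(0)

# Hypothetical illustration: for iid Normal(theta, 1) data, the sample
# mean T1 and the first observation T2 = X_1 are both unbiased for theta,
# so E(T1 - T2) = 0 and the simulated average difference should be near 0.
theta, n, reps = 2.0, 5, 200_000

total_diff = 0.0
for _ in range(reps):
    x = [random.gauss(theta, 1.0) for _ in range(n)]
    t1 = sum(x) / n   # sample mean, unbiased for theta
    t2 = x[0]         # first observation, also unbiased for theta
    total_diff += t1 - t2

avg_diff = total_diff / reps
print(avg_diff)  # close to 0 (within Monte Carlo error)
```

Note that $T_2$ here is a deliberately poor estimator (it has much larger variance than $T_1$); unbiasedness alone forces the expected difference to vanish, regardless of efficiency.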