I have been reviewing this great book and I think I found an error not listed in the errata. In Example 7.3.21, the authors finish with the argument that $X-\frac{1}{2}$ is not the best unbiased estimator of $\theta$ for a uniform$(\theta,\theta+1)$ distribution; in fact, even $X-\frac{1}{2}+\frac{\sin(2\pi X)}{2\pi}$ is better, since it is unbiased AND has variance $.071$. But I think that this last estimator has variance $\frac{1}{12}+\frac{1}{8\pi^2}-\frac{\cos(2\pi\theta)}{2\pi^2}$, and hence the new estimator is better only when $\frac{1}{8\pi^2}-\frac{\cos(2\pi\theta)}{2\pi^2}<0$.
I apologize in advance if this question has already been asked.
For those who do not have the book, a brief summary: let $X \sim \text{uniform}(\theta, \theta + 1)$. Then its expected value is $\theta + \dfrac{1}{2}$, and thus one unbiased estimator of $\theta$ is $X - \dfrac{1}{2}$, which has variance $\dfrac{1}{12}$. In particular, for this distribution, the unbiased estimators of zero are the functions $h$ satisfying $$\int_{\theta}^{\theta + 1}h(x)\text{ d}x = 0 \text{ for all }\theta\text{;}$$ differentiating this in $\theta$ (by the Fundamental Theorem of Calculus) gives $h(\theta + 1) = h(\theta)$ for all $\theta$, so such $h$ are periodic with period $1$. One such function is $h(x) = \sin(2\pi x)$.
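As a quick numerical sanity check (the `integrate` helper and the sample values of $\theta$ below are my own, not from the book), $\sin(2\pi x)$ does integrate to zero over any interval of length $1$:

```python
import numpy as np

def integrate(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return f(x).sum() * (b - a) / n

# h(x) = sin(2*pi*x) should integrate to 0 over [theta, theta + 1]
# for every theta, making it an unbiased estimator of zero.
for theta in [0.0, 0.3, 1.7, -2.4]:
    val = integrate(lambda x: np.sin(2 * np.pi * x), theta, theta + 1)
    print(f"theta = {theta:5.2f}: integral ~ {val:.1e}")  # all ~ 0
```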
The authors demonstrate that $\text{Cov}\left(X - \dfrac{1}{2}, h(X)\right) = -\dfrac{\cos(2\pi\theta)}{2\pi}$, and because $X - \dfrac{1}{2}$ is correlated with an unbiased estimator of $0$, it cannot be a best unbiased estimator of $\theta$.
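This covariance can also be spot-checked numerically (again, the `integrate` helper and the chosen values of $\theta$ are mine, not the book's). Since $\mathbb{E}\left[X - \frac{1}{2}\right] = \theta$ and $\mathbb{E}[\sin(2\pi X)] = 0$, the covariance reduces to $\mathbb{E}\left[\left(X - \frac{1}{2} - \theta\right)\sin(2\pi X)\right]$:

```python
import numpy as np

def integrate(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return f(x).sum() * (b - a) / n

def cov_numeric(theta):
    # Cov(X - 1/2, sin(2*pi*X)) = E[((X - 1/2) - theta) * sin(2*pi*X)]
    # because both factors have been centered at their means.
    g = lambda x: (x - 0.5 - theta) * np.sin(2 * np.pi * x)
    return integrate(g, theta, theta + 1)

for theta in [0.1, 0.8, 2.5]:
    closed = -np.cos(2 * np.pi * theta) / (2 * np.pi)
    print(f"theta = {theta}: numeric = {cov_numeric(theta):.6f}, "
          f"closed form = {closed:.6f}")
```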
Now, let's consider $$Y = X - \dfrac{1}{2} + \dfrac{\sin(2\pi X)}{2\pi}\text{.}$$ From the prior discussion, $Y$ is clearly unbiased. For the variance, we have that \begin{align} \text{Var}(Y) &= \text{Var}\left(X - \dfrac{1}{2} + \dfrac{\sin(2\pi X)}{2\pi} \right) \\ &= \text{Var}\left(X + \dfrac{\sin(2\pi X)}{2\pi}\right) \\ &= \text{Var}(X) + \dfrac{1}{4\pi^2}\text{Var}\left(\sin(2\pi X) \right) + 2\cdot \text{Cov}\left(X, \dfrac{\sin(2\pi X)}{2\pi}\right) \\ &= \dfrac{1}{12} + \dfrac{1}{4\pi^2}\text{Var}\left(\sin(2\pi X) \right) + \dfrac{2}{2\pi}\cdot \text{Cov}(X, h(X)) \\ &= \dfrac{1}{12} + \dfrac{1}{4\pi^2}\text{Var}\left(\sin(2\pi X) \right) + \dfrac{1}{\pi} \cdot \left[-\dfrac{\cos(2\pi \theta)}{2\pi}\right] \\ &= \dfrac{1}{12} + \dfrac{1}{4\pi^2}\text{Var}(\sin(2\pi X)) - \dfrac{1}{2\pi^2}\cos(2\pi\theta)\text{.} \end{align} Now, since $\sin(2\pi X)$ is an unbiased estimator of $0$, we have $\mathbb{E}[\sin(2\pi X)] = 0$, so $$\text{Var}(\sin(2\pi X)) = \mathbb{E}\left[\sin^2(2\pi X) \right] = \int_{\theta}^{\theta + 1}\sin^2(2\pi x)\text{ d}x = \dfrac{1}{2}\text{,}$$ hence we have $$\text{Var}(Y) = \dfrac{1}{12} + \dfrac{1}{8\pi^2} - \dfrac{1}{2\pi^2}\cos(2\pi\theta)\text{.}$$ There is a bit of nuance from the textbook that must be pointed out:
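The closed form for $\text{Var}(Y)$ can be verified against a direct numerical computation of the variance (this sketch, including the `integrate` helper and the sample values of $\theta$, is my own check, not from the book):

```python
import numpy as np

def integrate(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return f(x).sum() * (b - a) / n

def var_Y_numeric(theta):
    # Compute Var(Y) directly: first the mean of Y (which should be theta,
    # since Y is unbiased), then the mean squared deviation.
    g = lambda x: x - 0.5 + np.sin(2 * np.pi * x) / (2 * np.pi)
    mean = integrate(g, theta, theta + 1)
    return integrate(lambda x: (g(x) - mean) ** 2, theta, theta + 1)

def var_Y_closed(theta):
    # The closed form derived above.
    return 1/12 + 1/(8 * np.pi**2) - np.cos(2 * np.pi * theta) / (2 * np.pi**2)

for theta in [0.0, 0.25, 0.6]:
    print(f"theta = {theta}: numeric = {var_Y_numeric(theta):.6f}, "
          f"closed form = {var_Y_closed(theta):.6f}")
```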
The authors do not claim that $Y$ is a best unbiased estimator for $\theta$. Rather, $\text{Var}(Y) < \dfrac{1}{12}$ exactly when $$\dfrac{1}{8\pi^2} - \dfrac{\cos(2\pi\theta)}{2\pi^2} < 0\text{,} \quad \text{i.e., when} \quad \cos(2\pi\theta) > \dfrac{1}{4}\text{,}$$ which, as pointed out in the original question, only occurs for certain values of $\theta$, and not over all $\theta \in \mathbb{R}$.
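A quick check of this (not from the book; the two sample values of $\theta$ are my own): the closed-form variance dips below $\dfrac{1}{12}$ only when $\cos(2\pi\theta) > \dfrac{1}{4}$.

```python
import numpy as np

def var_Y(theta):
    # Closed-form Var(Y) = 1/12 + 1/(8*pi^2) - cos(2*pi*theta)/(2*pi^2).
    return 1/12 + 1/(8 * np.pi**2) - np.cos(2 * np.pi * theta) / (2 * np.pi**2)

# At theta = 0, cos(0) = 1 > 1/4, so Y beats X - 1/2.
print(var_Y(0.0) < 1/12)   # True
# At theta = 1/2, cos(pi) = -1 < 1/4, so Y is worse than X - 1/2.
print(var_Y(0.5) < 1/12)   # False
```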
However, what this does show is that $X - \dfrac{1}{2}$ cannot be a best unbiased estimator of $\theta$: a best unbiased estimator must have the uniformly smallest variance among all unbiased estimators of $\theta$, and for any $\theta$ with $\cos(2\pi\theta) > \dfrac{1}{4}$, the estimator $Y$ has strictly smaller variance than $X - \dfrac{1}{2}$.
Thus, there is no typo to correct in the book.