Suppose we have $X_1,\dots,X_n$ i.i.d. observations with parameter $\alpha$ $(0<\alpha<1)$. The probability mass function is $p_{\alpha}(x)=\alpha$ if $x=1$ and $p_{\alpha}(x)=1-\alpha$ if $x=2$. I am asked to find an estimator for $\alpha$ that is a function of $X_1$ alone, and then to find a more efficient estimator using the Rao-Blackwell theorem.
I think $2-X_1$ is such an estimator, because it is unbiased: $E(2-X_1)=2-E(X_1)=2-(\alpha + 2(1-\alpha))=\alpha.$
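As a quick Monte Carlo sanity check of this unbiasedness (a sketch only; the value $\alpha=0.3$ and all names are arbitrary choices of mine):

```python
import random

random.seed(0)
alpha = 0.3           # true parameter, chosen arbitrarily for the check
trials = 200_000

# Draw X with P(X = 1) = alpha and P(X = 2) = 1 - alpha,
# then average the base estimator 2 - X over many independent draws.
draws = [1 if random.random() < alpha else 2 for _ in range(trials)]
est_mean = sum(2 - x for x in draws) / trials
print(est_mean)  # close to alpha = 0.3
```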
Now, to find a sufficient statistic I computed the likelihood function, which is $\prod^n_{i=1}p_{\alpha}(x_i)=\alpha^{2n-\sum^n_{i=1}x_i}(1-\alpha)^{\sum^n_{i=1}x_i-n}$. By the factorization theorem, $\sum^n_{i=1}X_i$ is a sufficient statistic.
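One way to see this sufficiency concretely: two samples with the same total give the same likelihood for every $\alpha$. A small sketch (the helper name `likelihood` is mine):

```python
from math import prod

def likelihood(xs, alpha):
    # product of p_alpha(x_i): alpha for x_i = 1, (1 - alpha) for x_i = 2
    return prod(alpha if x == 1 else 1 - alpha for x in xs)

a = [1, 2, 2, 1, 2]   # sum = 8
b = [2, 1, 2, 2, 1]   # sum = 8, different ordering
for alpha in (0.2, 0.5, 0.9):
    assert abs(likelihood(a, alpha) - likelihood(b, alpha)) < 1e-12
print("equal sums give equal likelihoods")
```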
The Rao-Blackwell part: since $2-X_1$ is the indicator of the event $\{X_1=1\}$, the better estimator will be $$E\left(2-X_1\,\middle|\,\sum_{i=1}^nX_i=t\right)=\frac{P(X_1=1)\,P\left(\sum_{i=2}^nX_i=t-1\right)}{P\left(\sum_{i=1}^nX_i=t\right)}$$
Since the right-hand side doesn't depend on $\alpha$ (because $\sum_{i=1}^nX_i$ is a sufficient statistic), it should be enough to count the ways a number can be written as a sum of $1$'s and $2$'s, which gives the Fibonacci numbers. So the efficient estimator would be the ratio of two consecutive Fibonacci numbers.
Is my solution correct? I doubt myself because the problem gave hardly any numerical data, yet I estimated the parameter with a specific number.
Your base estimator and sufficient statistic are right, but the Fibonacci count doesn't apply here: the number of summands is fixed at $n$ (or $n-1$), so the relevant counts are binomial coefficients, not Fibonacci numbers. You can carry the computation further and get a familiar form for the Rao-Blackwell estimator.
We can re-parameterize as $Y = X-1$, then $P[Y=0] = \alpha$ and $P[Y = 1] = 1-\alpha$.
Then we can define a statistic $T = \sum_{i=1}^n Y_i$.
Using the same base estimator as in your answer, we take $\hat{\alpha} = 1-Y_1$, so $E[\hat{\alpha}] = 1 - E[Y_1] = \alpha$.
We compute $E[1-Y_1|T=t] = \frac{E[(1-Y_1)I_{T=t}]}{P[T=t]}$.
Note that $I_{T=t}$ is an indicator variable, and $1-Y_1 = I_{(Y_1=0)}$ can be thought of as an indicator variable as well. The only situation in which both indicators are non-zero is when $Y_1 = 0$ and $\sum_{i=2}^n Y_i = t$.
These are independent events, thus we can factor the expectation into:
Writing $T_{2}^{n} = \sum_{i=2}^n Y_i$, this gives \begin{equation} \begin{split} E[1-Y_1|T=t] &= \frac{E[(1-Y_1)I_{T=t}]}{P[T=t]}\\ & = \frac{E[I_{(Y_1 = 0)}I_{(T_{2}^n=t)}]}{P[T=t]}\\ & = \frac{E[I_{(Y_1 = 0)}]\,E[I_{(T_{2}^n=t)}]}{P[T=t]}\\ & = \frac{P[Y_1 = 0]\,P[T_{2}^n=t]}{P[T=t]}\\ & = \frac{\alpha \binom{n-1}{t}\alpha^{n-1-t}(1-\alpha)^t}{\binom{n}{t}\alpha^{n-t}(1-\alpha)^t}\\ & = \frac{\binom{n-1}{t}}{\binom{n}{t}} = \frac{(n-1)!\,t!\,(n-t)!}{n!\,t!\,(n-1-t)!}\\ & = \frac{n-t}{n} = 1-\bar{Y} = 2-\bar{X} \end{split} \end{equation}
So the Rao-Blackwell estimator is $2-\bar{X}$, the sample-mean analogue of your base estimator $2-X_1$.
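As a sanity check on the variance reduction Rao-Blackwell promises, here is a small simulation comparing the base estimator $2-X_1$ with $2-\bar{X}$ (a sketch; the values $\alpha=0.3$, $n=10$ and all names are arbitrary choices of mine):

```python
import random

random.seed(1)
alpha, n, reps = 0.3, 10, 100_000

base, rb = [], []
for _ in range(reps):
    xs = [1 if random.random() < alpha else 2 for _ in range(n)]
    base.append(2 - xs[0])          # estimator using X_1 only
    rb.append(2 - sum(xs) / n)      # Rao-Blackwellized estimator 2 - X bar

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Both are unbiased for alpha, but the conditioned estimator has
# roughly 1/n times the variance of the base estimator.
print(mean(base), var(base))
print(mean(rb), var(rb))
```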