Suppose a statistical model comprises all continuous distributions on $\mathbb{R}$. Based on $n$ i.i.d. samples $x_i$, find a UMVU estimator for $P((0,1))$, where $P$ is the true distribution.
I have three questions:
In textbooks, I learned that a statistical model is denoted by $P_\theta$, indexed by $\theta\in \Omega$, where $\theta$ is usually a scalar or a vector. In this question, should I understand $\theta$ as an index ranging over the (uncountable) family of continuous distributions?
In this case (where $\theta$ is not a finite-dimensional parameter), is there something like the MLE for inferring $\theta$?
Intuitively, the solution to the question is $\hat p = \frac{1}{n}\sum_i 1(x_i\in (0,1))$. I can compute its variance, $\frac{p(1-p)}{n}$, where $p=P((0,1))$. How can I prove it's UMVU?
The problem can be reduced to the two events $x\in(0,1)$ and $x\not\in(0,1)$: each observation enters the problem only through the indicator $1(x_i\in(0,1))$, so you're effectively looking for a UMVU estimator of the success probability of a Bernoulli variable. This is indeed the one you proposed: by the Lehmann–Scheffé theorem, the number of successes $\sum_i 1(x_i\in(0,1))$ is a complete sufficient statistic for the Bernoulli parameter, and your estimator is an unbiased function of it, so it is UMVU.
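To make the reduction concrete, here is a minimal sketch of the estimator, checked against a distribution with a known value of $P((0,1))$. The standard-normal choice, the function name, and the sample size are illustrative assumptions, not part of the question:

```python
import numpy as np
from scipy.stats import norm

def interval_freq(x, a=0.0, b=1.0):
    """Empirical frequency of samples in (a, b).

    This is the proposed estimator: the mean of the Bernoulli
    indicators 1(x_i in (a, b)), i.e. (#successes)/n.
    """
    x = np.asarray(x)
    return np.mean((x > a) & (x < b))

# Sanity check (illustrative): for standard normal data,
# P((0,1)) = Phi(1) - Phi(0).
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)

est = interval_freq(x)
true_p = norm.cdf(1.0) - norm.cdf(0.0)
se = np.sqrt(true_p * (1 - true_p) / n)  # the p(1-p)/n variance from above
```

The estimator should land within a few standard errors ($\approx 0.0015$ here) of the true probability $\Phi(1)-\Phi(0)\approx 0.3413$.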