Suppose that I draw $n$ points independently and uniformly at random from $\mathcal{U}(-a,a)$ for some $a\in\mathbb{R}_+$. Denote this set of points $\{X_i\}_{i=1}^n$. Now, for some $-a < x_1 < x_2 < a$, define a function $f:[-a,a] \to \{0,1\}$ by the rule $$ f(x) = \begin{cases}0 & \text{if } x < x_1 \text{ or } x > x_2, \\ 1 & \text{otherwise.}\end{cases} $$ Thus $f$ partitions $[-a,a]$ into three intervals. Denote these three regions $R_1, R_2$, and $R_3$, so that $\sup R_1=x_1$ and $\sup R_2 = x_2$. Now, assuming that I have at least one point in each interval, I form an estimator of $x_1$ by taking the largest point in $R_1$ (denoted $y_1$) and the smallest point in $R_2$ (denoted $y_2 > y_1$), and computing the midpoint $\hat{x}_1=\frac{y_1 + y_2}{2}$.
I am unsure how to begin to show that this estimator is unbiased (or even whether it is unbiased). Can anyone help me get started?
Let's simplify to a single threshold $\theta \in (-a,a)$ with $f(x) = 0$ if $x<\theta$ and $f(x)=1$ otherwise. Suppose it is given that you sample $n$ times and get $k_1$ samples to the left of $\theta$ and $k_2$ samples to the right, where $k_1+k_2=n$, $k_1\geq 1$, and $k_2 \geq 1$. Let $Y_1$ be the maximum of the samples in the left region and $Y_2$ the minimum of the samples in the right region. Your estimator is $\hat{\theta} = (Y_1+Y_2)/2$, so you can compute:
\begin{align} E[\hat{\theta}\mid k_1, k_2] &= \frac{1}{2}E[Y_1\mid k_1] + \frac{1}{2}E[Y_2\mid k_2] \\ &= \frac{1}{2}\left(\theta - \frac{\theta + a}{k_1+1} \right) + \frac{1}{2}\left(\theta + \frac{a-\theta}{k_2+1}\right)\\ &=\theta -\frac{\theta + a}{2(k_1+1)} + \frac{a-\theta}{2(k_2+1)}. \end{align}
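For reference, those two conditional means come from the standard order-statistic fact that the maximum of $k$ i.i.d. $\mathcal{U}(l,u)$ variables has mean $l + \frac{k}{k+1}(u-l)$, and symmetrically the minimum has mean $u - \frac{k}{k+1}(u-l)$. Applied to the two regions:

$$ E[Y_1 \mid k_1] = -a + \frac{k_1}{k_1+1}(\theta + a) = \theta - \frac{\theta+a}{k_1+1}, \qquad E[Y_2 \mid k_2] = a - \frac{k_2}{k_2+1}(a-\theta) = \theta + \frac{a-\theta}{k_2+1}. $$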
This estimator is generally biased, but you can see that the bias goes to zero as $k_1\rightarrow\infty$ and $k_2\rightarrow\infty$.
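If it helps to see this numerically, here is a quick Monte Carlo sketch of the conditional bias formula: conditional on the counts, the $k_1$ left points are i.i.d. $\mathcal{U}(-a,\theta)$ and the $k_2$ right points are i.i.d. $\mathcal{U}(\theta,a)$, so we can sample directly from that conditional distribution. The function name and parameter values below are just illustrative choices.

```python
import random

def simulate_midpoint_mean(a, theta, k1, k2, trials=200_000, seed=0):
    """Monte Carlo estimate of E[(Y1+Y2)/2 | k1, k2] for threshold theta.

    Conditional on the counts, the k1 left points are i.i.d. Uniform(-a, theta)
    and the k2 right points are i.i.d. Uniform(theta, a).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y1 = max(rng.uniform(-a, theta) for _ in range(k1))  # max of left region
        y2 = min(rng.uniform(theta, a) for _ in range(k2))   # min of right region
        total += (y1 + y2) / 2
    return total / trials

# Arbitrary example values (my choice, not from the problem):
a, theta, k1, k2 = 1.0, 0.3, 4, 7
empirical = simulate_midpoint_mean(a, theta, k1, k2)
predicted = theta - (theta + a) / (2 * (k1 + 1)) + (a - theta) / (2 * (k2 + 1))
print(empirical, predicted)  # the two values agree up to Monte Carlo error
```

The empirical mean lands near the predicted $0.21375$ rather than $\theta = 0.3$, making the bias visible.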
If you use an estimator $\hat{\theta} = \frac{(k_1+1) Y_1 + (k_2+1) Y_2}{k_1+k_2+2}$ you get:
$$ E[\hat{\theta} |k_1,k_2] = \theta - \left(\frac{2\theta}{k_1+k_2+2}\right) $$
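The same Monte Carlo check (same conditional sampling scheme as before; helper name and parameter values are mine) confirms that the weighted estimator's bias depends only on $\theta$ and $n$, not on $a$:

```python
import random

def simulate_weighted_mean(a, theta, k1, k2, trials=200_000, seed=0):
    """Monte Carlo estimate of E[((k1+1)Y1 + (k2+1)Y2)/(k1+k2+2) | k1, k2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y1 = max(rng.uniform(-a, theta) for _ in range(k1))  # max of left region
        y2 = min(rng.uniform(theta, a) for _ in range(k2))   # min of right region
        total += ((k1 + 1) * y1 + (k2 + 1) * y2) / (k1 + k2 + 2)
    return total / trials

# Arbitrary example values (my choice):
a, theta, k1, k2 = 1.0, 0.3, 4, 7
empirical = simulate_weighted_mean(a, theta, k1, k2)
predicted = theta - 2 * theta / (k1 + k2 + 2)  # note: no dependence on a
print(empirical, predicted)  # agree up to Monte Carlo error
```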
Define $Y_1, Y_2$ as before, and define $Y_{\max}$ and $Y_{\min}$ as the maximum and minimum of all $n$ sampled values (so, given at least one point on each side, $Y_{\min}$ lies in the left region and $Y_{\max}$ in the right). You can define a new estimator as: $$ \hat{\theta} = (Y_1+Y_2)/2 + (Y_{\max} + Y_{\min})/2 $$ Given that $k_1+k_2=n$, $k_1\geq 1$, and $k_2 \geq 1$, we get: $$ E[\hat{\theta}\mid k_1,k_2] = \theta $$ So this new estimator is unbiased, but it may be less accurate (higher variance) than the original estimator $(Y_1+Y_2)/2$.
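A final Monte Carlo sketch (same conditional sampling setup; names and values are illustrative) verifies that the combined estimator's conditional mean is $\theta$ itself:

```python
import random

def simulate_combined_mean(a, theta, k1, k2, trials=200_000, seed=0):
    """Monte Carlo estimate of E[(Y1+Y2)/2 + (Ymax+Ymin)/2 | k1, k2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        left = [rng.uniform(-a, theta) for _ in range(k1)]
        right = [rng.uniform(theta, a) for _ in range(k2)]
        y1, y2 = max(left), min(right)          # points adjacent to the threshold
        y_min, y_max = min(left), max(right)    # overall extremes of the sample
        total += (y1 + y2) / 2 + (y_max + y_min) / 2
    return total / trials

# Arbitrary example values (my choice):
a, theta, k1, k2 = 1.0, 0.3, 4, 7
empirical = simulate_combined_mean(a, theta, k1, k2)
print(empirical, theta)  # empirical mean matches theta up to Monte Carlo error
```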