In a standard Bernoulli experiment with some constant probability of failure $p$, one can estimate the number of repetitions needed to get at least one success with any desired failure probability $\epsilon$. In other words, by repeating the experiment $O(\log(1/\epsilon))$ times, one can shrink the probability of failing to get at least one success from $p$ to $\epsilon$.
Now consider a similar but different scenario. Say I have a procedure that estimates some real number $a$ up to an additive error $\eta$ and has failure probability $p<1/2$; i.e., in every round of the experiment, the estimate $a'$ falls into the interval $[a-\eta, a+\eta]$ with probability $1-p$, but with probability $p$ it falls outside.
I want to prove that, again, $O(\log (1/\epsilon))$ repetitions of this procedure are enough to decrease the failure probability to any arbitrary $\epsilon$. However, in contrast to the above, I do not know which rounds succeeded and which failed. Intuitively, I should take the mean or perhaps the median of all the estimates $a'_i$, but how do I prove that the $O(\log (1/\epsilon))$ scaling works out?
The median works
Suppose that, after $n$ trials, the median of the $a_i'$ falls outside the interval $[a-\eta,a+\eta]$; WLOG the median is greater than $a+\eta$. Since at least half of the $a_i'$ are greater than or equal to the median, at least half of these $n$ trials were failures.
So an upper bound on the probability of the median falling outside the interval is $F(n,\lceil\frac{n}2 \rceil,p)$, the probability of at least $\lceil\frac{n}2 \rceil$ failures in $n$ trials of a Bernoulli process with failure rate $p$.
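A quick numerical sanity check of this argument (a sketch, with arbitrary assumed values $a=1$, $\eta=0.1$, $p=0.3$, and failures modelled as landing far outside the interval):

```python
import random
import statistics

def noisy_estimate(a, eta, p, rng):
    """Return an estimate in [a - eta, a + eta] with probability 1 - p,
    and a value well outside that interval with probability p."""
    if rng.random() < p:
        return a + rng.choice([-1, 1]) * 10 * eta  # failed round: far off
    return a + rng.uniform(-eta, eta)              # successful round

def median_estimate(a, eta, p, n, rng):
    # Take the median of n independent rounds of the procedure.
    return statistics.median(noisy_estimate(a, eta, p, rng) for _ in range(n))

rng = random.Random(0)
a, eta, p, n, trials = 1.0, 0.1, 0.3, 15, 10_000
failures = sum(
    abs(median_estimate(a, eta, p, n, rng) - a) > eta for _ in range(trials)
)
print(failures / trials)  # empirical failure rate of the median
```

Already at $n=15$ the median's empirical failure rate drops well below the per-round failure rate $p=0.3$, consistent with the binomial-tail bound above.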
In fact, if there is a half chance that the $a_i'$ are uniformly distributed in $[a-\eta,a+2\eta]$ and a half chance that the $a_i'$ are uniformly distributed in $[a-2\eta,a+\eta]$, then this bound is exact.
An upper bound on $F(n,k,p)$, valid for $k\ge np$, is given by Hoeffding's inequality:
$F(n,k,p)\le \exp\left( -2 \frac{(np-k)^2}{n} \right) \quad$ (1)
We want to prove that there exists a $c$ such that $F(n,\lceil\frac{n}2 \rceil,p)\le \epsilon$ whenever $n>c\log\frac{1}{\epsilon}$. Using (1) we have that
$F(n,\lceil\frac{n}2 \rceil,p)\le \exp\left[ \frac{-2\left(c\log\left(\frac{1}{\epsilon}\right)(p-\frac{1}{2})\right)^2}{c\log\frac{1}{\epsilon}}\right]=\epsilon^{2c(p-\frac{1}{2})^2}$.
So $c=\frac{1}{2(p-\frac{1}{2})^2}$ will do.
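To check this arithmetic concretely (with illustrative values $p=0.3$ and $\epsilon=10^{-6}$), one can evaluate the Hoeffding bound at $n=\lceil c\log\frac{1}{\epsilon}\rceil$:

```python
import math

def hoeffding_tail_bound(n, k, p):
    # Hoeffding's bound on P(at least k failures in n trials), for k >= n*p
    return math.exp(-2 * (k - n * p) ** 2 / n)

p, eps = 0.3, 1e-6
c = 1 / (2 * (p - 0.5) ** 2)          # c = 1 / (2 (p - 1/2)^2) = 12.5 here
n = math.ceil(c * math.log(1 / eps))  # number of repetitions needed
print(n, hoeffding_tail_bound(n, math.ceil(n / 2), p))
```

For these values $n=173$ repetitions suffice, and the bound indeed comes out below $\epsilon$. Note that $c$ blows up as $p\to\frac{1}{2}$, which foreshadows the next point.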
Why the $p<\frac{1}{2}$ condition is required
Suppose that $a'=a$ with probability $\frac{1}{2}$, $a'=b$ with probability $\frac{1}{2}$, and $b$ is outside the interval $[a-\eta,a+\eta]$.
Then the value of $a$ can be narrowed down to one of two numbers, but no matter how many times the experiment is repeated it is impossible to guess which of the two numbers is $a$ and which one is $b$ without being wrong half the time.
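A small simulation of this two-point distribution (with hypothetical values $a=0$, $b=1$, $\eta=0.1$) illustrates that the median stops helping at $p=\frac{1}{2}$:

```python
import random
import statistics

rng = random.Random(1)
a, b, eta = 0.0, 1.0, 0.1  # b lies outside [a - eta, a + eta]

def estimate(rng):
    # Each round returns a or b with equal probability: the p = 1/2 case.
    return a if rng.random() < 0.5 else b

trials, n = 5_000, 101  # n odd, so the median is one of the samples
wrong = sum(
    abs(statistics.median(estimate(rng) for _ in range(n)) - a) > eta
    for _ in range(trials)
)
print(wrong / trials)
```

The fraction of runs in which the median lands outside $[a-\eta,a+\eta]$ hovers around $\frac{1}{2}$, and increasing $n$ does not improve it.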