I have a series of $N$ independent Bernoulli trials, each succeeding with probability $p$. I need to calculate the probability of passing more than $N/2$ of the tests, as a function of $N$ and $p$.
The obvious solution is a Chernoff bound: $\varepsilon \leq 2^{-N(p-\frac{1}{2})^2}$, but this is not tight enough for me. I actually need a stronger dependence on $p$. Is one available?

I tried fitting the Hoeffding, Azuma, and Bernstein inequalities, but none of them seems to give a sufficiently strong dependence on $p$ either.
Is there any convenient estimate? What I need is something like $\varepsilon \leq 2^{-Np}$.
Did you try the relative-entropy version of the Chernoff bound?
Given $n$ samples, define $\hat p$ to be the empirical probability (i.e. successes divided by total trials). Then for any $q \in [p, 1]$ we have
$$ P(\hat p \geq q) < \exp\bigl(-n \, \mathrm{RE}(q, p)\bigr), $$ where the relative entropy $\mathrm{RE}(q,p)$ is defined as $$ \mathrm{RE}(q,p) = q \ln \frac{q}{p} + (1-q) \ln \frac{1-q}{1-p}. $$
The analogous bound holds for $P(\hat p \leq q)$ when $q \leq p$. In your case, with $p > 1/2$, you would take $q = 1/2$ and use the lower-tail version.
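To see how much tighter this is in practice, here is a small sketch (the parameter values $n = 100$, $p = 0.8$ are arbitrary choices for illustration) that evaluates the relative-entropy bound on the lower tail $P(\hat p \leq 1/2)$ and compares it against the exact binomial tail probability:

```python
import math

def relative_entropy(q, p):
    """RE(q, p) = q ln(q/p) + (1-q) ln((1-q)/(1-p)), for 0 < q, p < 1."""
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def chernoff_bound(n, p, q):
    """Relative-entropy Chernoff bound on P(hat_p <= q), valid for q <= p."""
    return math.exp(-n * relative_entropy(q, p))

def exact_lower_tail(n, p, q):
    """Exact P(successes <= floor(q * n)) summed from the binomial pmf."""
    k_max = math.floor(q * n)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_max + 1))

n, p, q = 100, 0.8, 0.5  # example values: 100 trials, success prob 0.8
bound = chernoff_bound(n, p, q)
exact = exact_lower_tail(n, p, q)
print(f"Chernoff (RE) bound: {bound:.3e}")
print(f"Exact tail:          {exact:.3e}")
```

The bound decays like $e^{-n\,\mathrm{RE}(1/2,\,p)}$, and $\mathrm{RE}(1/2, p)$ grows with the gap between $p$ and $1/2$, which is exactly the stronger dependence on $p$ asked for above.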