I am looking for an interval $J = [a, b]$ (with $0\leq a\leq b\leq n$) such that a random variable $X$ drawn from a binomial distribution with $n$ trials and success probability $p$ lies within $J$ with probability at least $1-\epsilon$, for a given small $\epsilon$.
I know this might seem easy, but I realized I had forgotten how to do it when it came up as part of a problem I'm working on. (This is not homework; it arose in a research problem, and I had trouble remembering the relevant material.)
I know that I can apply a Chernoff bound for the upper tail and use Stirling's formula for the lower tail, but the simplification seems tedious. I'm pretty sure this has been well studied, but I don't know the right keyword to search for. I did try "confidence interval" but then realized it refers to something else in this context.
Does anyone have any ideas?
Any help is appreciated.
The actual answers have no closed form. Concentration inequalities like the results you mention are approximations that give a slightly looser bound, in exchange for simplifying the calculations. It's a question of where in the quality-to-simplicity tradeoff you want to be.
I'm a fan of Hoeffding's inequality, which applies to $\mathrm{Binomial}(n,p)$ and more generally to sums of $n$ independent random variables from $[0,1]$. It says that if the sum has mean $\mu$, then it will be in the interval $[\mu-nt, \mu+nt]$ with probability at least $1-\epsilon = 1-2 e^{-2nt^2}$.
Solving for $t$ in terms of $\epsilon$, we get an interval of $\left[\mu - \sqrt{\frac n2 \ln \frac2\epsilon}, \mu + \sqrt{\frac n2 \ln \frac2\epsilon}\right]$ which is not awfully complicated. For the binomial distribution, of course, $\mu = np$. These bounds will generally not be integers, so you can round them to get a slightly tighter interval; for example, an interval of $[19.7, 35.3]$ is really an interval of $[20,35]$.
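As a quick sanity check, here is a small Python sketch of this interval (the helper names are my own, not standard), together with the exact coverage computed from the binomial pmf:

```python
import math

def hoeffding_interval(n, p, eps):
    """Interval [a, b] containing Binomial(n, p) with prob >= 1 - eps,
    by Hoeffding's inequality."""
    mu = n * p
    h = math.sqrt(n / 2 * math.log(2 / eps))
    # Round inward: an integer-valued X in [mu - h, mu + h] is in
    # [ceil(mu - h), floor(mu + h)], clipped to the support [0, n].
    return max(0, math.ceil(mu - h)), min(n, math.floor(mu + h))

def binom_prob_in(n, p, a, b):
    """Exact Pr[a <= X <= b] for X ~ Binomial(n, p), via the pmf."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(a, b + 1))

a, b = hoeffding_interval(100, 0.3, 0.01)
print(a, b)                            # -> 14 46
print(binom_prob_in(100, 0.3, a, b))   # coverage, should be >= 0.99
```

The exact coverage is typically well above $1-\epsilon$, which is the looseness you pay for the bound's simplicity.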
You might be unhappy with this interval when $p$ is very close to $0$ (or to $1$), because the simplicity tradeoff in Hoeffding's inequality is to have no dependence on $p$ (except to determine $\mu$).
Similarly, using the loosest form of the Chernoff bound we get an interval of $[(1-\delta)\mu, (1+\delta)\mu]$ with $\epsilon = 2e^{-\delta^2\mu/3}$, and solving for $\delta$ shows that for a given $\epsilon$ this is an interval of $\left[\mu - \sqrt{3 \mu \ln \frac2\epsilon} , \mu + \sqrt{3 \mu \ln \frac2\epsilon} \right]$. This form of the Chernoff bound is only valid when $\delta \le 1$, equivalently when $\epsilon \ge 2 e^{-\mu/3}$, which means $\mu = np$ cannot be too small. However, "cannot be too small" is a much weaker condition here than it might sound: to find an interval for $\epsilon = 0.01$, for instance, you only need $\mu \ge 16$. This, plus the fact that the Chernoff bound's error term depends on $\mu$ rather than on $n$, makes it better for small values of $p$.
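A Python sketch of this interval (again with helper names of my own), obtained by solving $\epsilon = 2e^{-\delta^2\mu/3}$ for $\delta$ and checking the $\delta \le 1$ validity condition:

```python
import math

def chernoff_interval(n, p, eps):
    """Interval [(1-delta)mu, (1+delta)mu] containing Binomial(n, p)
    with prob >= 1 - eps, by the loose multiplicative Chernoff bound."""
    mu = n * p
    delta = math.sqrt(3 / mu * math.log(2 / eps))
    if delta > 1:
        # The bound 2*exp(-delta^2 * mu / 3) is only valid for delta <= 1,
        # i.e. mu >= 3*ln(2/eps).
        raise ValueError("mu is too small for this eps")
    h = delta * mu  # = sqrt(3 * mu * ln(2/eps))
    return max(0, math.ceil(mu - h)), min(n, math.floor(mu + h))

# For small p this is much narrower than the Hoeffding interval,
# whose half-width sqrt(n/2 * ln(2/eps)) here would be about 51.5.
print(chernoff_interval(1000, 0.02, 0.01))  # mu = 20  -> (3, 37)
```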
When $p$ gets even smaller than that, you probably want to be messing with the Poisson approximation. One thing that's useful is that if $X \sim \mathrm{Binomial}(n,p)$ and $Y \sim \mathrm{Poisson}(np)$, then $\Pr[Y \ge k]$ is an upper bound on $\Pr[X \ge k]$. So the Poisson distribution can be used to find the upper bound on your interval, which is generally the more difficult of the two. A useful inequality for $Y \sim \mathrm{Poisson}(\lambda)$ is $\Pr[Y \ge y] \le e^{y-\lambda} (\frac y\lambda)^{-y}$, which is a Chernoff-type bound on the Poisson distribution - but this is hard to conveniently solve for $\epsilon$.
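Since that bound is awkward to invert analytically, one practical option is simply to search for the smallest upper endpoint that makes the bound drop below $\epsilon$. A Python sketch (the function names and the search approach are mine):

```python
import math

def poisson_tail_bound(lam, y):
    """Chernoff-type bound Pr[Y >= y] <= e^(y - lam) * (lam/y)^y
    for Y ~ Poisson(lam), valid for y > lam."""
    return math.exp(y - lam) * (lam / y) ** y

def upper_endpoint(lam, eps):
    """Smallest integer b with Pr[Y >= b + 1] <= eps according to the
    bound above, found by linear search upward from the mean."""
    b = math.ceil(lam)
    while poisson_tail_bound(lam, b + 1) > eps:
        b += 1
    return b

# With lambda = np = 2 and eps = 0.01, the bound certifies Y <= 7
# (hence also X <= 7, by the domination above) with prob >= 0.99.
print(upper_endpoint(2.0, 0.01))  # -> 7
```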