Let $X$ be a binomial random variable $\sim B(p, n)$. Show that for $\lambda > 0$ and $\epsilon > 0$, $P(X - np > n\epsilon) \le \mathbb{E}\{e^{\lambda(X - np - n\epsilon)}\}$.
Here is my attempt:
Let $A = \{ \omega \in \Omega: X(\omega) > n(\epsilon + p)\}$. Then note that
$$e^{\lambda(X(\omega) - np - n\epsilon)} \ge 1_A(\omega)$$
Indeed, if $\omega \notin A$ the right-hand side is $0$ while the left-hand side is positive; and if $\omega \in A$ the right-hand side is $1$, while the exponent $\lambda(X(\omega) - np - n\epsilon)$ is $> 0$, so the left-hand side is $e^{\text{positive}} > 1$.
So taking expectations on both sides yields
$$\mathbb{E}\{e^{\lambda(X - np - n\epsilon)}\} \ge \mathbb{E}\{1_A\} = P(A) = P(X - np > n\epsilon),$$
which is what I wanted to prove.
Something has to be wrong though, because I used nowhere the fact that $X$ is a binomial random variable!
What's wrong with it?
You've proven a special case of what Wikipedia calls the exponential version of Chebyshev's inequality. Note that you can pull a factor of $e^{-\lambda n\epsilon}$ out of the expectation on the right-hand side, so you're equivalently showing $$ P(X - np > n\epsilon) \le e^{-\lambda n\epsilon} \mathbb{E}[e^{\lambda(X-np)}], $$ which is the form given on Wikipedia (and in other references, of course), where the random variable is $Z = X - np$, your $\lambda$ is $t$, and your threshold is $n\epsilon$.

This inequality doesn't require $X$ to be binomial, so there's nothing wrong with your proof even though you didn't use that fact. The binomial assumption only comes into play later, when you actually evaluate $\mathbb{E}[e^{\lambda(X-np)}]$ to get a concrete bound.
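If it helps to see the inequality hold concretely, here is a small numerical sanity check (not part of the proof): it computes both sides exactly for a specific binomial, using the fact that $\mathbb{E}[e^{\lambda X}] = (1 - p + pe^{\lambda})^n$ for $X \sim B(p, n)$. The function names and the parameter values $n = 50$, $p = 0.3$, $\epsilon = 0.1$ are my own choices for illustration.

```python
import math

def binom_pmf(n, p, k):
    # exact binomial probability mass function P(X = k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(n, p, eps):
    # left-hand side: P(X - n p > n eps), summed exactly over the pmf
    return sum(binom_pmf(n, p, k) for k in range(n + 1) if k - n * p > n * eps)

def chernoff_rhs(n, p, eps, lam):
    # right-hand side: e^{-lam n eps} * E[e^{lam (X - n p)}],
    # using the binomial MGF E[e^{lam X}] = (1 - p + p e^{lam})^n
    mgf = (1 - p + p * math.exp(lam)) ** n
    return math.exp(-lam * n * eps) * math.exp(-lam * n * p) * mgf

n, p, eps = 50, 0.3, 0.1
for lam in (0.1, 0.5, 1.0):
    # the bound holds for every lam > 0
    assert tail_prob(n, p, eps) <= chernoff_rhs(n, p, eps, lam)
```

Since the bound holds for every $\lambda > 0$, the usual next step is to minimize the right-hand side over $\lambda$, which is where the Chernoff bound for the binomial comes from.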