Edit
(As Robert pointed out, what I was trying to prove is incorrect, so I now ask the right question here to avoid posting a duplicate.)
For infinitely many independent Bernoulli trials, each with success probability $p$, define a random variable $N$ equal to the number of successful trials. Intuitively, if $p > 0$ then $\Pr \{N < \infty \} = 0$; in other words, $N = \infty$ almost surely. But I got stuck when I tried to prove this mathematically.
\begin{aligned} \Pr \{ N < \infty \} & = \Pr \{ \cup_{n=1}^{\infty} [N \le n] \} \\ & = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\ & = \lim_{n \rightarrow \infty}\sum_{i=0}^{n} b(i; \infty, p) \\ & = \sum_{i=0}^{\infty} b(i; \infty, p) \end{aligned}
I have no idea how to evaluate the last expression.
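As a sanity check, a quick Monte Carlo simulation (a sketch assuming NumPy is available; the values of $p$, the cutoff $k$, and the trial counts below are arbitrary choices of mine) estimates $\Pr\{N \le k\}$ within the first $m$ trials, and the estimate shrinks toward $0$ as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, reps = 0.1, 5, 100_000  # success probability, cutoff, Monte Carlo runs

for m in [50, 100, 200, 400, 800]:
    # Number of successes in the first m trials, for each of `reps` runs
    successes = rng.binomial(m, p, size=reps)
    # Estimated probability of seeing at most k successes in m trials
    print(m, np.mean(successes <= k))
```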
(Original Question)
For infinitely many independent Bernoulli trials, each with success probability $p$, define a random variable $N$ equal to the number of successful trials. Can we prove that $\Pr \{N < \infty \} = 1$ as follows?
\begin{aligned} \Pr \{ N < \infty \} & = \Pr \{ \cup_{n=1}^{\infty} [N \le n] \} \\ & = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\ & = \lim_{n \rightarrow \infty}\sum_{i=0}^{n} b(i; \infty, p) \\ & = \sum_{i=0}^{\infty} b(i; \infty, p) \\ & = \lim_{m \rightarrow \infty}\sum_{i=0}^{m} b(i; m, p) \\ & = \lim_{m \rightarrow \infty}[p + (1 - p)]^m \\ & = \lim_{m \rightarrow \infty} 1^m \\ & = 1 \end{aligned}
I know there must be a mistake somewhere in this derivation, because if $p = 1$ then $N$ must be infinite, so the conclusion can only hold when $p < 1$. Which step is wrong?
You want to compute the probability of exactly $s$ successes for $s = 0, 1, 2, \ldots$. The crucial point is that $s$ is fixed first, and then you compute the probability of getting exactly $s$ successes when you toss infinitely many coins, each with success probability $p$. In other words, for $0 < p < 1$ we want $$ \lim_{m \to \infty} b(s; m, p) = \lim_{m \to \infty} \binom{m}{s} p^s (1-p)^{m-s} = \left(\frac{p}{1-p}\right)^s \lim_{m \to \infty} \binom{m}{s} (1-p)^m. $$ Intuitively, this limit should come out to $0$ (since you are tossing infinitely many coins). How can we justify that rigorously? By suitably upper bounding the function of $m$, and then using the sandwich theorem.
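A quick numeric check (a sketch using SciPy's binomial pmf; the values of $p$ and $s$ here are arbitrary illustrative choices) shows $b(s; m, p)$ vanishing as $m$ grows:

```python
from scipy.stats import binom

p, s = 0.3, 4  # illustrative values: success probability and fixed success count

for m in [10, 20, 50, 100, 200]:
    # b(s; m, p): probability of exactly s successes in m trials
    print(m, binom.pmf(s, m, p))
```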
When $s$ is fixed, the first factor $\binom{m}{s}$ grows at most polynomially in $m$, since we can bound it loosely by $\binom{m}{s} \leq m^s$. On the other hand, $(1-p)^m$ goes to zero exponentially fast. Can you use this to finish the proof?
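For completeness, here is one possible write-up of that sandwich argument (assuming $0 < p < 1$): $$ 0 \le \binom{m}{s} (1-p)^m \le m^s (1-p)^m = e^{s \ln m + m \ln(1-p)} \xrightarrow{m \to \infty} 0, $$ since $\ln(1-p) < 0$, so the term $m \ln(1-p)$ dominates $s \ln m$. By the sandwich theorem, $\Pr\{N = s\} = \lim_{m \to \infty} b(s; m, p) = 0$ for every fixed $s$, and therefore $\Pr\{N < \infty\} = \sum_{s=0}^{\infty} \Pr\{N = s\} = 0$.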