Given $p\in[0,1]$, prove or disprove that the sum $$\sum_{n=k}^\infty\sum_{j=0}^k\binom{n}{j}p^j(1-p)^{n-j}$$ is bounded by a constant that does not depend on $k$.
The terms $\binom{n}{j}p^j(1-p)^{n-j}$ do remind me of the binomial expansion, but the most naive estimate that gives is $$\sum_{j=0}^k\binom{n}{j}p^j(1-p)^{n-j}\leq\sum_{j=0}^n\binom{n}{j}p^j(1-p)^{n-j}=1,$$ and then summing over $n$ still gives $\infty$. How should I make a sharper estimate?
P.S. I am not absolutely sure that this holds. It appears as an estimate in a proof I was reading, where it would yield the final result. I am a bit confused now because one comment says this is wrong, while an answer seems to have proved it.
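Not a proof either way, but a quick numerical sanity check may help decide which side to believe. The helper name `tail_sum` below is just illustrative; it truncates the series once the inner probability $P(\mathrm{Bin}(n,p)\le k)$ drops below a tolerance, which is safe for $p\in(0,1]$ since that probability decays geometrically in $n$ for fixed $k$:

```python
from math import comb

def tail_sum(p, k, tol=1e-12):
    """Approximate sum_{n>=k} sum_{j=0}^{k} C(n,j) p^j (1-p)^(n-j).

    Assumes 0 < p <= 1: the inner sum P(Bin(n,p) <= k) then decays
    geometrically in n for fixed k, so we stop once it falls below tol.
    """
    total = 0.0
    n = k
    while True:
        inner = sum(comb(n, j) * p**j * (1 - p)**(n - j)
                    for j in range(k + 1))
        total += inner
        if inner < tol:
            return total
        n += 1

# Probe the dependence on k at p = 1/2:
for k in (5, 10, 20, 40):
    print(k, tail_sum(0.5, k))
```

For $p=\tfrac12$ the printed values grow roughly linearly in $k$, which supports the comment saying that a $k$-independent bound cannot hold.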
The sum is finite for every fixed $k$ (though the bound obtained below depends on $k$).
If $p = 0$, every $j = 0$ term equals $1$, so the sum diverges; if $p = 1$, only the $n = j = k$ term survives and the sum equals $1$. So it remains to consider $p \in (0,1)$. We first rewrite the sum as $$ \sum_{n=k}^\infty (1 - p)^{n - k}\sum_{j=0}^k {n \choose j} p^j(1 - p)^{k - j}. $$

We want to estimate ${n \choose j}/{k \choose j}$. Writing it out explicitly gives $$ {n \choose j}\Big/{k \choose j} = \frac{n(n - 1)\cdots(k + 1)}{(n - j)(n - 1- j)\cdots(k + 1 - j)}. $$ Since $p > 0$, we can fix $\epsilon > 0$ so that $p - \epsilon > 0$. Now let $N$ be large enough that $$ \frac{N}{N - j} \leq \frac{N}{N - k} < \frac{1}{1 - p + \epsilon}. $$ Then, for $n \geq N$, since each factor $m/(m - j)$ with $m > N$ is at most $N/(N - j)$, \begin{align*} \frac{n(n - 1)\cdots(k + 1)}{(n - j)(n - 1- j)\cdots(k + 1 - j)} &\leq \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}\frac{N(N - 1)\cdots(k + 1)}{(N - j)(N - 1- j)\cdots(k + 1 - j)} \\ &= \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}{N \choose j}\Big/{k \choose j}. \end{align*}

We write $M := \max_{0 \leq j \leq k}{N \choose j}/{k \choose j}$, so that ${n \choose j} \leq \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}M{k \choose j}$. Then, for $n \geq N$, we have \begin{align*} (1 - p)^{n - k}\sum_{j=0}^k {n \choose j} p^j(1 - p)^{k - j} &\leq (1 - p)^{n - k}\sum_{j=0}^k \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}M{k \choose j} p^j(1 - p)^{k - j} \\ &= \left(\frac{1 - p}{1 - p + \epsilon}\right)^{n - k}\left(\frac{1}{1 - p + \epsilon}\right)^{k - N}M\underbrace{\sum_{j=0}^k {k \choose j} p^j(1 - p)^{k - j}}_{= 1} \\ &= \left(\frac{1 - p}{1 - p + \epsilon}\right)^{n - k}\underbrace{\left(\frac{1}{1 - p + \epsilon}\right)^{k - N}M}_\text{constant}. \end{align*}

Thus, for $n \geq N$, the terms are dominated by a geometric series with common ratio $\frac{1 - p}{1 - p + \epsilon} < 1$, which converges. Since only finitely many terms come before that (those with $k \leq n \leq N - 1$, each at most $1$), the whole series converges. Note, however, that $N$ and $M$ both depend on $k$, so this argument yields convergence for each fixed $k$ rather than a bound independent of $k$.
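For completeness, here is a sketch of an exact evaluation via a standard waiting-time (negative binomial) argument, assuming $p \in (0,1]$. The notation $S_n \sim \mathrm{Bin}(n,p)$ (number of successes in $n$ i.i.d. Bernoulli$(p)$ trials, $S_0 = 0$) and $T_{k+1}$ (the trial on which the $(k+1)$-st success occurs) is introduced here, not taken from the posts above:

```latex
% S_n <= k holds exactly for n = 0, 1, ..., T_{k+1} - 1, so the number of
% such n equals T_{k+1}, whose negative-binomial mean is (k+1)/p.
\begin{align*}
\sum_{n=0}^{\infty} \Pr(S_n \le k)
  &= \mathbb{E}\,\#\{n \ge 0 : S_n \le k\}
   = \mathbb{E}[T_{k+1}] = \frac{k+1}{p}, \\
\sum_{n=k}^{\infty} \sum_{j=0}^{k} \binom{n}{j} p^j (1-p)^{n-j}
  &= \frac{k+1}{p} - \underbrace{k}_{\text{terms } 0 \le n \le k-1,\ \text{each} = 1}
   = \frac{k(1-p) + 1}{p}.
\end{align*}
```

This grows linearly in $k$ whenever $p < 1$ (and equals $1$ at $p = 1$), so the series converges for each $k$ but cannot be bounded by a constant independent of $k$.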