Prove or disprove that the sum is bounded by a constant


Given $p\in[0,1]$, prove or disprove that the sum $$\sum_{n=k}^\infty\sum_{j=0}^k\binom nj p^j(1-p)^{n-j}$$ is bounded by a constant that does not depend on $k$.

The terms $\binom nj p^j(1-p)^{n-j}$ remind me of the binomial expansion. But using that, the most naive estimate is $$\sum_{j=0}^k\binom nj p^j(1-p)^{n-j}\leq\sum_{j=0}^n\binom nj p^j(1-p)^{n-j}=1,$$ and summing over $n$ still gives $\infty$. How should I make a sharper estimate?

P.S. I am not absolutely sure that this holds. I was reading a proof in which such an estimate would yield the final result. I am a bit confused now, because one comment says this is wrong, while an answer seems to have proved it.
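For what it is worth, a quick numerical experiment suggests what to expect. A Python sketch (the function name `double_sum` and the cutoff `N = 400` are my own choices; the truncation is harmless because the inner probabilities decay geometrically in $n$ for fixed $k$):

```python
from math import comb

def double_sum(p, k, N=400):
    """Truncation of sum_{n=k}^infty sum_{j=0}^k C(n,j) p^j (1-p)^(n-j)."""
    return sum(
        comb(n, j) * p**j * (1 - p)**(n - j)
        for n in range(k, N + 1)
        for j in range(k + 1)
    )

for k in (1, 5, 10, 20):
    print(k, double_sum(0.5, k))
```

The values keep growing as $k$ increases, which already hints that no $k$-independent bound exists.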




For each fixed $k$, the sum converges.

The sum is $0$ if $p = 0,1$, so it remains to consider $p \in (0,1)$. We first rewrite the sum as $$ \sum_{n=k}^\infty (1 - p)^{n - k}\sum_{j=0}^k {n \choose j} p^j(1 - p)^{k - j}. $$

We want to estimate ${n \choose j}/{k \choose j}$. Writing it out explicitly gives $$ {n \choose j}/{k \choose j} = \frac{n(n - 1)\cdots(k + 1)}{(n - j)(n - 1 - j)\cdots(k + 1 - j)}. $$

Since $p > 0$, we can fix $\epsilon > 0$ so that $p - \epsilon > 0$. Now let $N$ be large enough that $$ \frac{N}{N - j} \leq \frac{N}{N - k} < \frac{1}{1 - p + \epsilon}. $$ Then, for $n \geq N$: \begin{align*} \frac{n(n - 1)\cdots(k + 1)}{(n - j)(n - 1 - j)\cdots(k + 1 - j)} &\leq \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}\frac{N(N - 1)\cdots(k + 1)}{(N - j)(N - 1 - j)\cdots(k + 1 - j)} \\ &= \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}{N \choose j}/{k \choose j} \end{align*}

Write $M := \max_{0 \leq j \leq k}{N \choose j}/{k \choose j}$, so that ${n \choose j} \leq \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}M{k \choose j}$. Then, for $n \geq N$: \begin{align*} (1 - p)^{n - k}\sum_{j=0}^k {n \choose j} p^j(1 - p)^{k - j} &\leq (1 - p)^{n - k}\sum_{j=0}^k \left(\frac{1}{1 - p + \epsilon}\right)^{n - N}M{k \choose j} p^j(1 - p)^{k - j} \\ &= \left(\frac{1 - p}{1 - p + \epsilon}\right)^{n - k}\left(\frac{1}{1 - p + \epsilon}\right)^{k - N}M\underbrace{\sum_{j=0}^k {k \choose j} p^j(1 - p)^{k - j}}_{= 1} \\ &= \left(\frac{1 - p}{1 - p + \epsilon}\right)^{n - k}\underbrace{\left(\frac{1}{1 - p + \epsilon}\right)^{k - N}M}_\text{constant} \end{align*}

Thus, for $n \geq N$, the terms are dominated by a geometric series with common ratio $\frac{1 - p}{1 - p + \epsilon} < 1$, so the tail converges. Since only finitely many terms remain (those with $k \leq n \leq N - 1$), the series converges.
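The key inequality ${n \choose j} \leq \left(\frac{1}{1 - p + \epsilon}\right)^{n - N} M {k \choose j}$ can be sanity-checked numerically. A Python sketch using exact rational arithmetic; the sample values $p = \tfrac12$, $\epsilon = \tfrac14$, $k = 4$, $N = 20$ are arbitrary choices of mine satisfying $\frac{N}{N-k} < \frac{1}{1-p+\epsilon}$:

```python
from fractions import Fraction
from math import comb

# Check C(n,j) <= (1/(1-p+eps))^(n-N) * M * C(k,j) for sample values.
p_eps = Fraction(3, 4)          # 1 - p + eps with p = 1/2, eps = 1/4
k, N = 4, 20
assert Fraction(N, N - k) < 1 / p_eps   # N is "large enough"

# M = max over j of C(N,j)/C(k,j), as in the proof
M = max(Fraction(comb(N, j), comb(k, j)) for j in range(k + 1))

for n in range(N, 200):
    for j in range(k + 1):
        assert comb(n, j) <= (1 / p_eps) ** (n - N) * M * comb(k, j)
print("inequality verified for n up to 199")
```

Exact `Fraction` arithmetic avoids any floating-point doubt about the comparisons.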


For $X_{nk}\sim B(n,\,p)$, the sum is $\sum_{n\ge k}P(X_{nk}\le k)$. Let $q:=1-p$, and let $\Phi$ denote the $N(0,\,1)$ CDF. For $n\gg k$, $$P(X_{nk}\le k)\approx\Phi\left(\frac{k-np}{\sqrt{npq}}\right)\approx\Phi\left(-\sqrt{\frac{pn}{q}}\right)\approx\frac{1}{\sqrt{2\pi pn/q}}\exp\left(-\frac{pn}{2q}\right).$$ (The normal approximation is only heuristic this far into the tail, but the exponential decay it predicts is genuine.) The sum then converges by comparison with a geometric series.
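The exponential decay of $P(X_{nk}\le k)$ in $n$ is easy to confirm with exact binomial probabilities. A Python sketch (`binom_cdf` and the sample values $p=\tfrac12$, $k=2$ are my own choices):

```python
from math import comb

def binom_cdf(n, k, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

p, k = 0.5, 2
probs = [binom_cdf(n, k, p) for n in range(20, 61, 10)]
ratios = [b / a for a, b in zip(probs, probs[1:])]
print(ratios)  # each ratio is far below 1: the tail probability decays exponentially in n
```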


For convenience I will replace $p$ with $x$. In what follows I assume $0<x<1$. It turns out the sum can be evaluated exactly in a very simple closed form, which answers your question immediately.

As all terms are positive we interchange the order of summation to obtain: $$\begin{align} \boxed{\sum_{n=k}^\infty\sum_{j=0}^k\binom nj x^j(1-x)^{n-j}} &=\sum_{j=0}^k\sum_{n=k}^\infty\binom nj x^j(1-x)^{n-j}\\ &=\sum_{j=0}^k\frac{(-x)^j}{j!} \sum_{n=k}^\infty\frac{d^j}{dx^j} (1-x)^{n}\\ &=\sum_{j=0}^k\frac{(-x)^j}{j!}\frac{d^j}{dx^j}\sum_{n=k}^\infty(1-x)^{n}\\ &=\sum_{j=0}^k\frac{(-x)^j}{j!}\frac{d^j}{dx^j}\frac{(1-x)^k}x\\ &=\sum_{j=0}^k\frac{(-x)^j}{j!}\frac{d^j}{dx^j}\left[\frac1x-\sum_{i=1}^k\binom ki(-x)^{i-1}\right]\\ &=\sum_{j=0}^k\frac{(-x)^j}{j!}\left[-\frac{j!}{(-x)^{j+1}}-\sum_{i=1}^k\binom ki \frac{(-1)^j(i-1)!}{(i-j-1)!}(-x)^{i-j-1}\right]\\ &=\sum_{j=0}^k\left[\frac1x-\sum_{i=1}^k(-1)^j\binom ki \binom{i-1}j(-x)^{i-1}\right]\\ &=\boxed{\frac{k+1}x-k.}\tag1 \end{align} $$ In the final step, interchange the $i$ and $j$ sums: since $\sum_{j=0}^{i-1}(-1)^j\binom{i-1}j=0$ for $i\ge2$, only the $i=1$ term survives, and its sole contribution (from $j=0$) is $\binom k1=k$; the $\frac1x$ terms contribute $\frac{k+1}x$.

From the expression (1) one can draw the following conclusions:

  1. For every fixed $k$, the sum diverges as $x\to0^+$.
  2. For every fixed $x\in(0,1)$, the sum grows without bound as $k\to\infty$; in particular, it is not bounded by a constant independent of $k$.
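The closed form $(1)$ can be sanity-checked numerically by truncating the double sum. A Python sketch (`double_sum` and the cutoff `N = 500` are my own choices; the truncation is harmless because the inner probabilities decay geometrically in $n$):

```python
from math import comb

def double_sum(x, k, N=500):
    """Truncation of sum_{n=k}^infty sum_{j=0}^k C(n,j) x^j (1-x)^(n-j)."""
    return sum(
        comb(n, j) * x**j * (1 - x)**(n - j)
        for n in range(k, N + 1)
        for j in range(k + 1)
    )

for x in (0.2, 0.5, 0.8):
    for k in (0, 1, 3, 7):
        print(x, k, double_sum(x, k), (k + 1) / x - k)
```

For each sample pair the truncated sum agrees with $\frac{k+1}x-k$ to many decimal places, consistent with $(1)$.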