Technique to calculate expectation/summation


I have the following summation: $$\sum_{v=0}^{n} {n \choose v} A^v \frac{1}{B+v}$$ How do I calculate it? I know we could use the binomial theorem directly if we did not have the $\frac{1}{B+v}$ term, and I could use integration as a crude way to get the summation if I did not have the ${n \choose v}$ term. But how do I get the summation with both of them? Would differentiating the binomial theorem or something like that help?
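(For concreteness, the sum is cheap to evaluate by brute force for moderate $n$, which is useful for checking any proposed closed form; the parameter values below are just placeholders:)

```python
from math import comb

def direct_sum(A, B, n):
    # brute-force evaluation of sum_{v=0}^{n} C(n,v) * A^v / (B+v)
    return sum(comb(n, v) * A**v / (B + v) for v in range(n + 1))

print(direct_sum(0.5, 2.0, 10))  # placeholder parameters
```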

For a bit of background, this summation comes from trying to find the expectation of a random variable $X$: the variable takes the value $\frac{1}{B+v}$ with probability mass ${n \choose v} \cdot A^v$. I would also be happy with some way to calculate this expectation without going through the summation, for example via the moment generating function (which I tried but could not apply) or some other method that circumvents the sum.

I also apologize if something is wrong with my post; this is my first post on Stack Exchange and I am happy to correct any errors I may have made.

Thanks!


Best answer:

According to Wolfram Alpha, $$S=\frac 1B\,{}_2F_1(-n,B;B+1;-A)$$ where $_2F_1$ is the Gauss hypergeometric function, defined with rising Pochhammer symbols: $$\,{}_2F_1(a,b;c;z)=\sum_{k=0}^\infty\frac{(a)_k(b)_k}{(c)_k}\frac{z^k}{k!},\qquad (x)_k=\prod_{j=0}^{k-1}(x+j).$$ Since one of the upper parameters is the negative integer $-n$, we have $(-n)_k=0$ for $k>n$ and the series terminates. Written out, your sum is $$S=\frac 1B\sum_{k=0}^{n}\frac{(-n)_k\,(B)_k}{(B+1)_k}\,\frac{(-A)^k}{k!}.$$

This looks worse than it is, because the Pochhammer symbols collapse nicely: $$\frac{(B)_k}{(B+1)_k}=\frac{B(B+1)\cdots(B+k-1)}{(B+1)(B+2)\cdots(B+k)}=\frac{B}{B+k},\qquad \frac{(-n)_k}{k!}\,(-A)^k={n\choose k}A^k,$$ and so $$S=\frac 1B\sum_{k=0}^{n}{n\choose k}A^k\,\frac{B}{B+k}=\sum_{k=0}^{n}{n\choose k}\frac{A^k}{B+k},$$ which is exactly your original sum. So the hypergeometric expression is a compact closed form for it, though whether that counts as "calculating" the sum depends on what you are willing to accept.
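As a sanity check (a sketch in exact rational arithmetic; the test values $A=3$, $B=5$, $n=7$ are arbitrary), the terminating $_2F_1$ series with rising Pochhammer symbols reproduces the direct sum exactly:

```python
from fractions import Fraction
from math import comb, factorial

def rising(x, k):
    # rising Pochhammer symbol (x)_k = x (x+1) ... (x+k-1)
    out = Fraction(1)
    for j in range(k):
        out *= x + j
    return out

def hyp_form(A, B, n):
    # (1/B) * 2F1(-n, B; B+1; -A); the factor (-n)_k kills every term with k > n
    A, B = Fraction(A), Fraction(B)
    s = sum(rising(-n, k) * rising(B, k) / (rising(B + 1, k) * factorial(k)) * (-A)**k
            for k in range(n + 1))
    return s / B

def direct_sum(A, B, n):
    # sum_{v=0}^{n} C(n,v) A^v / (B+v)
    A, B = Fraction(A), Fraction(B)
    return sum(comb(n, v) * A**v / (B + v) for v in range(n + 1))

assert hyp_form(3, 5, 7) == direct_sum(3, 5, 7)   # exact rational equality
```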

Second answer:

This isn't a complete answer, just a record of what happens if you do some of the "usual" moves. With a denominator like that you want to integrate the binomial theorem. We have

$$x^{b-1} (1 + ax)^n = \sum_{v=0}^n {n \choose v} a^v x^{b+v-1}$$

and integrating gives

$$\boxed{ I(b, n) = \sum_{v=0}^n {n \choose v} \frac{a^v}{b+v} = \int_0^1 x^{b-1} (1 + ax)^n \, dx }$$

(with the dependence on $a$ suppressed). This integral is a variant of the Beta function, which it would reduce to if we had $a = -1$.
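As a numerical sanity check of the boxed identity, the sum and the integral can be compared directly (composite Simpson quadrature; the parameter values are placeholders, with $b > 1$ so the integrand is finite at $x = 0$):

```python
from math import comb

def sum_I(a, b, n):
    # left-hand side: sum_{v=0}^{n} C(n,v) a^v / (b+v)
    return sum(comb(n, v) * a**v / (b + v) for v in range(n + 1))

def quad_I(a, b, n, m=20000):
    # right-hand side: int_0^1 x^(b-1) (1+ax)^n dx, composite Simpson's rule (m even)
    f = lambda x: x**(b - 1) * (1 + a*x)**n
    h = 1.0 / m
    total = f(0.0) + f(1.0)
    for i in range(1, m):
        total += (4.0 if i % 2 else 2.0) * f(i * h)
    return total * h / 3.0

a, b, n = 0.7, 2.5, 12   # placeholder parameters
assert abs(sum_I(a, b, n) - quad_I(a, b, n)) < 1e-8 * sum_I(a, b, n)
```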


Edit 1: If an estimate is enough, it depends on how big $a, b, n$ are relative to each other but here are some things you can say. For $b > 0$ we have

$$\frac{1}{b + n} \le \frac{1}{b + v} \le \frac{1}{b}$$

which gives

$$\frac{(a + 1)^n}{b + n} \le I(b, n) \le \frac{(a + 1)^n}{b}.$$
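These elementary bounds are easy to spot-check (the parameter values below are arbitrary, any $a > 0$, $b > 0$ should work):

```python
from math import comb

def I(a, b, n):
    # I(b, n) = sum_{v=0}^{n} C(n,v) a^v / (b+v), with the dependence on a written out
    return sum(comb(n, v) * a**v / (b + v) for v in range(n + 1))

a, b, n = 1.3, 4.0, 25   # arbitrary a > 0, b > 0
assert (a + 1)**n / (b + n) <= I(a, b, n) <= (a + 1)**n / b
```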

If $n$ is small, especially small compared to $b$, this is already quite good. But I imagine in your application $n$ is large. These bounds at least pin down the asymptotic growth as $n \to \infty$ up to a factor of $O(n)$. But I also imagine you really want to estimate the expected value which is given by dividing by $(a + 1)^n$. So let's look at

$$J(b, n) = \int_0^1 x^{b-1} \left( \frac{ax + 1}{a + 1} \right)^n \, dx.$$

For $b \ge 1$ the integrand is increasing on the interval $[0, 1]$ and reaches a maximum of $1$ at $x = 1$, which suggests the change of coordinates $y = 1 - x$. This gives

$$J(b, n) = \int_0^1 (1 - y)^{b-1} \left( \frac{(a+1) - ay}{a+1} \right)^n \, dy = \int_0^1 (1 - y)^{b-1} \left( 1 - \frac{ay}{a+1} \right)^n \, dy.$$

We get the lower bound $\frac{1}{b+n} \le J(b, n)$ (equivalent to our previous lower bound for $I$) by writing $1 - y \le 1 - \frac{ay}{a+1}$ and integrating. We get the upper bound $J(b, n) \le \frac{1}{b}$ (equivalent to our previous upper bound for $I$) by ignoring the second factor and replacing it by $1$, and we see that this will be a good bound if $b$ is large and $a$ is small (so that $\frac{a}{a+1}$ is small) and $n$ is also small, but probably not otherwise.

For ease of notation, from here on write $r = \frac{a}{a+1}$, so that $a \in (0, \infty) \Leftrightarrow r \in (0, 1)$. To get a sharper upper bound we can use the weighted AM-GM inequality which gives

$$\begin{align} J(b, n) &\le \int_0^1 \left( 1 - \frac{b+rn-1}{b+n-1} y\right)^{b+n-1} \, dy \\ &= \frac{b+n-1}{(b+n)(b+rn-1)} \left( 1 - \left( 1 - \frac{b+rn-1}{b+n-1} \right)^{b+n} \right) \\ &\le \frac{b+n-1}{(b+n)(b+rn-1)} \\ &\le \frac{1}{b+rn-1} \end{align}$$

which is pretty close to matching the lower bound $\frac{1}{b+n}$ and gets closer the closer $r$ is to $1$, or equivalently the larger $a$ is. A similar bound which may be easier to analyze and think about comes from applying the inequality $(1 - x)^n \le \exp(-nx)$, which gives

$$\begin{align} J(b, n) &\le \int_0^1 \exp \left( -(b+rn-1) y \right) \, dy \\ &= \frac{1 - \exp \left( -(b+rn-1) \right)}{b+rn-1} \\ &\le \frac{1}{b+rn-1}. \end{align}$$

Altogether we get an improved range (although only improved on the upper bound side)

$$\boxed{ \frac{1}{b+n} \le J(b, n) \le \frac{1}{b+rn-1} }$$

with sharper but more complicated upper bounds available as desired.

One way to think about this upper bound is the following. By the central limit theorem, the binomial distribution

$$\mathbb{P}(X = v) = \frac{1}{(a+1)^n} {n \choose v} a^v$$

(which is the distribution with respect to which we are computing the expected value of $\frac{1}{b+X}$) is asymptotically Gaussian with mean $\frac{an}{a+1} = rn$, and its tails decay faster than exponentially away from this mean. This means that the original sum $\sum {n \choose v} \frac{a^v}{b+v}$ is dominated by the terms with $v \approx rn$ (more precisely, $v$ within some constant number of standard deviations of $rn$), for which $\frac{1}{b+v} \approx \frac{1}{b+rn}$. So the appearance of $b+rn$ isn't surprising from this point of view.
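This heuristic is easy to probe numerically (the parameters below are arbitrary): most of $J$ should come from $v$ within a few standard deviations of $rn$, and $J \cdot (b+rn)$ should be close to $1$:

```python
from math import comb, sqrt

a, b, n = 2.0, 3.0, 200              # arbitrary illustrative parameters
r = a / (a + 1)                      # binomial success probability
sigma = sqrt(n * r * (1 - r))        # standard deviation of Bin(n, r)

probs = [comb(n, v) * a**v / (a + 1)**n for v in range(n + 1)]
J = sum(p / (b + v) for v, p in enumerate(probs))

# portion of J contributed by v within 3 sigma of the mean rn
core = sum(p / (b + v) for v, p in enumerate(probs) if abs(v - r*n) <= 3*sigma)

print(core / J)        # nearly all of the sum
print(J * (b + r*n))   # close to 1, i.e. J is approximately 1/(b+rn)
```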


Edit 2: I can match the lower bound now. For $r \in (0, 1)$ we have the inequality

$$(1 - y)^{rn} \le (1 - ry)^n$$

(which follows from the reversed Bernoulli's inequality $(1 - y)^r \le 1 - ry$), which gives

$$J(b, n) \ge \int_0^1 (1 - y)^{b+rn-1} \, dy = \frac{1}{b+rn}$$

so we finally have the very nice-looking

$$\boxed{ \frac{1}{b+rn} \le J(b, n) \le \frac{1}{b+rn-1} }$$

which I'm pretty happy with at this point!
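The final sandwich is likewise cheap to spot-check (the parameter triples below are arbitrary, with $b \ge 1$ and $a > 0$):

```python
from math import comb

def J(a, b, n):
    # J(b, n) = (1/(a+1)^n) * sum_{v=0}^{n} C(n,v) a^v / (b+v)
    return sum(comb(n, v) * a**v / (b + v) for v in range(n + 1)) / (a + 1)**n

# check 1/(b+rn) <= J(b, n) <= 1/(b+rn-1) on a few arbitrary parameter triples
for a, b, n in [(0.5, 2.0, 10), (3.0, 1.5, 60), (1.0, 10.0, 200)]:
    r = a / (a + 1)
    assert 1 / (b + r*n) <= J(a, b, n) <= 1 / (b + r*n - 1)
```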