Truncation of infinite summation involving a probability mass function


Suppose we want to compute an infinite sum of the form $S := \sum_{x=0}^\infty f(x)p(x)$ within a tolerance $\epsilon$, where $f : \mathcal{X} \to \mathbb{R}^+$ is some function of interest and $p(x)$ is a probability mass function. For convenience, split the sum into a head and a tail, $S = S_b + S_a$, defined below.

If $g(x) := f(x)p(x)$ achieves a single maximum, $g(n^\star)$, I claim one can use some standard results on infinite series (e.g. the ones presented here) to approximate the tail $S_a = \sum_{k = n^\star +1}^\infty g(k)$, exploiting the fact that for $n > n^\star$ the terms form a positive, decreasing sequence whose ratios $r_n := g(n+1)/g(n)$ satisfy $\lim_{n\to\infty} r_n = 0$. If we define $z_n = \frac{g(n + 1)}{1 - r_n}$ $-$ see equation (2) in the linked notes $-$ then I think we can claim that there exists $n^\prime(\epsilon)$ such that $$ \left|S - \left(S_b + S_{n'} + \frac{z_{n'}}{2}\right)\right| < \epsilon, $$ where $S_b := \sum_{k= 0}^{n^\star} g(k)$ is the head and $S_{n^\prime}:= \sum_{j= n^\star + 1}^{n^\prime} g(j)$ is a partial tail (note the lower index $n^\star + 1$, so that no term is counted twice).
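To make the scheme concrete, here is a minimal sketch in Python. It assumes exactly the conditions above: $g$ has a single maximum and the ratios $r_n$ are decreasing (and eventually $< 1$) past it, so the geometric-series bound $R_n \le z_n = g(n+1)/(1-r_n)$ on the remainder applies; the function names and the `n_max` safety cap are my own choices, not from the post.

```python
import math

def truncated_sum(f, p, eps, n_max=100_000):
    """Approximate S = sum_{x>=0} f(x) p(x) to within eps, assuming
    g(x) = f(x) p(x) has a single maximum and, past it, decreasing
    ratios r_n = g(n+1)/g(n) with r_n < 1."""
    g = lambda x: f(x) * p(x)
    # Head S_b: sum up to the mode n_star (advance while g is non-decreasing).
    n = 0
    total = g(0)
    while g(n + 1) >= g(n):
        n += 1
        total += g(n)
    # Tail: add terms past the mode until the bracket half-width z_n / 2
    # drops below eps, then return the midpoint correction.
    while n < n_max:
        n += 1
        total += g(n)
        r = g(n + 1) / g(n)          # r_n, assumed decreasing and < 1 here
        z = g(n + 1) / (1.0 - r)      # upper bound z_n on the remainder R_n
        if z / 2.0 < eps:
            return total + z / 2.0    # error bounded by z_n / 2 < eps
    raise RuntimeError("tolerance not reached within n_max terms")
```

As a sanity check, with $p$ the Poisson$(\lambda=4)$ pmf and $f(x) = x^2$, the sum is $\mathbb{E}[X^2] = \lambda + \lambda^2 = 20$, and `truncated_sum(lambda x: x * x, lambda x: math.exp(-4) * 4**x / math.factorial(x), 1e-8)` recovers it to well within the tolerance.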

Questions:

  1. I couldn't find these calculations anywhere on the web, which suggests the technique is not part of the standard statistician's toolkit. Is it useful in general, or is the restriction to a single maximum too strong?
  2. Is there a more general solution along the same lines that relaxes these assumptions (positivity of $f$ and unimodality of $pf$)?