Using discrete calculus to study convergence of series and sequences


From some personal investigation, I've noticed that all convergence tests for infinite series (at least, the real kind) can be rephrased in terms of the discrete derivative $∆f(x) = f(x+1) - f(x)$ of a function $f(x)$, sometimes to give interesting results.
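To fix notation, here is a minimal numeric sketch in Python (`delta` and `partial_sums` are my own illustrative helpers, not standard functions), showing that when $f$ is the partial-sum function of a series, $∆f$ recovers the terms:

```python
# Sketch: the forward difference ∆f(x) = f(x+1) - f(x), and a check that
# ∆ applied to a partial-sum function recovers the terms of the series.

def delta(f):
    """Return the discrete derivative ∆f of f."""
    return lambda x: f(x + 1) - f(x)

def partial_sums(a):
    """Return f(n) = a(1) + a(2) + ... + a(n), with f(0) = 0."""
    return lambda n: sum(a(k) for k in range(1, n + 1))

a = lambda k: 1 / 2**k          # example terms a_k = 1/2^k
f = partial_sums(a)
df = delta(f)

# ∆f(n) = a_{n+1}: the discrete derivative of the partial sums is the next term.
assert all(abs(df(n) - a(n + 1)) < 1e-12 for n in range(10))
```

This identity, $∆f(n) = a_{n+1}$, is what drives all the translations below.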

For example, the statement: $$\text{If } \lim_{k → ∞} a_k ≠ 0 \text{ then } ∑a_k \text{ diverges.}$$

is equivalent to the statement: $$\text{If } \lim_{x → ∞} ∆f(x) ≠ 0 \text{ then } f(x) \text{ diverges.}$$ Indeed, for the partial-sum function the identity $∆f(n) = a_{n+1}$ turns one statement into the other. The second statement is arguably more general, though, since $f(x)$ is not explicitly required to be in the form of a sum. Note that the limit of $∆f(x)$ follows the definition of a limit of a sequence, rather than of a function.
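As an illustration of that extra generality (my own examples; `delta` is the helper defined above):

```python
# Sketch: applying the rephrased test to functions not written as sums.

import math

def delta(f):
    return lambda x: f(x + 1) - f(x)

g = lambda x: x**2 / (x + 1)   # ∆g(x) -> 1 != 0, so g diverges
h = lambda x: math.log(x)      # ∆h(x) = log(1 + 1/x) -> 0: test is inconclusive

print(delta(g)(10**6))   # ~ 1.0
print(delta(h)(10**6))   # ~ 1e-6; h diverges anyway, so the converse still fails
```

Just as with the usual $n$-th term test, $∆f(x) → 0$ alone tells you nothing: $\log x$ is the continuous cousin of the harmonic series.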

Another example is the ratio test, which for positive terms $a_k$ states: $$ \text{If } \lim_{k → ∞} \frac{a_{k+1}}{a_k} < 1 \text{ then } ∑a_k \text{ converges.}$$

This statement is equivalent to: $$ \text{If } \lim_{x → ∞} \frac{∆^{\!2} f(x)}{∆ f(x)} < 0 \text{ then } f(x) \text{ converges.}$$ The equivalence comes from $\frac{∆^{\!2} f(x)}{∆ f(x)} = \frac{∆f(x+1)}{∆f(x)} - 1 = \frac{a_{x+2}}{a_{x+1}} - 1$, so the two limits differ by exactly $1$.
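A quick numeric check of that equivalence (same illustrative helpers as above):

```python
# Sketch: ∆²f/∆f -> (lim a_{k+1}/a_k) - 1 when f is the partial-sum function.

def delta(f):
    return lambda x: f(x + 1) - f(x)

def partial_sums(a):
    return lambda n: sum(a(k) for k in range(1, n + 1))

a = lambda k: k / 3**k              # a_{k+1}/a_k -> 1/3
f = partial_sums(a)
df, d2f = delta(f), delta(delta(f))

print(d2f(15) / df(15))   # ~ -0.646, heading to 1/3 - 1 = -2/3 < 0: f converges
```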

Besides this generalization, the representation is also interesting because proving convergence tests in terms of $∆f(x)$ and $f(x)$ does not require manipulating sums directly. Instead, the proofs can be given in terms of basic theorems about $∆f(x)$, such as the discrete fundamental theorem of calculus $f(n) - f(N) = ∑_{x=N}^{n-1} ∆f(x)$, and they generally resemble proofs you'd find in calculus.
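For instance, here is the sort of proof I have in mind for the first test above (my own sketch, for the case where the limit exists and equals $L > 0$): choose $N$ so that $∆f(x) > L/2$ for all $x ≥ N$; then $$f(n) = f(N) + ∑_{x=N}^{n-1} ∆f(x) > f(N) + (n - N)\frac{L}{2} → ∞,$$ so $f$ diverges, without ever touching the original series (the case $L < 0$ is symmetric).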

Is anyone familiar with a thorough treatment of infinite series using discrete calculus in this way?