Can an infinite series be thought of as adding up "infinitely many" terms?


Formally, I understand that infinite series are not defined by adding up "infinitely many" terms, but are instead defined as equalling their limit. As user Brian M. Scott outlined in an answer to a similar question (Why is an infinite series not considered an infinite sum of terms?), it is easier to define infinite series by the limit of their partial sums than by considering every single term. However, one geometric proof of the convergence of $1/2 + 1/4 +1/8+1/16+ \cdots$ has made me question why we circumvent the problem of adding infinitely many terms with the concept of the limit:

[Image: a unit square cut into pieces of area $1/2$, $1/4$, $1/8$, $1/16$, $\ldots$, which together fill the whole square]

Image credit: https://www.mathsisfun.com/algebra/infinite-series.html

When you look at the diagram above, it seems like every single term has been included (not literally, but it is clear what the diagram represents). Furthermore, if you plotted the above shape on the Cartesian plane, the area of the shape would be $1$. Even the point $(0.99999, 0.99999)$ would be covered by a square/rectangle. When all of the terms have been plotted, it seems like the infinite series does not merely tend to $1$; it equals $1$. To me, this is not just because we define an infinite series to equal its limit. The limit only concerns the partial sums, whereas the diagram shows that even if we allow ourselves to add infinitely many terms, there is no immediate contradiction. Obviously, defining infinite series like this formally can create problems: infinite series do not always have the commutative property, for example. However, is there anything conceptually wrong with thinking of infinite series as adding up infinitely many terms, even though technically this can lead to some problems?
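The numerical behavior matches the picture: the partial sums of $1/2 + 1/4 + \cdots$ satisfy $s_n = 1 - 2^{-n}$, so they approach $1$ from below without any single partial sum reaching it. A minimal sketch (the function name is mine):

```python
def partial_sum(n: int) -> float:
    """Sum of the first n terms 1/2 + 1/4 + ... + 1/2^n."""
    return sum(2.0 ** -k for k in range(1, n + 1))

# Closed form: partial_sum(n) == 1 - 2^{-n}.  The gap to 1 is exactly
# the area of the corner region still uncovered after n pieces.
```

For example, `partial_sum(50)` equals $1 - 2^{-50}$ exactly in double precision, still strictly less than $1$.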

There are 6 answers below.

BEST ANSWER

There is a reason why it appears that the sum of those areas is literally the sum of infinitely many terms, and not 'just a limit'. However, this reason is probably not as simple as you think. To see why, consider something that looks similar:

The unit square $[0,1]×[0,1]$ has area $1$ and is the union of infinitely many line segments of the form $\{x\}×[0,1]$, one for each $x∈[0,1]$, but each of these line segments has area $0$, so the unit square's area is not in any way the sum of infinitely many zeros.

But what is the difference? Well, what is area in the first place? One possible basic notion is to first define the area of each rectangle as its width times its height, and then define the area of a region as the limit, as the grid square size tends to zero, of the total area of grid squares completely contained inside the region. This definition has some significant defects. For instance, $[0,1]^2∖\mathbb{Q}^2$ would have area $0$, even though 'almost none' of the unit square has been removed.

It turns out that we can define area that has nicer properties than the above, via something called the Lebesgue measure. This measure assigns an area to every measurable set of points in the plane, and this area is a non-negative real number or $∞$. But not every set of points in the plane is measurable! For convenience let me use "region" as a synonym for "measurable set of points". This measure has three nice properties:

  1. Rectangle area: The area of a rectangle is equal to its height times width. (If this does not hold then we do not deserve to call it "area".)

  2. Monotonicity: If one region is completely contained in another region, then the area of the first region is at most the area of the second.

  3. Countable additivity: The area of a countable disjoint union of regions is the sum of their areas. Note that since areas are non-negative, the sum is well-defined. (This is directly related to the fact that a monotonically increasing sequence of reals has a real limit or tends to $∞$.)

Now let us get back to your question. You have a countable sequence of disjoint rectangles whose areas sum to $1$, and whose union is the unit square. This corresponds to the countable additivity of the Lebesgue measure. In contrast, my example has an uncountable number of line segments each with area $0$, but whose union is still the unit square. Well, the Lebesgue measure does not satisfy uncountable additivity. (In fact this example shows that any kind of 'area' that satisfies property (1) cannot possibly satisfy uncountable additivity in any meaningful sense.)

Observe that this viewpoint of an infinite (countable) sum only works for non-negative reals; otherwise you cannot view the terms as areas, and the correspondence with the Lebesgue measure fails. This reflects the fact that an infinite sum of non-negative reals is always well-defined (the partial sums increase monotonically to either a finite real or infinity), while an infinite sum of arbitrary reals, such as $1-1+1-1+\cdots$, may not be.

The limit only concerns the partial sums, whereas the diagram shows that even if we allow ourselves to add infinitely many terms, there is no immediate contradiction. Obviously, defining infinite series like this formally can create problems: infinite series do not have always have the commutative property, for example. However, is there anything conceptually wrong with thinking of infinite series as adding up infinitely many terms, even though technically this can lead to some problems?

Indeed, your idea that commutativity is relevant is correct. An infinite sum of non-negative reals is well-defined because rearranging the terms cannot change the limit of the partial sums (proving this is a good exercise, and the proof may be illuminating). In contrast, a series with both positive and negative terms may fail to be absolutely convergent, and it turns out that absolute convergence is exactly the condition under which every rearrangement leaves the limit of the partial sums unchanged.
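This can be seen numerically with the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \cdots$: in its natural order it converges to $\ln 2$, but greedily reordering the same terms steers the partial sums toward any target you like (the idea behind the Riemann rearrangement theorem). A sketch, with function names of my own invention:

```python
def natural_sum(n_terms: int) -> float:
    """Partial sum of 1 - 1/2 + 1/3 - ... in the given order."""
    return sum((-1.0) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged_sum(target: float, n_terms: int) -> float:
    """Greedily reorder the same terms so the partial sums chase `target`:
    add the next positive term 1/1, 1/3, 1/5, ... while below the target,
    and the next negative term -1/2, -1/4, ... while above it."""
    total, p, q = 0.0, 1, 2  # p: next odd denominator, q: next even denominator
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / p
            p += 2
        else:
            total -= 1.0 / q
            q += 2
    return total
```

With enough terms, `natural_sum` is close to $\ln 2 \approx 0.6931$ while `rearranged_sum(1.0, ...)` is close to $1$, even though both use exactly the same multiset of terms.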

So the answer to your question is: yes, you can view an infinite sum of non-negative reals as a summation of infinitely many terms, in the same sense as a disjoint union of regions; no, you cannot view every infinite series this way, because reals can be negative; and no, you cannot use reasoning of the form "all of the terms have been plotted geometrically, therefore the infinite sum equals the area", because under any reasonable definition (see below) we should have $\sum_{i∈S} 0 = 0$ for any set $S$, even when $S$ indexes the line segments that cover the unit square in my example (whereas the area of the unit square is $1$).


For the curious, here is how we can define generalized summation of a non-negative real-valued function on an arbitrary index set. I wish to emphasize that without any definition we cannot even talk about such infinite sums (they are simply meaningless) to begin with, and so it is incorrect to perform any kind of reasoning about them at all, geometric or otherwise. (This is why I at first decided not to mention it in my post, but I now agree with the commenter Martin that I probably should clear up any possible misconception on this issue.)

Given a function $f:S→R_{≥0}$, we can define $\sum_{i∈S} f(i) := \sup \{ \sum_{i∈T} f(i) : T ⊆_{fin} S \}$, where "$T ⊆_{fin} S$" means "$T$ is a finite subset of $S$", and this agrees with the standard summation in any order if $S$ is countable. But this generalized summation is useless for uncountable $S$ unless $f(i) > 0$ for only countably many $i∈S$ (since otherwise the sum would be $∞$), so such a definition seems to have no practical use.
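For a countable index set with non-negative terms, this sup-over-finite-subsets definition agrees with the ordinary limit of partial sums in any order. A quick numerical illustration of both facts (an informal check, not a proof; variable names are mine):

```python
import random

# Non-negative terms 2^-1, ..., 2^-50; the supremum of finite sub-sums is 1 - 2^-50.
terms = [2.0 ** -k for k in range(1, 51)]
total = sum(terms)

# Reordering the terms does not change the sum...
shuffled = terms[:]
random.shuffle(shuffled)

# ...and every finite subset's sum stays at or below the supremum (here, below 1).
subset_sum = sum(random.sample(terms, 10))
```

Here `sum(shuffled)` agrees with `total`, and `subset_sum < 1` no matter which finite subset is drawn, consistent with $\sup \{ \sum_{i∈T} f(i) : T ⊆_{fin} S \}$ being the value of the series.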

ANSWER

I compare it to proving that the natural numbers have as many elements as the positive rationals. You can describe a bijection between them in enough detail to say what the millionth term of the sequence is, or what the index of 335/113 is, but we have to come to terms with the notion that "counting to infinity" is something we can only consider under the terms with which we have defined those words.

The same thing is happening here. I could ask for any point on that square except the upper-right corner, and you'd be able to tell me the label of the unique rectangle that includes it. But you're still only drawing a finite number of those rectangles and you still can't fill in that upper-right corner. To respect those important points, we say that the limit of that series is 1 instead of saying that the sum is 1.
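The "label for every point" claim can even be made computational. Assuming the standard picture where one alternately takes the left half, then the bottom half, of the remaining block (the exact split order in the linked diagram may differ, so treat this as a sketch):

```python
def rectangle_index(x: float, y: float, max_depth: int = 100) -> int:
    """Return the 1-based index of the piece containing (x, y), 0 <= x, y < 1,
    in the alternating-halving decomposition: piece 1 is the left half of the
    square, piece 2 the bottom half of what remains, and so on."""
    x0 = y0 = 0.0
    w = h = 1.0
    n = 0
    while n < max_depth:
        n += 1                        # candidate piece: left half of current block
        if x < x0 + w / 2:
            return n
        x0, w = x0 + w / 2, w / 2     # keep the right half
        n += 1                        # candidate piece: bottom half of current block
        if y < y0 + h / 2:
            return n
        y0, h = y0 + h / 2, h / 2     # keep the top half
    raise ValueError("point is too close to the corner (1, 1)")
```

Every point strictly inside the square gets a finite label, even $(0.99999, 0.99999)$; the corner $(1, 1)$ itself belongs to no piece, which is exactly the point the answer is making.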

ANSWER

The answer to your question is essentially here:

Obviously, defining infinite series like this formally can create problems

It is intuitively OK to think of summing an infinite series as

adding up "infinitely many" terms

but if you want to prove theorems about those sums and avoid the problems you need a formal definition. Mathematicians have discovered that a good way to do that is to avoid "adding infinitely many numbers" by proving infinitely many statements. That's what the "for every $\epsilon$" does in the formal definition of limits.
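For the geometric series in the question, that "for every $\epsilon$" statement is completely concrete: the partial sums satisfy $|s_n - 1| = 2^{-n}$, so for each $\epsilon$ one can name an explicit $N$. A sketch (function names are mine):

```python
import math

def n_needed(eps: float) -> int:
    """Smallest N with |s_n - 1| < eps for all n >= N, where
    s_n = 1/2 + ... + 1/2^n, so that |s_n - 1| = 2^{-n} exactly."""
    return math.floor(math.log2(1.0 / eps)) + 1

def partial(n: int) -> float:
    """n-th partial sum of the geometric series."""
    return sum(2.0 ** -k for k in range(1, n + 1))
```

For $\epsilon = 0.001$ this gives $N = 10$: indeed $2^{-10} \approx 0.00098 < 0.001$ while $2^{-9} \approx 0.00195$ is not. The infinitely many statements (one per $\epsilon$) are each finite and checkable.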

ANSWER

I would say it's best to think of adding up an infinite number of terms as something that makes intuitive sense but has more than one reasonable definition.

To illustrate this, I'll define three sums. None of the "flavored sum" notation is remotely standard.

${}^\alpha\!\sum$ is only defined when the series is absolutely convergent, ${}^\sigma\!\sum$ is the familiar limit sum, and ${}^c\!\sum$ is the Cesàro sum. There are other, more general sums that I won't cover in this answer because I don't understand them very well.

The notation below, $\sum \cdots = \xi \iff \!\cdots\, $, is intended to emphasize the fact that not every sum converges to a real number.

$${}^\sigma\!\sum_{k=1}^{\infty}f(k) = \xi \stackrel{\mathrm{def}}{\iff} \lim_{n\to\infty} \sum_{k=1}^{n}f(k) = \xi$$

$${}^\alpha\!\sum_{k=1}^{\infty}f(k) = {}^\sigma\!\sum_{k=1}^{\infty} f(k) \stackrel{\mathrm{def}}{\iff} {}^\sigma\!\sum_{k=1}^{\infty} |f(k)| \;\;\text{is defined} $$

Following Wikipedia's lead, let's define the Cesàro sum.

$$ s_a(m) \stackrel{\mathrm{def}}{=} \sum_{k=1}^{m} a(k) $$

$$ {}^c\!\sum_{k=1}^{\infty} a(k) = \xi \stackrel{\textrm{def}}{\iff} \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} s_a(k) = \xi $$

Now let's show some examples distinguishing the type of sums.

$$ \left\{ {}^\sigma\!\sum_{k=1}^{\infty} \frac{1}{k^2} \;,\; {}^\alpha\!\sum_{k=1}^{\infty} \frac{1}{k^2} \;,\; {}^c\!\sum_{k=1}^{\infty} \frac{1}{k^2} \right\} = \left\{ \frac{\pi^2}{6} \right\} $$

$$ \left\{ {}^\sigma\!\sum_{k=1}^{\infty} \frac{(-1)^k}{k} \;,\; {}^c\!\sum_{k=1}^{\infty} \frac{(-1)^k}{k} \right\} = \bigg\{ - \ln(2) \bigg\} \;\;\text{but}\;\; {}^\alpha\!\sum_{k=1}^{\infty}\frac{(-1)^k}{k} \;\;\text{is undefined} $$

$$ {}^c\!\sum_{k=1}^{\infty} (-1)^k = -\frac{1}{2} \;\;\text{but}\;\; {}^\alpha\!\sum_{k=1}^{\infty} (-1)^k \;\;\text{and}\;\; {}^\sigma\!\sum_{k=1}^{\infty} (-1)^k \;\; \text{are both undefined} $$
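These examples can be checked numerically. A short sketch (function names are mine, mirroring the flavored-sum notation) comparing the ordinary limit of partial sums with the Cesàro average of partial sums:

```python
from itertools import accumulate

def sigma_partial(a, n: int) -> float:
    """n-th partial sum; the sigma-sum is its limit as n grows."""
    return sum(a(k) for k in range(1, n + 1))

def cesaro_mean(a, n: int) -> float:
    """Average of the first n partial sums; the Cesaro sum is its limit."""
    partials = list(accumulate(a(k) for k in range(1, n + 1)))
    return sum(partials) / n

# Grandi's series (-1)^k has no sigma-sum (partial sums oscillate between
# -1 and 0), but its Cesaro mean settles at -1/2.  The alternating series
# (-1)^k / k has both, and they agree at -ln 2.
```

Running `cesaro_mean(lambda k: (-1.0) ** k, 100000)` gives $-1/2$, while the same Cesàro mean applied to $(-1)^k/k$ agrees with the ordinary partial sums near $-\ln 2$.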

ANSWER

To talk of "adding up infinitely many terms", you have to have some definition first of what exactly it means to do that. The general consensus is that the limit definition is meant.

The issue at work here is not so much the formal definition as a limit, but the intuitive understandings we choose to bring to bear on it, of which there are at least two. What is going on here is seeing a limit as dynamic, as the end result of a dynamic process. There isn't anything "wrong" with that, unless you then want to compare it to a static situation, which is what a single geometric figure, existing all at once on the plane, is.

The intuition for a limit as a static situation is that it is a statement about a property the square has: if you draw a circle of any radius about that upper-right corner, no matter how small, you can always find pieces inside the square whose area is smaller than that of the circle (from which it also follows that the square must contain infinitely many pieces). It is not a matter of progressively drawing such circles one after the other; rather, the property states what would happen in infinitely many possible counterfactual situations involving the given object.

ANSWER

Assuming the limit of the sequence of partial sums exists, there is no harm in imagining that you have completed an infinite process of adding terms. If the limit does not exist, there are fairly obvious problems with pretending that you have completed the infinite process. For instance, the sequence of partial sums of $1 - 1 + 1 - 1 + 1 - \cdots$ has no limit, but if you imagine that you have completed the infinite process, you imagine that you have a definite value for that sum. What is it? $1$? $0$? Something else? (For more options than $0$ or $1$, see Hardy's Divergent Series, or the English Wikipedia article on divergent series.)

More fundamentally, what does "$1 + \frac{1}{2} + \frac{1}{4} + \cdots$" mean? It appears to represent an infinitely long string of symbols. If, as is usually the case, you are using a finitary logic, you do not have the ability to reason about the entire string (you are able to reason about finite fragments of it). For instance, $$ 2 \sum_{i=1}^\infty 2^{-i-1} = \sum_{i=1}^\infty 2^{-i} $$ is not the distributive law of multiplication over addition, since there is no infinite version of that law in finitary logic. The proof of the statement only ever reasons about finite initial chunks of the sums (partial sums), so it is encompassable by finitary logic.

If you are using an infinitary logic, you are able to reason about infinitely long strings of symbols, like "$1 + \frac{1}{2} + \frac{1}{4} + \cdots$". But be careful: the Riemann rearrangement theorem suggests that an infinite version of commutativity of addition only applies to certain such infinite strings -- infinite commutativity does not apply to strings representing conditionally convergent series, in the sense that the result of "adding infinitely many terms" changes depending on the ordering of the terms. The MSE answer to this question does a pretty good job of laying out some of the weirdness of infinitary logics. An example that isn't discussed there: your set theory doesn't exactly need AC, since your logic permits you to just write down infinitely long choice functions: "from set 1, this element; from set 2, that element; ..." (Note that different infinitary logics may put different infinite bounds on the lengths of sentences, so your logic may permit "small" infinite choice but you'll need a version of AC for larger choice.)