Why does the commutative property of addition not hold for conditionally convergent series?


I learned about the Riemann rearrangement theorem recently and I'm trying to develop an intuition as to why commutativity breaks down for conditionally convergent series.

I understand the technique used in the theorem, but it just seems really odd to me that commutativity breaks down. It doesn't happen for other properties -- associativity holds for convergent series and commutativity holds for absolutely convergent series. What makes this particular property of this particular kind of convergent series "special" in this way?

I'm aware of this question: why does commutativity of addition fail for infinite sums? but the answers haven't been helpful to me. JiK's answer is "why would it?", and goes on to discuss why rules for finite sums can't automatically be applied to infinite series, but this seems problematic, since associativity does hold for convergent series and commutativity does hold for absolutely convergent series. Fly by Night just explains conditional versus absolute convergence, josh314 says commutativity applies to finite sums only (which isn't true), Denis just explains the theorem again, and Barry Cipra's argument seems similar in kind to JiK's, and problematic for similar reasons.

Is there a good way to understand intuitively why this happens? Or is this the wrong way to think about it, and should I just accept it even though it's unintuitive? It's hard for me to let it go without an intuition, because it seems like math starts to "break" here: the theorem is sound, but this property no longer holds in this case, which is really strange to me.

Does anyone know of any resources that go into depth on this kind of question?


There are 4 best solutions below

---

"commutativity" is normally defined for two elements (addition is at root a binary operation) and extended to any finite number of elements.

The limit operation for infinite sums is defined without reference to the commutative property. There is no reason therefore for the limit operation to respect commutativity.

In the case of conditional convergence, I think it is intuitive to see that:

(a) the sum of the positive terms alone is unbounded;
(b) the sum of the negative terms alone is unbounded;
(c) if I can change the order however I like, I can push the negative terms as far down the order as I care to, and because the sum of the positives is unbounded, I can overwhelm each negative with positives;
(d) I can likewise push the positives as far down the order as I care to, and overwhelm each positive with negatives;
(e) hence different orders have different behaviours;
(f) so convergence and limits depend on order.
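Points (c)-(e) can be made concrete with a quick numerical sketch in Python (the choice of series, the cut-offs, and the 3-to-1 ratio below are my own illustrative choices):

```python
from itertools import count, islice

# Terms of the conditionally convergent series 1 - 1/2 + 1/3 - ... (= ln 2).
def terms():
    for n in count(1):
        yield (-1) ** (n + 1) / n

# Natural order: the partial sums settle near ln 2 (~0.693).
natural = list(islice(terms(), 1000))
print(sum(natural))

# "Push the negatives down the order": take three positive terms for every
# negative one.  The positives come in faster than the negatives can cancel,
# so the partial sums settle well above ln 2 (this 3-to-1 rearrangement in
# fact converges to ln 2 + (1/2) ln 3, roughly 1.24).
pos = (1 / n for n in count(1, 2))    # 1, 1/3, 1/5, ...
neg = (-1 / n for n in count(2, 2))   # -1/2, -1/4, -1/6, ...
rearranged = []
for _ in range(250):
    rearranged += [next(pos), next(pos), next(pos), next(neg)]
print(sum(rearranged))
```

The same thousand terms appear in both computations; only the order differs.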


To add a second thought, which may be helpful. I can change the order of a finite number of summands in an infinite sum without changing convergence or limit properties. Why? Because then there is an $N$ after which nothing has changed, and the sum up to $N$ is also the same as it originally was.

So we could say that the (finite) commutative property does still apply.
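A tiny check of this (the particular series and the particular swap are arbitrary choices for illustration): after a finite permutation, every partial sum beyond some $N$ is literally the same sum.

```python
# Partial sums of sum (-1)^(n+1)/n in the original order ...
a = [(-1) ** (n + 1) / n for n in range(1, 101)]

# ... and with the first two terms swapped (a finite permutation).
b = a.copy()
b[0], b[1] = b[1], b[0]

# Beyond N = 2 every partial sum agrees exactly, so the limits must agree.
for N in (1, 2, 5, 100):
    print(N, sum(a[:N]), sum(b[:N]))
```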

---

The issue here is not really "commutativity". The question is, why can rearranging the terms of a series cause it to converge to a different value?

The answer is that the definition of the limit of a series uses the order of the terms to define the sequence of partial sums, and then takes the limit of that sequence. If you rearrange the terms, you get a different sequence of partial sums, and there is no reason the new sequence of partial sums needs to converge to the same limit as the old one.

There is a kind of convergence that does not use the order of the set of terms being added: unordered summation. If $A$ is a set of real numbers, then $\sum_{i \in A} i$ is defined as the limit of $\sum_{i \in F} i$ over the net of finite subsets $F$ of $A$. Unordered summation has no "conditional convergence": we have that $\sum_{i \in A} i$ converges if and only if $\sum_{i \in A} |i|$ converges.

This helps show how the phenomenon of conditional convergence is related to the fact that rearrangements of a series can converge to different values: it is because of the particular way that the limit is taken in the definition of series convergence.
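A classical concrete instance of this, sketched in Python: rearranging the alternating harmonic series into the pattern "one positive term, then two negative terms" produces a new sequence of partial sums whose limit is $\frac12\ln 2$ rather than $\ln 2$, even though every term of the original series appears exactly once.

```python
import math
from itertools import count

# Rearrangement of 1 - 1/2 + 1/3 - 1/4 + ...  (which sums to ln 2)
# into the pattern:  1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
pos = (1 / n for n in count(1, 2))    # 1, 1/3, 1/5, ...
neg = (-1 / n for n in count(2, 2))   # -1/2, -1/4, -1/6, ...

s = 0.0
for _ in range(10000):                # one positive, then two negatives
    s += next(pos) + next(neg) + next(neg)

print(s, 0.5 * math.log(2))           # the two values agree closely
```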

The same phenomenon happens in integration. An improper integral such as $\int_1^\infty f(x)\,dx$ converges if and only if $\lim_{k \to \infty} \int_1^k f(x)\,dx$ converges. So "rearranging the values" of the function $f(x)$ in particularly simple ways can cause the improper integral to converge to a different value, if the integral is conditionally convergent.

For example, for each series $\sum a_n$ there is a piecewise constant function $f(x)$ so that $\int_1^\infty f(x)\,dx$ is exactly $\sum a_n$. Namely, $f(x)$ has value $a_i$ on $[i, i+1)$. Rearranging the pieces of the graph of $f(x)$ is the same as rearranging the series $a_n$. So the improper Riemann integral does not just compute the area between the $x$ axis and the graph of $f(x)$, and the series $\sum a_n$ does not just add up the terms. The definitions of convergence rely fundamentally on an underlying order relation to take a limit.
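A quick numerical sanity check of this correspondence (the alternating harmonic terms are just a sample choice): since $f$ is constant on each unit interval, $\int_1^{N+1} f(x)\,dx$ equals the $N$-th partial sum exactly.

```python
# Terms of a sample series, and the step function f with value a_i on [i, i+1).
a = [(-1) ** (n + 1) / n for n in range(1, 1001)]

def f(x):
    """Piecewise constant function built from the terms."""
    return a[int(x) - 1]

# Each unit interval [i, i+1) contributes exactly a_i * 1 to the integral,
# so the integral up to 1001 is the 1000th partial sum of the series.
integral = sum(f(i) * 1.0 for i in range(1, 1001))
print(integral)
```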

The alternative kind of integration known as Lebesgue integration is more like unordered summation. There is no conditional convergence in Lebesgue integration, and rearranging function values (via a measure preserving transformation) cannot cause a convergent Lebesgue integral to converge to a different value.

---

Consider a series that is convergent but not absolutely convergent, such as $$\left\lbrace u_n= \frac{(-1)^n}{n}\right \rbrace_{n\geq 1}$$

Consider the positive and negative parts: $$u_n^+=u_n\qquad \text{if}\ \ u_n>0\qquad\qquad u_n^+=0\qquad\ \text{otherwise}$$ and $$u_n^-=u_n\qquad \text{if}\ \ u_n<0\qquad\qquad u_n^-=0\qquad\ \text{otherwise}$$ Claim 1: $\sum u_n^+$ diverges, and so does $\sum u_n^-$.

Proof: At least one of them has to diverge, since $\sum |u_n|$ diverges. But then the other one has to diverge as well, otherwise $\sum u_n$ would diverge.

So our series contains enough positive numbers to tend to infinity and enough negative numbers to tend to infinity in the other direction!
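A short Python sketch of Claim 1 for this particular series (the bound $3$ is an arbitrary choice; any bound is eventually exceeded, since the positive part's partial sums are $\tfrac12 H_k$, which grows without bound):

```python
from itertools import count

# Sum only the positive part u_n^+ of u_n = (-1)^n / n, i.e. 1/2, 1/4, 1/6, ...
# and record how many terms it takes to exceed a fixed bound.
s, n = 0.0, 0
for k in count(1):
    u = (-1) ** k / k
    s += max(u, 0.0)        # u_n^+  (negative terms contribute 0)
    if s > 3.0:             # the bound 3 is arbitrary
        n = k
        break
print(n, s)
```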

Claim 2: For any $\varepsilon >0$, the sequence $(u_n)$ contains only finitely many terms of absolute value larger than $\varepsilon$.

Proof: This follows directly from the fact that $\sum u_n$ converges, hence $u_n \to 0$.

Intuitive proof of Riemann's theorem: By Claim 1, one can take a sum of some suitably chosen $u_n$ so as to get close to any target $L\in \Bbb R$, and by Claim 2 one can decide to pick the $u_n$ in a suitable order so as to stay close to $L$ (and even tend to $L$).

One can also pick the elements in an order that makes the sum diverge. The trick is to realise that a bijection $\Bbb N\to \Bbb N$ can mess very badly with the intuitive order. You could take the two biggest positive $u_n$, then a negative, then the next $4$ biggest positive $u_n$, then the next negative, then the next $8$ positive $u_n$... that would be a bijection. It would not necessarily diverge, though. Here is one that surely does: take enough positive terms to get over $10$, then one negative term, then enough further positive terms to get over $100$, then one more negative, ... This is still a perfectly fine reordering!
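The convergent version of this argument can be sketched as a greedy procedure in Python (the function name `rearrange_to` and the step budget are my own; this is an illustration of the intuitive proof, not the proof itself): add positive terms until the running sum exceeds the target $L$, then negative terms until it drops below $L$, and repeat. Claim 1 guarantees neither supply of terms runs out; Claim 2 makes the overshoot shrink to zero.

```python
import math
from itertools import count

def rearrange_to(L, steps=100_000):
    """Greedily reorder the terms of 1 - 1/2 + 1/3 - ... toward target L."""
    pos = (1 / n for n in count(1, 2))    # positive terms: 1, 1/3, 1/5, ...
    neg = (-1 / n for n in count(2, 2))   # negative terms: -1/2, -1/4, ...
    s = 0.0
    for _ in range(steps):
        # Below (or at) the target: spend a positive term; above it: a negative.
        s += next(pos) if s <= L else next(neg)
    return s

print(rearrange_to(math.pi))   # close to pi
print(rearrange_to(-1.0))      # close to -1
```

Every term is used at most once and each one is eventually used, so running the procedure forever really does define a rearrangement of the original series.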

---

I think the 'right' intuition you should have here is:

Commutativity of addition only guarantees that a sum is preserved after finitely many terms are swapped.

It so happens that the above still 'holds' for an infinite sum, since it is defined as a limit of partial sums and hence from a certain point onwards the swapped terms are all included in the partial sum.

It does not imply that you can apply an arbitrary permutation to the terms of the infinite series, as there are obviously permutations of $\Bbb N$ that cannot be expressed as a finite composition of swaps.