Hunter Liu's Website

7. Rearrangement of Series


Last week, we spent some time developing the root test and the ratio test for the convergence of series; these are two of a great number of applications of the theory we have been developing most recently.

Today, let’s zoom in on series a bit and focus on one topic that is oft left alone: rearranging series. A staple of math education is the commutative law of addition: you can add numbers in whatever order you like and still get the same answer.

One might expect that the commutative law of addition holds for series, which are just “infinite sums”. A series \(\sum _{k=1}^{\infty} a_k\) should give you the same answer, even if you add the terms in a different order. However, returning to the definition of a series illuminates some shadows cast by all-too-suggestive notation.

Definition 1. Series

If \(\left( a_k \right)\) is a sequence of real numbers, the series \(\sum _{k=1}^{\infty}a_k\) is defined as the limit of partial sums \[\begin{align*} \lim_{n\to\infty} S_n, && \textrm{where} && S_n = \sum _{k=1}^{n}a_k,\end{align*}\] as long as the limit exists.
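To see the definition in action, here is a quick numerical sketch (Python; the helper name and the choice of geometric example are mine, not part of the text):

```python
# A numerical sketch of Definition 1 (helper name "partial_sum" is mine).
def partial_sum(a, n):
    """Compute the partial sum S_n = a(1) + a(2) + ... + a(n)."""
    return sum(a(k) for k in range(1, n + 1))

# Example: the geometric sequence a_k = 1/2^k, whose series sums to 1.
a = lambda k: 1 / 2 ** k
sums = [partial_sum(a, n) for n in (1, 2, 5, 10, 20)]
# Here S_n = 1 - 2^{-n}, so the partial sums approach the limit 1.
```

The series is, by definition, the limit these partial sums approach; when the partial sums have no limit, the series simply does not exist.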

Rearranging the order in which we list the terms \(a_k\) will more likely than not change the sequence of partial sums. But let’s not get ahead of ourselves; let’s first pin down what a “rearrangement” of a series is.

Formally, a rearrangement is a bijective function \(\sigma : \mathbb{N}\to \mathbb{N}\). The rearranged sum is \(\sum _{k=1}^{\infty} a _{\sigma (k)}\). One should think of \(\sigma (k)\) as a “relabelling” of the indices. \(\sigma (k)\) represents the index of the \(k\)-th term in the relabelled series. The bijectivity condition is to ensure that every index is eventually summed over once and also that no index is repeated in the rearranged series.

A simple example of a rearrangement function is swapping pairs of adjacent indices: rather than summing \(a_1 + a_2 + a_3 + a_4 + \cdots\), we would add them in the order \(a_2 + a_1 + a_4 + a_3 + \cdots\). In this case, \(\sigma (1) = 2\), \(\sigma (2) = 1\), \(\sigma (3) = 4\), \(\sigma (4) = 3\), etc. The even-indexed partial sums of the two series agree, hence if either series (unperturbed or rearranged) converges, so shall the other.
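If you’d like to check this numerically, here is a small Python illustration (the test series is my own choice); it confirms that the even-indexed partial sums of the original and swapped series coincide:

```python
# The adjacent-swap rearrangement: sigma(2j-1) = 2j and sigma(2j) = 2j-1.
def sigma(k):
    return k + 1 if k % 2 == 1 else k - 1

# Test series (my choice): the alternating harmonic series, which converges.
a = lambda k: (-1) ** (k + 1) / k

S  = [sum(a(j) for j in range(1, n + 1)) for n in range(1, 21)]
Sr = [sum(a(sigma(j)) for j in range(1, n + 1)) for n in range(1, 21)]
# The first 2n rearranged terms are exactly the first 2n original terms in a
# different order, so the even-indexed partial sums agree (up to rounding).
```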

However, there are more sophisticated rearrangements out there, and in general you cannot rearrange sums willy-nilly. Here’s an explicit example: define the sequence \[\begin{align*} a _{2n-1} = \frac{1}{n} && \textrm{and} && a _{2n} = -\frac{1}{n}.\end{align*}\] The first several terms of this sequence are \[1, -1, \frac{1}{2}, -\frac{1}{2}, \frac{1}{3}, -\frac{1}{3}, \frac{1}{4}, -\frac{1}{4}, \ldots\]

Exercise 2.

Show that \(\sum _{k=1}^{\infty}a_k = 0\), with \(a_k\) defined as above.
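As a sanity check (not a proof!), one can compute the first several partial sums of this sequence in Python; the variable names here are my own:

```python
# The sequence from the example: a_{2n-1} = 1/n and a_{2n} = -1/n.
def a(k):
    n = (k + 1) // 2
    return 1 / n if k % 2 == 1 else -1 / n

partials = []
S = 0.0
for k in range(1, 11):
    S += a(k)
    partials.append(S)
# S_{2n} = 0 exactly (the terms cancel in adjacent pairs), while
# S_{2n-1} = 1/n -> 0, so the partial sums converge to 0.
```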

I won’t provide the explicit rearrangement (my soul cannot bear it), but here’s the pattern: leave the first two terms alone, then sum more and more and more of the positive terms, dropping in one negative term at a time. \[\begin{align*} 1 - 1 &+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} \\ -\frac{1}{2} &+ \frac{1}{5} + \frac{1}{6} + \frac{1}{7}+\frac{1}{8} + \frac{1}{9}+\cdots + \frac{1}{16} \\ -\frac{1}{3} &+ \frac{1}{17} + \cdots + \frac{1}{32} + \frac{1}{33} + \cdots + \frac{1}{64} \\ -\frac{1}{4} &+ \cdots \end{align*} \]

In the first line, the \(1\) and \(-1\) cancel. The remaining three terms add to something larger than \(1\). We subtract \(\frac{1}{2}\). Then, \(\frac{1}{5}+\cdots+\frac{1}{8} > \frac{1}{2}\), since each of the four terms is at least \(\frac{1}{8}\), and the first three strictly exceed it. Likewise, \(\frac{1}{9}+\cdots+\frac{1}{16} > \frac{1}{2}\), as each of those eight terms is at least \(\frac{1}{16}\). The positive terms on the second line add up to more than \(1\). Thus, we gained a net positive on the second line.

Similarly, the positive terms on the third line add up to more than \(1\) while we only subtracted off \(\frac{1}{3} < \frac{1}{2}\), netting an increase by at least \(\frac{1}{2}\). This happens on every single line!
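If you want to watch this blow-up happen, here is a Python sketch of the block structure described above (the block-building code is my own reading of the pattern):

```python
# Building the rearranged series block by block: start with
# 1 - 1 + 1/2 + 1/3 + 1/4, then block m (m = 2, 3, ...) drops in -1/m
# followed by the positive terms 1/(4^{m-1}+1), ..., 1/4^m.
terms = [1.0, -1.0, 1 / 2, 1 / 3, 1 / 4]
hi = 4                              # largest positive denominator used so far
for m in range(2, 7):               # blocks m = 2, ..., 6
    terms.append(-1 / m)
    terms.extend(1 / j for j in range(hi + 1, 4 ** m + 1))
    hi = 4 ** m

partial = sum(terms)
# Each block contributes at least 1/2 net, so the running total marches off
# to infinity; after just these five blocks it already exceeds 3.
```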

We’ve therefore given an example of two series, one a rearrangement of the other, wherein one converges to \(0\) and the other diverges. What’s happened is exactly what I alluded to earlier: by selecting a pathological rearrangement of our series, I’ve changed my sequence of partial sums by so much that they no longer converge to anything.

Part of the problem is that the sequence itself has enough “mass” to throw around: I was able to leverage an inexhaustible supply of positive terms while drip-feeding only a little bit of negative cancellation. If the sequence had converged absolutely, I wouldn’t be able to keep pushing the partial sums upward and upward ad infinitum the same way as above. We can formalise this intuition as:

Theorem 3.

\(\sum _{k=1}^{\infty} a_k = \sum _{k=1}^{\infty} a _{\sigma (k)}\) for all rearrangements \(\sigma \) if either \(a_k \geq 0\) for all \(k\) or \(\sum _{k=1}^{\infty} \left\lvert a_k \right\rvert < \infty\) (i.e., the sequence is absolutely convergent).

Proof

Suppose first that the series converges absolutely. (This also covers the case \(a_k \geq 0\): if such a series converges, it converges absolutely since \(\left\lvert a_k \right\rvert = a_k\), while if it diverges, the partial sums of any rearrangement eventually exceed any given partial sum of the original, so both sides are \(+\infty\).) Let \(\sigma \) be any rearrangement, and fix \(\epsilon > 0\). Let \(S = \sum _{k=1}^{\infty}a_k\).

By definition of convergence, there exists \(N_1\) such that for all \(n > N_1\), we have \[\left\lvert S - \sum _{k=1}^{n} a_k \right\rvert < \frac{\epsilon }{2}.\] Likewise, since \(\sum _{k=1}^{\infty} \left\lvert a_k \right\rvert\) converges, its partial sums are Cauchy. Thus, there exists \(N_2\) such that for all \(n > m > N_2\), \[\sum _{k=1}^{n} \left\lvert a_k \right\rvert - \sum _{k=1}^{m} \left\lvert a_k \right\rvert = \sum _{k=m+1}^{n} \left\lvert a_k \right\rvert < \frac{\epsilon }{2}.\] Let \(N = \max \left\lbrace N_1, N_2 \right\rbrace + 1\).

Since \(\sigma \) is bijective, it has an inverse. Let \[M = \max \left\lbrace \sigma ^{-1}(1), \sigma ^{-1}(2), \ldots, \sigma ^{-1}(N) \right\rbrace.\] The idea is if we sum \(a _{\sigma (1)} + a _{\sigma (2)} + \cdots + a _{\sigma (M)}\), we will for sure include the terms \(a_1 + a_2 + \cdots + a_N\) in the sum, which is within \(\frac{\epsilon }{2}\) of the limit \(S\). There will be very many leftover terms, but then we use the absolute convergence to argue that they do not move the partial sum \(a_1+\cdots + a_N\) by very much.

If you can get that main idea down, I’m happy. The remainder of this proof will be the technical presentation of this idea.

For any \(m > M\), define \(M' = \max \left\lbrace \sigma (1), \sigma (2), \ldots, \sigma (m) \right\rbrace,\) noting that \(M' > N\): since \(\sigma ^{-1}(1), \ldots, \sigma ^{-1}(N)\) are all at most \(M < m\), the indices \(1, \ldots, N\) all appear among \(\sigma (1), \ldots, \sigma (m)\), and since \(m > M \geq N\), these \(m\) distinct indices cannot all lie in \(\left\lbrace 1, \ldots, N \right\rbrace\). Then, we may split up the sum \[\sum _{k=1}^{m} a _{\sigma (k)} = a _{1} + a _{2} + \cdots + a_N + \sum _{\substack{k = 1,\ldots, m \\ \sigma (k) > N}} a _{\sigma(k)}.\] Thus, we have \[\begin{align*} \left\lvert S - \sum _{k=1}^{m}a _{\sigma (k)} \right\rvert & \leq \left\lvert S - \sum _{k=1}^{N} a_k \right\rvert + \left\lvert \sum _{\substack{k = 1,\ldots, m \\ \sigma (k) > N}} a _{\sigma (k)} \right\rvert \\ & \leq \frac{\epsilon }{2} + \sum _{\substack{k = 1,\ldots, m \\ \sigma (k) > N}} \left\lvert a _{\sigma (k)} \right\rvert \\ & \leq \frac{\epsilon }{2 } + \sum _{k=N + 1}^{M'} \left\lvert a _{k} \right\rvert \\ & \leq \frac{\epsilon }{2} + \frac{\epsilon }{2}. \end{align*} \] The first inequality is the triangle inequality. The second inequality is from our convergence condition (\(N > N_1\)) and the triangle inequality applied to the second sum. The third inequality is because the sum on the third line contains every term in the sum on the preceding line; it can only go up in value. The last inequality is from the Cauchy criterion, applied with \(M' > N > N_2\).

Thus, we have shown that for all \(m > M\), \[\left\lvert S - \sum _{k=1}^{m} a _{\sigma (k) } \right\rvert < \epsilon,\] hence the rearranged series converges to the same limit as the original series. \(\square \)
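To contrast with the earlier divergent example, here is a numerical illustration of the theorem (my own construction): the same aggressive block pattern, applied to the absolutely convergent analogue \(a_{2n-1} = \frac{1}{n^2}\), \(a_{2n} = -\frac{1}{n^2}\) whose sum is \(0\), no longer does any damage:

```python
# Same block pattern as the divergent example, but with squared denominators,
# so that the underlying series converges absolutely (and sums to 0).
terms = [1.0, -1.0, 1 / 2 ** 2, 1 / 3 ** 2, 1 / 4 ** 2]
hi = 4
for m in range(2, 7):
    terms.append(-1 / m ** 2)
    terms.extend(1 / j ** 2 for j in range(hi + 1, 4 ** m + 1))
    hi = 4 ** m

partial = sum(terms)
# The surplus of positive terms is now the tail of a convergent series, so
# the partial sums hover near the true limit 0 instead of blowing up.
```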

One can replace the “for all \(k\)” in the first condition with “for all but finitely many \(k\)” and the theorem would still be true. In fact, this is an if and only if, and the converse statement is part of a famous theorem:

Theorem 4. Riemann's Rearrangement Theorem

Let \(\left( a_k \right)\) be a sequence of real numbers for which \(\sum _{k=1}^{\infty} a_k\) exists, but \(\sum _{k=1}^{\infty} \left\lvert a_k \right\rvert\) does not. Then, for each real number \(r\), there exists a rearrangement \(\sigma \) such that \(\sum _{k=1}^{\infty}a _{\sigma (k)} = r\).

In other words, you can make a conditionally convergent series converge to any number you want, with the right rearrangement (in fact, one can also force the rearranged series to diverge to \(+\infty\) or \(-\infty\)). I won’t prove this; it’s very tedious and takes a lot of time.
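That said, the standard greedy construction behind the proof is easy to sketch numerically (this illustrates the idea, not the proof): for the alternating harmonic series, spend positive terms \(1, \frac{1}{3}, \frac{1}{5}, \ldots\) until the running total exceeds the target \(r\), then negative terms \(-\frac{1}{2}, -\frac{1}{4}, \ldots\) until it drops below, and repeat.

```python
# Greedy rearrangement of the alternating harmonic series 1 - 1/2 + 1/3 - ...:
# alternately spend unused positive terms (1/odd) and negative terms (-1/even)
# to steer the running total toward a target r.
def rearranged_partial(r, steps):
    pos, neg = 1, 2            # next odd / even denominator still unused
    total = 0.0
    for _ in range(steps):
        if total <= r:
            total += 1 / pos   # at or below target: spend a positive term
            pos += 2
        else:
            total -= 1 / neg   # above target: spend a negative term
            neg += 2
    return total

approx = rearranged_partial(0.3, 100_000)
# The running total overshoots r by at most the last term used, which tends
# to 0, so this rearrangement converges to r.
```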

Theorems like the root test and the ratio test are like great, sturdy branches in the tree of mathematics, offering structural support and footholds for those attempting to climb it. These preceding theorems, in contrast, are more like wasps’ nests, catching unsuspecting and unprepared tree-climbers who aren’t careful where they step. The point of this discussion is that infinite series, despite what their notations suggest, do not always behave the same way that sums do, and one ought to take great caution when approaching them.

Exercise 5.

Show that if \(a_k\) and \(b_k\) are two sequences for which \(\sum _{k=1}^{\infty} a_k\) and \(\sum _{k=1}^{\infty} b_k\) converge (absolutely or conditionally), then \[\sum _{k=1}^{\infty} \left( a_k + b_k \right) = \sum _{k=1}^{\infty} a_k + \sum _{k=1}^{\infty} b_k.\] Give an example of two sequences \(a_k\) and \(b_k\) for which \(\sum _{k=1}^{\infty} \left( a_k + b_k \right)\) converges absolutely, but for which neither \(\sum _{k=1}^{\infty}a_k\) nor \(\sum _{k=1}^{\infty}b_k\) converges.