
7. Rearrangement of Series


Last week, we spent some time developing the root test and the ratio test for the convergence of series; these are two of a great number of applications of the theory we have been developing most recently.

Today, let’s zoom in on series a bit and focus on one topic that is oft left alone: rearranging series. A staple of math education is the commutative law of addition: you can add numbers in whatever order you like and still get the same answer.

One might expect that the commutative law of addition holds for series, which are just “infinite sums”. A series $\sum_{k=1}^{\infty} a_k$ should give you the same answer, even if you add the terms in a different order. However, returning to the definition of a series illuminates some shadows cast by all-too-suggestive notation.

Definition 1. Series

If $\left( a_k \right)$ is a sequence of real numbers, the series $\sum_{k=1}^{\infty} a_k$ is defined as the limit of partial sums \[ \lim_{n\to\infty} S_n, \qquad \text{where} \qquad S_n = \sum_{k=1}^{n} a_k, \] as long as the limit exists.
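To make the definition concrete, here is a tiny numerical sketch (the choice of the example series $\sum_{k=1}^{\infty} 2^{-k}$ is mine, not from the notes): the partial sums $S_n$ creep up towards the limit $1$.

```python
# Partial sums S_n = sum_{k=1}^{n} a_k for the example a_k = 1/2^k.
# The series converges, and S_n approaches the limit 1.
partial_sum = 0.0
for n in range(1, 21):
    partial_sum += 1.0 / 2**n   # add the term a_n
    if n % 5 == 0:
        print(f"S_{n} = {partial_sum:.6f}")
```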

Rearranging the order in which we list the terms $a_k$ will more likely than not change the sequence of partial sums. But let’s not get ahead of ourselves: let’s first pin down what a “rearrangement” of a series is.

Formally, a rearrangement is a bijective function $\sigma : \mathbb{N} \to \mathbb{N}$. The rearranged sum is $\sum_{k=1}^{\infty} a_{\sigma(k)}$. One should think of $\sigma$ as a “relabelling” of the indices: $\sigma(k)$ is the original index of the term appearing in position $k$ of the rearranged series. The bijectivity condition ensures that every index eventually gets summed over and that no index is repeated in the rearranged series.

A simple example of a rearrangement function is swapping pairs of adjacent indices: rather than summing $a_1 + a_2 + a_3 + a_4 + \cdots$, we would add the terms in the order $a_2 + a_1 + a_4 + a_3 + \cdots$. In this case, $\sigma(1) = 2$, $\sigma(2) = 1$, $\sigma(3) = 4$, $\sigma(4) = 3$, and so on. The even-indexed partial sums of the two series agree, and since the terms of a convergent series tend to zero, if either series (unperturbed or rearranged) converges, so shall the other.
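As a quick sanity check of this particular rearrangement, here is a small numerical sketch (the function name `swapped` and the choice of test sequence $a_k = (-1)^{k+1}/k$ are my own, purely for illustration): the even-indexed partial sums of the original and rearranged series agree exactly.

```python
def swapped(k):
    """The swap-adjacent-pairs rearrangement: sigma(1) = 2, sigma(2) = 1, sigma(3) = 4, ..."""
    return k + 1 if k % 2 == 1 else k - 1

def a(k):
    """A test sequence (my choice, just for illustration): a_k = (-1)^(k+1) / k."""
    return (-1) ** (k + 1) / k

original, rearranged = 0.0, 0.0
for n in range(1, 11):
    original += a(n)
    rearranged += a(swapped(n))
    print(n, round(original, 4), round(rearranged, 4))
# The two partial sums agree whenever n is even.
```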

However, there are more sophisticated rearrangements out there, and in general you cannot rearrange sums willy-nilly. Here’s an explicit example: define the sequence \[ a_{2n-1} = \frac{1}{n} \qquad \text{and} \qquad a_{2n} = -\frac{1}{n}. \] The first several terms of this sequence are \[ 1, \; -1, \; \frac{1}{2}, \; -\frac{1}{2}, \; \frac{1}{3}, \; -\frac{1}{3}, \; \frac{1}{4}, \; -\frac{1}{4}, \; \ldots \]

Exercise 2.

Show that $\sum_{k=1}^{\infty} a_k = 0$, with $a_k$ defined as above.

I won’t provide the explicit rearrangement (my soul cannot bear it), but here’s the pattern: leave the first two terms alone, then sum more and more and more of the positive terms, dropping in one negative term at a time. \begin{align*} 1 - 1 &+ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} \\ -\frac{1}{2} &+ \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \cdots + \frac{1}{16} \\ -\frac{1}{3} &+ \frac{1}{17} + \cdots + \frac{1}{32} + \frac{1}{33} + \cdots + \frac{1}{64} \\ -\frac{1}{4} &+ \cdots \end{align*}

In the first line, the $1$ and $-1$ cancel, and the remaining three terms add to something larger than $1$. On the second line, we subtract $\frac{1}{2}$. Then, $\frac{1}{5} + \cdots + \frac{1}{8} > \frac{1}{2}$, since each of the four terms is at least $\frac{1}{8}$ (and the first three are strictly larger). Likewise, $\frac{1}{9} + \cdots + \frac{1}{16} > \frac{1}{2}$. The positive terms on the second line therefore add up to more than $1$. Thus, we gained a net positive on the second line.

Similarly, the positive terms on the third line add up to more than $1$ while we only subtracted off $\frac{1}{3} < \frac{1}{2}$, netting an increase of at least $\frac{1}{2}$. This happens on every single line!
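If you’d like to see this numerically, the following sketch (the helper name `rearranged_blocks` and the block bookkeeping are my own reading of the pattern above; each run of positive terms is grouped with the negative term that follows it) prints the partial sum of the rearranged series after each block: it keeps climbing, even though the original series sums to $0$.

```python
def rearranged_blocks(num_blocks):
    """Yield the partial sum of the rearranged series after each block.

    Block 0 is 1 - 1.  Block k (k >= 1) adds the positive terms 1/n for
    n = 4**(k-1) + 1, ..., 4**k, then the single negative term -1/(k+1).
    """
    total = 1.0 - 1.0
    yield total
    for k in range(1, num_blocks + 1):
        for n in range(4 ** (k - 1) + 1, 4**k + 1):
            total += 1.0 / n
        total -= 1.0 / (k + 1)
        yield total

for k, s in enumerate(rearranged_blocks(8)):
    print(f"after block {k}: partial sum ≈ {s:.3f}")
```

Each block adds more than $\frac{1}{2}$ to the running total, matching the argument above.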

We’ve therefore given an example of two series, one a rearrangement of the other, wherein one converges to $0$ and the other diverges. What’s happened is exactly what I alluded to earlier: by selecting a pathological rearrangement of our series, I’ve changed my sequence of partial sums by so much that they no longer converge to anything.

Part of the problem is that the sequence itself has enough “mass” to throw around: I was able to leverage an inexhaustible supply of positive terms while drip-feeding only a little bit of negative cancellation. If the series had converged absolutely, I wouldn’t have been able to keep pushing the partial sums upward and upward ad infinitum the same way as above. We can formalise this intuition as:

Theorem 3.

$\sum_{k=1}^{\infty} a_k = \sum_{k=1}^{\infty} a_{\sigma(k)}$ for all rearrangements $\sigma$ if either $a_k \geq 0$ for all $k$ or $\sum_{k=1}^{\infty} \left\lvert a_k \right\rvert < \infty$ (i.e., the series converges absolutely).

Proof

Suppose the series converges absolutely. Let $\sigma$ be any rearrangement, and fix $\epsilon > 0$. Let $S = \sum_{k=1}^{\infty} a_k$.

By definition of convergence, there exists $N_1$ such that for all $n > N_1$, we have \[ \left\lvert S - \sum_{k=1}^{n} a_k \right\rvert < \frac{\epsilon}{2}. \] Likewise, since $\sum_{k=1}^{\infty} \left\lvert a_k \right\rvert$ converges, its partial sums are Cauchy. Thus, there exists $N_2$ such that for all $n > m > N_2$, \[ \sum_{k=1}^{n} \left\lvert a_k \right\rvert - \sum_{k=1}^{m} \left\lvert a_k \right\rvert = \sum_{k=m+1}^{n} \left\lvert a_k \right\rvert < \frac{\epsilon}{2}. \] Let $N = \max \left\lbrace N_1, N_2 \right\rbrace + 1$.

Since $\sigma$ is bijective, it has an inverse. Let \[ M = \max \left\lbrace \sigma^{-1}(1), \sigma^{-1}(2), \ldots, \sigma^{-1}(N) \right\rbrace. \] The idea is that if we sum $a_{\sigma(1)} + a_{\sigma(2)} + \cdots + a_{\sigma(M)}$, we will for sure include the terms $a_1 + a_2 + \cdots + a_N$ in the sum, and this partial sum is within $\frac{\epsilon}{2}$ of the limit $S$. There will be very many leftover terms, but there we use the absolute convergence to argue that they do not stray away from the partial sum $a_1 + \cdots + a_N$ by very much.

If you can get that main idea down, I’m happy. The remainder of this proof will be the technical presentation of this idea.

For any $m > M$, define \[ M' = \max \left\lbrace \sigma(1), \sigma(2), \ldots, \sigma(m) \right\rbrace, \] noting that $M' > N$: the indices $\sigma^{-1}(1), \ldots, \sigma^{-1}(N)$ are all at most $M < m$ by construction, so $\sigma(1), \ldots, \sigma(m)$ are $m > N$ distinct indices that include all of $1, \ldots, N$. Then, we may split up the sum \[ \sum_{k=1}^{m} a_{\sigma(k)} = a_1 + a_2 + \cdots + a_N + \sum_{\substack{k = 1, \ldots, m \\ \sigma(k) > N}} a_{\sigma(k)}. \] Thus, we have \begin{align*} \left\lvert S - \sum_{k=1}^{m} a_{\sigma(k)} \right\rvert & \leq \left\lvert S - \sum_{k=1}^{N} a_k \right\rvert + \left\lvert \sum_{\substack{k = 1, \ldots, m \\ \sigma(k) > N}} a_{\sigma(k)} \right\rvert \\ & \leq \frac{\epsilon}{2} + \sum_{\substack{k = 1, \ldots, m \\ \sigma(k) > N}} \left\lvert a_{\sigma(k)} \right\rvert \\ & \leq \frac{\epsilon}{2} + \sum_{k=N+1}^{M'} \left\lvert a_k \right\rvert \\ & < \frac{\epsilon}{2} + \frac{\epsilon}{2}. \end{align*} The first inequality is the triangle inequality. The second inequality is from our convergence condition (since $N > N_1$) together with the triangle inequality applied to the second sum. The third inequality holds because the sum on the third line contains every term of the sum on the preceding line, and only nonnegative terms besides, so it can only go up in value. The last inequality is the Cauchy criterion, applied with $M' > N > N_2$.

Thus, we have shown that for all $m > M$, \[ \left\lvert S - \sum_{k=1}^{m} a_{\sigma(k)} \right\rvert < \epsilon, \] hence the rearranged series converges to the same limit as the original series. $\square$
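For contrast with the divergent example earlier, here is a small numerical sketch (the squared sequence $b_{2n-1} = 1/n^2$, $b_{2n} = -1/n^2$ and the helper name are my own choices, not from the notes): feeding the very same block rearrangement an absolutely convergent series, the partial sums stay bounded and creep back toward the true sum $0$ (slowly, roughly like $1/k$), just as the theorem promises, rather than growing without bound.

```python
def rearranged_blocks_abs(num_blocks):
    """Apply the same block rearrangement to the absolutely convergent
    sequence b_{2n-1} = 1/n**2, b_{2n} = -1/n**2, whose sum is 0.

    Block k (k >= 1) adds the positive terms 1/n**2 for
    n = 4**(k-1) + 1, ..., 4**k, then the negative term -1/(k+1)**2.
    """
    total = 1.0 - 1.0          # b_1 + b_2 = 1 - 1
    yield total
    for k in range(1, num_blocks + 1):
        for n in range(4 ** (k - 1) + 1, 4**k + 1):
            total += 1.0 / n**2
        total -= 1.0 / (k + 1) ** 2
        yield total

for k, s in enumerate(rearranged_blocks_abs(10)):
    print(f"after block {k}: partial sum ≈ {s:.4f}")
```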

One can replace the “for all $k$” with “for all but finitely many $k$” and the theorem would still be true. In fact, this is an if and only if, and the converse statement is part of a famous theorem:

Theorem 4. Riemann's Rearrangement Theorem

Let $\left\lbrace a_k \right\rbrace$ be a sequence of real numbers for which $\sum_{k=1}^{\infty} a_k$ exists, but $\sum_{k=1}^{\infty} \left\lvert a_k \right\rvert$ does not. Then, for each real number $r$, there exists a rearrangement $\sigma$ such that $\sum_{k=1}^{\infty} a_{\sigma(k)} = r$.

In other words, you can make a conditionally convergent series converge to any number you want, with the right rearrangement. I won’t prove this; it’s very tedious and takes a lot of time.
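Even without a proof, the mechanism is easy to see in action. Here is a small numerical sketch (the greedy strategy, the function name, and the choice of the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \cdots$ are my own illustration, not part of the notes): take unused positive terms while the running total is below the target $r$, and unused negative terms while it is above. Since the positive and negative parts each diverge and the terms shrink to zero, the partial sums home in on $r$.

```python
def greedy_rearrangement(r, n_terms=200_000):
    """Sum a greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ... whose
    partial sums approach the target r; returns the n_terms-th partial sum."""
    next_pos = 1   # next unused positive term is 1/next_pos (odd denominators)
    next_neg = 2   # next unused negative term is -1/next_neg (even denominators)
    total = 0.0
    for _ in range(n_terms):
        if total <= r:
            total += 1.0 / next_pos
            next_pos += 2
        else:
            total -= 1.0 / next_neg
            next_neg += 2
    return total

print(greedy_rearrangement(2.0))   # ≈ 2.0, though the original series sums to ln 2 ≈ 0.693
print(greedy_rearrangement(-1.0))  # ≈ -1.0
```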

Theorems like the root test and the ratio test are like great, sturdy branches in the tree of mathematics, offering structural support and footholds for those attempting to climb it. These preceding theorems, in contrast, are more like wasps’ nests, catching unsuspecting and unprepared tree-climbers who aren’t careful where they step. The point of this discussion is that infinite series, despite what their notation suggests, do not always behave the same way that finite sums do, and one ought to take great caution when approaching them.

Exercise 5.

Show that if $a_k$ and $b_k$ are two sequences for which $\sum_{k=1}^{\infty} a_k$ and $\sum_{k=1}^{\infty} b_k$ converge (absolutely or conditionally), then \[ \sum_{k=1}^{\infty} \left( a_k + b_k \right) = \sum_{k=1}^{\infty} a_k + \sum_{k=1}^{\infty} b_k. \] Give an example of two sequences $a_k$ and $b_k$ for which $\sum_{k=1}^{\infty} \left( a_k + b_k \right)$ converges absolutely, but for which neither $\sum_{k=1}^{\infty} a_k$ nor $\sum_{k=1}^{\infty} b_k$ converges.