7. Week 7: Power Series
Let’s start today’s discussion off by remembering how terrible power series are in one real variable. Hopefully, you have at some point learned Taylor’s theorem, which unlocks the notion of a Taylor series. It says (roughly) that if \(f\) is a smooth function near a point \(x_0\), then one has \[f(x) = \sum _{n=0}^{\infty} \frac{f ^{(n)}\left( x_0 \right)}{n!}\left( x-x_0 \right)^n\] for \(x\) sufficiently close to \(x_0\) — at least when the series converges, which, as we’re about to see, is not guaranteed.
For example, \(f(x) = \frac{1}{1-x}\) has a converging Taylor series around each point \(x_0\neq 1\), even though its Taylor series does not converge for all \(x\). The obstruction to convergence is the vertical asymptote, or “pole”, at \(x=1\). In fact, if you compute the Taylor series expansion at any point \(x_0\), you’ll find that the radius of convergence is *exactly* \(\left\lvert x_0-1 \right\rvert\), the distance from \(x_0\) to the pole.
For a function like \(f(x) = \frac{1}{1+x^2}\), \(f\) continues to have a converging Taylor series at every point, but it faces a similar issue where the Taylor series centred at \(x_0\) has a finite radius of convergence, even though \(f\) does not appear to have a pole. Since we’re in a complex analysis class, rewriting \(f(z) = \frac{1}{1+z^2}\) shows there are two poles in the complex plane, at \(z=\pm i\). So even though we can’t “see” these poles on the real line, they continue to exert their influence on the radius of convergence.
And for a most pathological example, the function defined by \(f(x) = e ^{-x ^{-2}}\) for \(x\neq 0\) and \(f(0)=0\) is smooth on \(\mathbb{R}\) and not identically zero, but *all* of its derivatives vanish at \(x=0\). So its Taylor series at \(x=0\) converges globally — to \(0\), not to \(f\)! Moving up to the complex plane illustrates the issue: \(f(z) = e ^{-z ^{-2}}\) is not complex analytic at \(z=0\) (it has an essential singularity there).
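As a quick numerical illustration (just a sketch, not part of the argument): \(f\) is strictly positive away from \(0\), even though every Taylor polynomial at \(0\) is identically zero.

```python
import math

# f(x) = exp(-1/x^2) for x != 0, f(0) = 0: smooth on the real line,
# but every Taylor coefficient at 0 vanishes, so its Taylor series
# at the origin is identically zero.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in (0.1, 0.5, 1.0):
    print(x, f(x))  # nonzero for x != 0, yet the Taylor series predicts 0
```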
These three examples are difficult to explain when our view is restricted to the real line only, but going up to the complex plane elucidates the underlying properties of each function that determine when and where we can expect their Taylor series to converge. Understanding this phenomenon is our goal for today.
The discussion will proceed in two steps: first, we’ll establish two computational theorems that allow us to explicitly determine the radius of convergence of a (complex) power series. Then, we’ll show that every complex analytic function is exactly equal to its Taylor series whenever the series converges, and that we can determine the radius of convergence by simply looking at where our function itself is analytic to begin with.
Hadamard’s Theorem and the Ratio Test
Let’s quickly recall the definition of the radius of convergence:
Definition 1.
Let \(\left\lbrace a_n \right\rbrace _{n=0}^{\infty}\) be a sequence of complex numbers. We say the series \(\sum _{n=0}^{\infty} a_n \left( z-z_0 \right)^n\) has a radius of convergence of \(R\) if the series converges absolutely whenever \(\left\lvert z-z_0 \right\rvert < R\) and diverges whenever \(\left\lvert z-z_0 \right\rvert > R\).
What’s a bit surprising is that for such a power series, the set of points on which it converges absolutely is always a disc! There are no power series that converge on differently shaped subsets of \(\mathbb{C}\). The reason is the following theorem:
Theorem 2. Hadamard's Theorem
Let \(\left\lbrace a_n \right\rbrace _{n=0}^{\infty}\) be a sequence of complex numbers, and let \[R = \liminf _{n\to\infty} \left\lvert a_n \right\rvert ^{-\frac{1}{n}}.\] Then the series \(\sum _{n=0}^{\infty} a_n z^n\) converges absolutely whenever \(\left\lvert z \right\rvert < R\) and diverges whenever \(\left\lvert z \right\rvert > R\).
One way to interpret this theorem is that for \(n\) large enough, \(\left\lvert a_n \right\rvert\) can be bounded from above approximately by \(R^{-n}\). So if \(\left\lvert z \right\rvert < R\), then the terms satisfy \(\left\lvert a_nz^n \right\rvert \lesssim \left( \frac{\left\lvert z \right\rvert}{R} \right)^n\)! That is, we can compare the power series to a converging geometric series. A similar comparison explains why the power series diverges whenever \(\left\lvert z \right\rvert > R\). In particular, this theorem shows that the set of points where any power series converges absolutely must be a disc, and nothing else.
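As a sanity check on Hadamard’s formula, here’s a small numerical sketch (the function name and the tail-minimum approximation of the \(\liminf\) are my own ad hoc choices): for \(a_n = 2^n\), the formula gives \(R = \liminf_n 2^{-1} = \frac{1}{2}\).

```python
def root_test_estimate(a, n_start=100, n_end=200):
    # Crudely approximate R = liminf |a_n|^(-1/n) by taking the
    # minimum of |a_n|^(-1/n) over a tail of large n.
    return min(abs(a(n)) ** (-1.0 / n) for n in range(n_start, n_end))

# For a_n = 2^n, every term |a_n|^(-1/n) equals 1/2, so R = 1/2.
print(root_test_estimate(lambda n: 2.0 ** n))
```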
Computing with \(\left\lvert a_n \right\rvert ^{-\frac{1}{n}}\) can be really tedious and difficult, and oftentimes people defer to the ratio test. You’ve probably seen a weaker version of this theorem before, but here it is:
Theorem 3. The Ratio Test
Let \(\left\lbrace a_n \right\rbrace _{n=0}^{\infty}\) be a sequence of complex numbers, and let \(R\) be the radius of convergence of \(\sum _{n=0}^{\infty} a_n z^n\). Then, \[\liminf _{n\to\infty} \left\lvert \frac{a _{n}}{a _{n+1}} \right\rvert \leq R \leq \limsup _{n\to\infty} \left\lvert \frac{a _{n}}{ a_{n+1}} \right\rvert.\]
When \(\lim _{n\to\infty} \left\lvert \frac{a _{n}}{a_{n+1}} \right\rvert\) exists, the \(\liminf\) and \(\limsup\) are exactly the same thing, and so this limit does produce the actual radius of convergence. The idea behind this theorem is the same: we are looking for a way to compare the coefficients \(a_n\) to \(R ^{-n}\).
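Here’s a quick numerical sketch of this comparison (the helper name is mine): for \(a_n = \frac{1}{n!}\), the ratio of consecutive coefficients is \(\frac{(n+1)!}{n!} = n+1\), which blows up — consistent with the series for \(e^z\) having an infinite radius of convergence.

```python
import math

def coefficient_ratios(a, ns):
    # Compute |a_n / a_(n+1)|; when these converge, the limit is
    # the radius of convergence.
    return [abs(a(n) / a(n + 1)) for n in ns]

# For a_n = 1/n!, the ratio is (n+1)!/n! = n + 1 -> infinity, so R = infinity.
print(coefficient_ratios(lambda n: 1 / math.factorial(n), [10, 50, 100]))
```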
Example 4.
Compute the radius of convergence of the following series, using either the root test or the ratio test (if applicable).
- \(\sum _{n=0}^{\infty} \frac{1}{n!} \cdot z^n\).
- \(\sum _{n=0}^{\infty} \frac{1}{(2n)!} \cdot z ^{2n}\).
- \(\sum _{n=0}^{\infty} \frac{4^n}{n!+3n} \cdot z^n\).
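For instance, a quick sketch of the third series, using the ratio of consecutive coefficients (whose limit, when it exists, gives the radius of convergence):

```latex
a_n = \frac{4^n}{n! + 3n}, \qquad
\left\lvert \frac{a_n}{a_{n+1}} \right\rvert
  = \frac{(n+1)! + 3(n+1)}{4\left( n! + 3n \right)}
  \sim \frac{n+1}{4} \longrightarrow \infty,
```

so the third series converges for every \(z\in\mathbb{C}\). The first two also have infinite radius of convergence — they are the series for \(e^z\) and \(\cosh z\).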
Explicitly computing the radius of convergence using these two theorems really, really sucks. I would recommend against it if you have other options, which will be presented shortly. The second example above illustrates a glaring problem with the computational approach: the root test can be really annoying to apply, but the ratio test is completely useless there, since every other coefficient vanishes and the consecutive ratios aren’t even well-defined! At best, the ratio test tells us that \(0 \leq R \leq \infty\).
Complex Analytic Functions are Analytic
As mentioned earlier, complex analytic functions are really nice because they are always exactly equal to their Taylor series whenever the series converges. Moreover, it’s easy to figure out when the Taylor series converges: just look for a “pole”! To be more precise,
Theorem 5.
Let \(f\) be complex analytic at \(z_0\), and let \(R\) be the radius of convergence of \(\sum _{n=0}^{\infty} \frac{f ^{(n)}\left( z_0 \right)}{n!} \left( z-z_0 \right)^n\). Then \(R\) is the largest real number (possibly \(\infty\)) such that \(f\) extends to an analytic function on \(\left\lbrace z\in \mathbb{C} : \left\lvert z-z_0 \right\rvert < R \right\rbrace.\) Informally, \(R\) is the distance from \(z_0\) to the nearest singularity of \(f\).
In words, the Taylor series of \(f\) centred at \(z_0\) converges in the largest possible disc before running into a singularity or another such issue.
Example 6.
Compute the radius of convergence of the Taylor series of \(f(x) = \frac{1}{1+x^2}\) centred at \(x_0 = 5\).
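A sketch of how this goes, viewing \(f\) as a function of a complex variable: the denominator \(1+z^2\) vanishes at \(z = \pm i\), so the radius of convergence is the distance from the centre to the nearest of these poles,

```latex
R = \operatorname{dist}\left( 5, \{ i, -i \} \right)
  = \left\lvert 5 - i \right\rvert
  = \sqrt{25 + 1}
  = \sqrt{26}.
```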
Example 7.
Compute the radius of convergence of the Taylor series of \(f(z) = \operatorname{Log}\left( 1+z \right)\) centred at \(z=-4+3i\).
There is a bit of a subtlety with the logarithm. Although the standard branch of \(\operatorname{Log}\) does not extend to the negative real axis, we can always consider another branch of \(\operatorname{Log}\) that does. The only “real” obstruction to the convergence of \(\operatorname{Log}\) is wherever its argument is zero (or infinite, I suppose).
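With that subtlety in mind, here is a sketch for Example 7: the only genuine obstruction is where the argument \(1+z\) vanishes, namely at \(z=-1\), so

```latex
R = \left\lvert (-4 + 3i) - (-1) \right\rvert
  = \left\lvert -3 + 3i \right\rvert
  = 3\sqrt{2}.
```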
Example 8.
Compute the radius of convergence of the Taylor series of \(f(z) = \operatorname{Log} \left( \frac{1}{e^z+1} \right)\) centred at the origin.
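A sketch: the argument \(\frac{1}{e^z+1}\) is never zero, but it blows up wherever \(e^z = -1\), i.e. at \(z = (2k+1)\pi i\) for integers \(k\). The closest such points to the origin are \(\pm \pi i\), so

```latex
R = \left\lvert \pi i - 0 \right\rvert = \pi.
```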
Consequences
One of the most important consequences of this fact — that complex analytic functions do agree with their Taylor series — is the concept of analytic continuation. It loosely states that if two analytic functions are “similar enough”, then they must be the same function. More precisely,
Theorem 9. Analytic Continuation
Suppose \(D\subseteq \mathbb{C}\) is a domain and \(f\) is analytic on \(D\). Let \(Z=\left\lbrace z\in D: f(z) = 0 \right\rbrace\) be the set of zeroes of \(f\). Then either \(Z\) contains no limit points in \(D\), or \(Z=D\) (i.e. \(f\) is identically zero).
What this is saying is that \(f\) cannot have a bunch of zeroes that accumulate inside \(D\). However, it is worth noting that these zeroes can accumulate on the boundary of \(D\), even if \(f\) is not identically zero.
Exercise 10.
Give an example of a domain \(D\) and a nonzero function \(f\) such that, for some sequence of complex numbers \(z_n\in D\) converging to a point on \(\partial D\), \(f\left( z_n \right) = 0\) for all \(n\).
So, if \(f\) and \(g\) are two analytic functions on a domain \(D\), then the set of points where \(f(z) = g(z)\) cannot accumulate inside \(D\) either — apply the theorem to \(f(z) - g(z)\)!
The rationale for the theorem is (very loosely) as follows: if \(f\) has “too many zeroes” inside \(D\), i.e. the zeroes accumulate to some \(z_0\), then the derivatives of \(f\) must all vanish at \(z_0\). But since \(f\) agrees with its power series at \(z_0\), and its power series would be identically zero, it would follow that \(f\) itself is identically zero!
Some Applications of Power Series
I probably won’t have time to get to this in class, but the ramifications of the existence of such power series are twofold.
The most immediate consequence is that we can properly define what it means to have a “zero of order \(n_0\)”. If \(f\) is not identically zero and has a zero at some point \(z_0\), then its power series must be of the form \(f(z) = \sum _{n=n_0}^{\infty} a_n \left( z-z_0 \right)^n\) for some \(n_0 > 0\) and \(a _{n_0}\neq 0\). That is, \[f(z) = \left( z-z_0 \right) ^{n_0}\cdot \sum _{n=0}^{\infty} a _{n+n_0}\left( z-z_0 \right)^n,\] and \(\frac{f(z)}{\left( z-z_0 \right) ^{n_0}}\) is analytic and nonzero at \(z_0\). The number \(n_0\) is the order of the zero at \(z_0\), and this factorisation says that we can “divide out the zero” with a polynomial factor. This is not true in the real setting: consider \(f(x) = x ^{\frac{1}{3}}\), which has a zero at \(x=0\) that *can’t* be divided out by a polynomial factor.
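Here’s a small numerical sketch of this factorisation (the function name is my own): \(g(x) = \sin(x) - x\) has a zero of order \(3\) at the origin, and dividing out \(x^3\) leaves something with a nonzero limit.

```python
import math

def g(x):
    # sin(x) - x = -x^3/6 + x^5/120 - ... has a zero of order 3 at 0.
    return math.sin(x) - x

for x in (0.1, 0.01, 0.001):
    # g(x)/x^3 approaches the nonzero leading coefficient -1/6.
    print(x, g(x) / x**3)
```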
Likewise, if \(f(z)\) has a “vertical asymptote” at \(z=z_0\), then there may exist some positive integer \(n_0\) such that \(\left( z-z_0 \right) ^{n_0}f(z)\) is analytic and nonzero at \(z_0\). In this scenario, we say \(f\) has a pole of order \(n_0\) at \(z_0\), meaning that we can neutralise the singularity with a polynomial factor. In particular, we can express \(f\) as a (Laurent) series \[f(z) = \sum _{n=-n_0}^{\infty} a_n \left( z-z_0 \right)^n,\] which includes terms of the form \(\frac{1}{(z-z_0)^n}\)!
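And a matching numerical sketch for poles (again just illustrative, with my own function name): \(h(x) = \frac{\cos x}{x^2}\) blows up at the origin, but multiplying by \(x^2\) neutralises the singularity, so the pole has order \(2\).

```python
import math

def h(x):
    # cos(x)/x^2 has a pole of order 2 at 0.
    return math.cos(x) / x**2

for x in (0.1, 0.01, 0.001):
    # x^2 * h(x) = cos(x) tends to cos(0) = 1, a finite nonzero limit.
    print(x, x**2 * h(x))
```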
Both of these facts will come into play when we talk about residues — these power series expansions can be used in conjunction with Cauchy’s integral formulae to integrate functions with non-polynomial vanishing denominators.