Hunter Liu's Website

6. Continuity


We’ve built up a lot of theory surrounding limits, sequences, and series, but calculus concerns itself with differentiable and integrable functions. You may remember from aeons ago that you first dipped your toes into continuous functions before studying those two classes of functions, and we’ll be laying our foundations there as well.

The heuristic that’s often repeated is, “A continuous function is one whose graph can be drawn with a single pen stroke.” I humbly request that you wipe this image of a false promise from your mind. The definition of continuity does not impose restrictions on the graph of a function and how practical it is to draw.

There are two definitions that you need to have committed to memory some time before the midterm exam:

Definition 1. (Sequential) Continuity

Let \(D\subseteq \mathbb{R}\), and \(f: D\to \mathbb{R}\). \(f\) is continuous at a point \(x_0\in D\) if, for all sequences \(\left\lbrace x_n \right\rbrace\subseteq D\) converging to \(x_0\), \(f\left( x_n \right)\to f\left( x_0 \right)\). \(f\) is continuous if it’s continuous at each point in \(D\).

Definition 2. (\(\epsilon \)-\(\delta \)) Continuity

Let \(D\subseteq \mathbb{R}\) and \(f: D\to \mathbb{R}\). \(f\) is continuous at a point \(x_0\in D\) if for all \(\epsilon > 0\), there exists \(\delta > 0\) such that \[\left\lvert x-x_0 \right\rvert < \delta \implies \left\lvert f(x) - f\left( x_0 \right) \right\rvert < \epsilon .\] \(f\) is continuous if it’s continuous at each point in \(D\).

In class, you’ve proved that the two definitions above are actually equivalent. Though equivalent, they are useful in different contexts, and our goal today is to give a few examples of when to use one over the other.

By the way, a heuristic to have in mind is that a function \(f\) is continuous at a point \(x_0\) if “\(x\approx x_0\) implies \(f(x) \approx f \left( x_0 \right)\)”. In the \(\epsilon\)-\(\delta \) definition, \(\delta \) quantifies the first \(\approx\) while \(\epsilon \) quantifies the second \(\approx\).
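To make the quantifier order concrete, here’s a small numerical spot check. This is a sketch, not a proof — sampling can only refute a candidate \(\delta \), never verify one — and the helper `check_epsilon_delta` and the example \(f(x) = x^2\), \(x_0 = 2\) are my own illustrative choices, not anything from the text above.

```python
import numpy as np

def check_epsilon_delta(f, x0, eps, delta, n_samples=10_001):
    """Sample points near x0 and test |f(x) - f(x0)| < eps.

    A numerical spot check only: it can refute a candidate delta
    by finding a bad sample point, but it cannot prove continuity.
    """
    # drop the endpoints to keep |x - x0| < delta strict
    xs = x0 + np.linspace(-delta, delta, n_samples)[1:-1]
    return bool(np.all(np.abs(f(xs) - f(x0)) < eps))

# f(x) = x^2 at x0 = 2: |x^2 - 4| = |x - 2| * |x + 2| <= 5 * delta
# once delta <= 1, so delta = min(1, eps / 5) should work...
eps = 0.01
print(check_epsilon_delta(lambda x: x**2, 2.0, eps, min(1, eps / 5)))  # True
# ...while delta = eps itself is too generous for this f and x0:
print(check_epsilon_delta(lambda x: x**2, 2.0, eps, eps))  # False
```

The failing second call is the point: \(\delta \) genuinely depends on both \(\epsilon \) and the function’s behavior near \(x_0\).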

In general, you should try your hardest to avoid using the definitions directly when you’re trying to prove that a function is continuous. Instead, for most “ordinary” functions you may run into, you should appeal to the limit laws for sequences or the fact that products, sums, etc. of continuous functions remain continuous.

Example 3.

Prove that the function \(f(x) = x^3-2x+7\) is continuous on \(\mathbb{R}\).

Solution
Since \(x\) is continuous on \(\mathbb{R}\), \(x^3 = x\cdot x\cdot x\) is continuous on \(\mathbb{R}\). Since \(2\) and \(7\) are continuous functions on \(\mathbb{R}\) (viewing them as constant functions), \(2x\) and \(7\) are continuous functions on \(\mathbb{R}\). Finally, \(x^3 - 2x + 7\) is continuous, since it’s a sum and difference of continuous functions. \(\square\)

However, in other scenarios, such simple arguments do not suffice.

Problem 4.

Define the piecewise function \[f(x) = \begin{cases} 2x + 1 & x > 0, \\ -3x+1 & x \leq 0. \end{cases} \] Prove \(f\) is continuous at \(0\).

Solution

We wish to show that if \(x \approx 0\), then \(f(x) \approx f(0) = 1.\)

Let \(\epsilon > 0\). Notice that \[\left\lvert f(x) - 1 \right\rvert = 2 \left\lvert x \right\rvert\] when \(x > 0\) and that \[\left\lvert f(x) - 1 \right\rvert = 3 \left\lvert x \right\rvert\] when \(x \leq 0\). Therefore, if \(\left\lvert x \right\rvert < \frac{\epsilon }{3}\) (here we are quantifying “\(x\approx 0\)”), then \(\left\lvert f(x) -1\right\rvert<\frac{2 \epsilon}{3} <\epsilon\) when \(x > 0\) and \(\left\lvert f(x)-1 \right\rvert<\epsilon \) when \(x \leq 0\). In any case, \(\left\lvert x-0 \right\rvert < \frac{\epsilon }{3}\) implies \(\left\lvert f(x) - f(0) \right\rvert < \epsilon \). Taking \(\delta = \frac{\epsilon }{3}\) concludes the proof. \(\square\)
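As a quick sanity check of the algebra — a numerical sketch with my own choice of sample grid, not part of the proof — one can sample the piecewise function and confirm that \(\delta = \frac{\epsilon }{3}\) keeps \(\left\lvert f(x) - f(0) \right\rvert\) below \(\epsilon \):

```python
def f(x):
    # the piecewise function from Problem 4
    return 2 * x + 1 if x > 0 else -3 * x + 1

def delta_works(eps, n=10_001):
    delta = eps / 3
    # sample a grid strictly inside (-delta, delta)
    xs = [-delta + 2 * delta * k / n for k in range(1, n)]
    return all(abs(f(x) - f(0)) < eps for x in xs)

print(all(delta_works(eps) for eps in (1.0, 0.1, 0.003)))  # True
```

The worst case is on the left of \(0\), where \(\left\lvert f(x) - 1 \right\rvert = 3\left\lvert x \right\rvert\) — exactly why the proof takes \(\frac{\epsilon }{3}\) rather than \(\frac{\epsilon }{2}\).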

You can absolutely try to prove this using sequences. However, you will end up with a bit of a headache, since your sequence of points \(x_n\) may jump between the two sides of \(0\) infinitely often, and your proof will almost surely degenerate into the same argument we gave above.

As a rule of thumb, directly proving that a function is continuous is best done using the \(\epsilon \)-\(\delta \) definition rather than the sequences. Of course there are exceptions — God gave us four fingers, after all.

Proving that a function is discontinuous is a whole nother beast. Consider the negations of the two definitions above:

A function \(f : D\to \mathbb{R}\) is discontinuous at a point \(x_0\in D\) if there exists a sequence \(\left\lbrace x_n \right\rbrace\subseteq D\) converging to \(x_0\) such that \(f\left( x_n \right)\) does not converge to \(f\left( x_0 \right)\).

Alternatively, it is discontinuous at \(x_0\) if there exists \(\epsilon > 0\) such that for all \(\delta > 0\), there exists \(x\in D\) satisfying \(\left\lvert x - x_0 \right\rvert < \delta \) while \(\left\lvert f(x) - f\left( x_0 \right) \right\rvert \geq \epsilon \).

Yeah, that latter negation is really annoying to work with, while the former is pretty reasonable. The point is, to prove a function is discontinuous, you only need to exhibit a single sequence of points with the right limits. To illustrate this, consider the following example.

Problem 5.

Prove that the function \[f(x) = \begin{cases} \sin \left( \frac{1}{x} \right) & x \neq 0 \\ 0 & x = 0 \end{cases} \] is discontinuous at \(x= 0\).

Solution
Let \(x_n = \frac{1}{\frac{\pi }{2} + 2n \pi }\). We observe \(x_n \to 0\) (this is provable in finite time). However, \(f \left( x_n \right) = \sin \left( \frac{\pi }{2} + 2n \pi \right) = 1\) for all \(n\), so \(f \left( x_n \right) \to 1\) as \(n\to\infty\). But \(f(0) = 0\), so \(f\left( x_n \right)\not\to f(0)\) despite \(x_n \to 0\). We conclude \(f\) is discontinuous at \(0\). \(\square\)

The picture for the proof is simple: the peaks of \(\sin\) begin accumulating near \(0\), and we’re taking a sequence of “peaks” to get to \(0\). These peaks maintain a value of \(1\).
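One can watch this happen numerically. This is a floating-point illustration only (the choice to print the first five terms is mine); the proof above is what actually establishes the claim.

```python
import math

def f(x):
    # the function from Problem 5
    return math.sin(1 / x) if x != 0 else 0.0

# x_n = 1 / (pi/2 + 2*pi*n) picks out the "peaks" of sin(1/x)
xs = [1 / (math.pi / 2 + 2 * math.pi * n) for n in range(1, 6)]
print(xs)                   # a positive sequence shrinking toward 0
print([f(x) for x in xs])   # each value is (numerically) 1.0, far from f(0) = 0
```

The sequence marches to \(0\) while the function values sit at \(1\), which is exactly the failure of sequential continuity.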

Here’s one last example that illustrates the differing utility of our two definitions, as well as demonstrating the uselessness of the “pen stroke heuristic” mentioned at the beginning.

Example 6.

Let \(f: \mathbb{R}\to \left[ 0,1 \right]\) be defined as \[f(x) = \begin{cases} \frac{1}{q} & x\in \mathbb{Q} \textrm{ and } x = \frac{p}{q} \textrm{ in reduced form, } \\ 0 & x\notin \mathbb{Q}. \end{cases}\] Show that \(f\) is continuous at \(x_0\) if \(x_0\) is irrational and that \(f\) is discontinuous at \(x_0\) if \(x_0\) is rational.

This is a well-known function of many names; Wikipedia calls it Thomae’s function. Looking at the function’s definition, one might expect it to be discontinuous everywhere. After all, its graph is a collection of points scattered in a loosely triangular shape, with jumps away from zero at every single rational point. With all those discontinuities, there’s no way it can be continuous anywhere.

Continuity at Irrationals

We’ll use the \(\epsilon \)-\(\delta \) definition. One can certainly make the sequential definition work, but it’s going to be a lot.

Let \(x_0\) be irrational and \(\epsilon > 0\). We need to find a \(\delta > 0\) such that \(\left\lvert f(x) - f \left( x_0 \right) \right\rvert = f(x) < \epsilon \) for all \(x\in \mathbb{R}\) with \(\left\lvert x - x_0 \right\rvert < \delta \). (Note that \(f\left( x_0 \right) = 0\) since \(x_0\) is irrational.)

If \(x\) is irrational as well, then \(f(x) = 0 < \epsilon \) is trivial. Thus, we need to concern ourselves with rational points \(x = \frac{p}{q}\) (in reduced form), at which \(f(x) = \frac{1}{q}\). We want \(\frac{1}{q} < \epsilon \).

Let \(Q \in \mathbb{N}\) such that \(\frac{1}{Q} < \epsilon \). The idea is that if \(x\) is a rational number whose denominator (in lowest terms) is bigger than \(Q\), then \(f(x) < \frac{1}{Q} < \epsilon \). There aren’t too many rational numbers where this isn’t the case, so we can choose \(\delta \) wisely to force this to happen.

Specifically, define \(\delta = \min \left\lbrace \left\lvert \frac{p}{q} - x_0 \right\rvert : q \leq Q \textrm{ and } \left\lvert \frac{p}{q} - x_0 \right\rvert \leq 1 \right\rbrace.\) This minimum exists because there are only finitely many such fractions \(\frac{p}{q}\) within distance \(1\) of \(x_0\), and it is strictly positive because each of those distances is strictly positive (otherwise, \(x_0\) would be rational).

If \(x\in \mathbb{R}\) and \(\left\lvert x - x_0 \right\rvert < \delta \), then either \(x\) is irrational, so \(\left\lvert f(x) - f\left( x_0 \right) \right\rvert = 0 < \epsilon \), or \(x = \frac{p}{q}\) (in lowest terms) is rational,
forcing \(q > Q\) by construction, hence \[\left\lvert f(x) - f\left( x_0 \right) \right\rvert = \frac{1}{q} < \frac{1}{Q} < \epsilon.\] By definition, \(f\) is continuous at \(x_0\).
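To see the construction in action, here’s a small computational sketch. The choices \(x_0 = \sqrt{2}\), \(\epsilon = 0.3\), and the sample fractions are mine, purely for illustration, and a float stands in for the irrational point:

```python
import math
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    # Fraction stores p/q in lowest terms, so this really is f(p/q) = 1/q
    return Fraction(1, x.denominator)

x0 = math.sqrt(2)  # float stand-in for an irrational point
eps = 0.3
Q = 4              # any Q with 1/Q < eps; here 1/4 = 0.25 < 0.3

# all fractions p/q with q <= Q lying within (roughly) a unit of x0
candidates = [Fraction(p, q) for q in range(1, Q + 1)
              for p in range(math.floor((x0 - 1) * q), math.ceil((x0 + 1) * q) + 1)]
delta = min(abs(float(c) - x0) for c in candidates)  # strictly positive

# rationals that sneak inside (x0 - delta, x0 + delta) are forced to have
# large denominators, hence small values of f:
close = [c for c in (Fraction(141, 100), Fraction(17, 12), Fraction(41, 29))
         if abs(float(c) - x0) < delta]
print(delta > 0, all(thomae(c) < eps for c in close))  # True True
```

Here \(\delta \) comes out to \(\left\lvert \frac{4}{3} - \sqrt{2} \right\rvert \approx 0.081\): the fraction with denominator at most \(4\) nearest to \(\sqrt 2\) sets the radius, just as in the proof.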

Discontinuity at Rationals
Let \(x_0\in \mathbb{Q}\), say \(x_0 = \frac{p}{q}\) in reduced form, so that \(f\left( x_0 \right) = \frac{1}{q} > 0\). Define \(x_n = x_0 + \frac{\pi }{n}\). Clearly \(x_n\notin \mathbb{Q}\) for any \(n\), and \(x_n\to x_0\) as \(n\to\infty\). However, this means \(f\left( x_n \right) = 0\) for all \(n\), hence \[f\left( x_n \right)\to 0 \neq f\left( x_0 \right).\] It follows that \(f\) is discontinuous at \(x_0\). \(\square\)