Hunter Liu's Website

8. Derivatives and Local-to-Global Correspondences


Our focus today will be somewhat philosophical. Suppose I told you that \(f : [-1, 1]\to \mathbb{R}\) was continuous at \(0\) and \(f(0) = 0\). Is there anything meaningful you can say about the function \(f\)? Unfortunately, not really — continuity is a “local” statement in the sense that it only contains information about \(f\) at points infinitesimally close to \(0\). You cannot, for instance, predict how large \(\left\lvert f \left( 10 ^{-100} \right) \right\rvert\) is.

Likewise, if I told you \(f\) was also differentiable at \(0\) and that \(f’(0) = 0\), you still cannot say anything meaningful or even quantitative about \(f\). You are still only exposed to some infinitesimal amount of data.

However, if I told you that \(f\) was continuous on its entire domain \([-1, 1]\), you can say some qualitatively useful statements about \(f\): it achieves a global maximum and minimum somewhere (the extreme value theorem); its image is an interval (the intermediate value theorem); uhm I think that’s it. While you still can’t pin down quantitative information about any one point of \(f\), somehow you can put together a lot of infinitesimal data into a global statement about your function. This is an example of a “local-to-global correspondence”.

Let’s now analyse what information we can extract from derivatives, beginning with the definition.

Definition 1.

Let \(a < b\) be real numbers or \(\pm\infty\) and suppose \(f : (a, b) \to \mathbb{R}\) is a function. \(f\) is differentiable at \(x_0 \in (a, b)\) if the limit \[\lim _{x\to x_0} \frac{f(x) - f \left( x_0 \right)}{x - x_0}\] exists as a real number. This limit is denoted \(f’ \left( x_0 \right)\), the derivative of \(f\) at \(x_0\).

If \(f’ \left( x_0 \right)\) exists for all \(x_0 \in (a, b)\), then \(f\) is differentiable on \((a, b)\).

Here, we assume that the domain of our function is an interval so we don’t have to worry about all the terrible domain issues that arise when dealing with limits of functions.
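As a numerical sanity check on Definition 1 (a sketch only — a floating-point difference quotient merely approximates the limit, and \(\sin\) with true derivative \(\cos\) is my arbitrary choice of test function):

```python
import math

def difference_quotient(f, x0, h):
    """Evaluate (f(x0 + h) - f(x0)) / h, the quantity whose limit
    as h -> 0 defines f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

# Estimate the derivative of sin at x0 = 1; the true value is cos(1).
estimates = [difference_quotient(math.sin, 1.0, 10.0 ** (-k)) for k in range(1, 7)]
errors = [abs(e - math.cos(1.0)) for e in estimates]
# The errors shrink as h does, consistent with the limit existing.
```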

Let’s first consider the following question:

Question 2.

Let \(f : \mathbb{R} \to \mathbb{R}\) be a function. Suppose \(f’(0)\) exists and \(f’(0) > 0\). Does this mean \(f\) is “increasing near \(0\)”? More precisely, is it necessarily true that there exists an open interval \(I\) containing \(0\) so that for every \(x < y\in I\), one has \(f(x) < f(y)\)?

It may be tempting to say, “yeah”. But consider the following example:

Exercise 3.

Let \(f : \mathbb{R}\to \mathbb{R}\) be defined as \[f(x) = \begin{cases} x & x\in \mathbb{Q}, \\ x + x^2 & x \notin \mathbb{Q}. \end{cases}\] Show that:

  1. \(f\) is not increasing on any open interval containing \(0\).
  2. \(f\) is differentiable at \(0\), and \(f’(0) = 1\).
  3. \(f\) is not differentiable anywhere else.

A quick remark before divulging the solution: as with proving that functions are or aren’t continuous, and as with proving that continuous limits do or don’t exist, one should generally use \(\epsilon \)’s and \(\delta \)’s when proving that something is differentiable and one should construct sequences to disprove it.

Solution

Let \(I\) be any open interval containing \(0\). Let \(x > 0\) be any irrational number in \(I\). Then \(f(x) = x + x^2\). Since there is a rational number between any two real numbers, and since \(I\) is open and contains \(x\), we may choose a rational number \(q \in I\) with \(x < q < x + x^2\). Then \(f(q) = q < x + x^2 = f(x)\) while \(x < q\), so we conclude that \(f\) is not increasing on \(I\).

Draw a picture! The graph of \(f\) is a parabola superimposed on a line, and the parabola lies above the line everywhere except at the origin, where the two touch.

To show that \(f\) is differentiable at \(0\), we should break out some \(\epsilon \)’s and \(\delta \)’s. We wish to show that the limit \[\lim _{h\to 0} \frac{f(h)}{h} \] exists (note that \(f(0) = 0\)). We observe that \[\frac{f(h)}{h} = \begin{cases} 1 & h \in \mathbb{Q},\ h\neq 0, \\ 1 + h & h\notin \mathbb{Q}. \end{cases}\] Thus, for any \(\epsilon > 0\), if \(0 < \left\lvert h \right\rvert < \epsilon \), then \(\left\lvert \frac{f(h)}{h} - 1 \right\rvert \leq \left\lvert h \right\rvert < \epsilon.\) We conclude by definition that \(f’(0) = 1 > 0\).

Finally, to show that \(f\) is not differentiable anywhere else, we let \(x_0 \in \mathbb{R}\), \(x_0 \neq 0\), and consider two cases. If \(x_0 \in \mathbb{Q}\), let \(\left( x_n \right) _{n\in \mathbb{N}}\) be a sequence of irrational numbers such that \(x_n\to x_0\). Then, \[\lim _{n\to\infty} \frac{f \left( x_n \right) - f \left( x_0 \right)} {x_n - x_0}\] does not exist because the numerator equals \(x_n + x_n^2 - x_0\to x_0^2 \neq 0\) while the denominator tends to \(0\)! A similar argument applies when \(x_0 \notin \mathbb{Q}\), just make the \(x_n\)’s rational. \(\square \)
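The sequence argument above can be mimicked numerically. A sketch, with a loud caveat: every floating-point number is rational, so the values of \(x_n\) below merely stand in for the genuinely irrational points \(x_n = x_0 + \sqrt{2}/10^n\), and we evaluate \(f(x_n) = x_n + x_n^2\) by hand accordingly:

```python
import math

x0 = 1.0  # a rational point, so f(x0) = x0 = 1

# Stand-ins for irrational x_n -> x0, on which f(x_n) = x_n + x_n**2.
quotients = []
for n in range(1, 7):
    xn = x0 + math.sqrt(2) / 10**n
    q = (xn + xn**2 - x0) / (xn - x0)  # (f(x_n) - f(x_0)) / (x_n - x_0)
    quotients.append(q)
# The quotients blow up: the numerator tends to x0**2 = 1 while the
# denominator tends to 0, so no finite limit exists and f is not
# differentiable at x0 = 1.
```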

The point here is that a function need not resemble its derivative. The infinitesimal local information about \(f’(0)\) cannot be used to say anything meaningful about the function.

However, if you have some more information about the derivatives of \(f\) on its whole domain, you can stitch them together to make a global statement.

Problem 4.

Show that if \(f : (a, b) \to \mathbb{R}\) is differentiable on \((a, b)\) and \(f’(x) > 0\) for all \(x\in (a, b)\), then \(f\) is necessarily increasing.

Is the converse true? That is, if \(f\) is increasing on \((a, b)\) and differentiable everywhere, does it follow that \(f’(x) > 0\) for all \(x\)? If it’s false, how can you fix it?

Hint
Use the mean value theorem.

Solution

Suppose towards a contradiction that \(f\) is not increasing: that is, there exist two points \(x < y\) in \((a, b)\) such that \(f(x) \geq f(y)\). Then, by the mean value theorem, there exists a point \(c\) between \(x\) and \(y\) such that \(f’(c) = \frac{f(y) - f(x)}{y-x}\). The numerator is nonpositive while the denominator is positive, hence \(f’(c) \leq 0\). This contradicts the derivative being positive everywhere.

The converse, however, is false: consider \(f(x) = x^3\).

The failure of the strict inequality \(f’(x) > 0\) comes from taking a limit. We have \[f’(x) = \lim _{h\to 0} \frac{f(x + h) - f(x)}{h}.\] Since \(f\) is increasing, if \(h\) is positive, then the numerator is positive, and if \(h\) is negative, then the numerator is negative. In either case, the numerator and denominator have the same sign, so we’re taking the limit of something strictly positive. But a limit of positive numbers may still be zero! Hence if \(f\) is increasing, one can only conclude that \(f’(x) \geq 0\) for all \(x\).

(One can in fact strengthen this slightly to say that \(f’(x) > 0\) “almost everywhere”: you can’t have too many points where \(f’(x) = 0\).)
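A quick numerical sketch of the failed converse, using the counterexample \(f(x) = x^3\): the function is strictly increasing, yet the difference quotient at \(0\) tends to \(0\) rather than a positive number.

```python
# f(x) = x**3 is strictly increasing, yet f'(0) = 0.
def f(x):
    return x**3

# Strictly increasing on a grid of sample points in [-1, 1]...
xs = [k / 100 for k in range(-100, 101)]
values = [f(x) for x in xs]
increasing = all(a < b for a, b in zip(values, values[1:]))

# ...but the difference quotient f(h)/h = h**2 at x = 0 tends to 0.
quotients = [f(10.0 ** (-k)) / 10.0 ** (-k) for k in range(1, 7)]
```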

Sceptics of Exercise 3 may believe that I am cheating by cooking up a function that’s differentiable at exactly one point. But in fact that’s not a problem at all:

Problem 5.

Let \(f : \mathbb{R}\to \mathbb{R}\) be defined as \[f(x) = \begin{cases} 0 & x = 0, \\ \frac{x}{2} + x^2 \cos \left( \frac{1}{x} \right) & x \neq 0. \end{cases}\] Show that \(f\) is differentiable everywhere and that \(f’(0) > 0\).

Be warned that the product rule does not work when \(x = 0\).

Solution

When \(x\neq 0\), we can apply the product rule and chain rule to show that \(f\) is differentiable.

When \(x = 0\), though, we have to resort to using the limit. We have that (since \(f(0) = 0\)) \[f’(0) = \lim _{h\to 0} \frac{f(h)}{h} = \lim _{h\to 0} \left( \frac{1}{2} + \left\lvert h \right\rvert \cos \left( \frac{1}{h} \right) \right).\] Since \(\left\lvert \cos \left( \frac{1}{h} \right) \right\rvert \leq 1\) for all \(h\neq 0\), it follows (e.g. by the squeeze theorem) that \(\lim _{h\to 0} \left\lvert h \right\rvert \cos \left( \frac{1}{h} \right) = 0\). Using the limit law of addition, we conclude that \(f’(0) = \frac{1}{2}\). \(\square\)

It’s true that \(f\) is not increasing on any interval containing \(0\). The product rule does apply at nonzero points, and one can compute that \[f’(x) = \begin{cases} \frac{1}{2} & x = 0, \\ \frac{1}{2} + 2x\cos \left( \frac{1}{x} \right) + \sin \left( \frac{1}{x} \right) & x \neq 0. \end{cases}\] If \(x = \frac{1}{2\pi n - \frac{\pi}{2}}\) for any positive integer \(n\), then \(f’(x) = -\frac{1}{2}\)! The \(\sin\) term becomes \(-1\) while the \(\cos\) term becomes \(0\). Thus \(f’\) is negative at many points close to \(0\). If \(f\) were increasing on an interval \(I\) containing \(0\), then we showed that \(f’(x) \geq 0\) on \(I\)! Thus it’s impossible for \(f\) to be increasing on any interval containing \(0\).
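These computations can be checked numerically. A sketch (the choice \(n = 10\) below is arbitrary, and the closed-form derivative is the one computed in the solution above):

```python
import math

def f(x):
    """The function from Problem 5."""
    return 0.0 if x == 0 else x / 2 + x**2 * math.cos(1 / x)

# Difference quotient at 0: f(h)/h = 1/2 + h*cos(1/h), which tends to 1/2.
quotient_at_zero = f(1e-8) / 1e-8

def fprime(x):
    """The closed-form derivative of f at a nonzero point."""
    return 0.5 + 2 * x * math.cos(1 / x) + math.sin(1 / x)

n = 10  # arbitrary positive integer
x = 1 / (2 * math.pi * n - math.pi / 2)
# Here 1/x = 2*pi*n - pi/2, so sin(1/x) = -1 and cos(1/x) = 0,
# giving fprime(x) = -1/2 even though fprime(0) = 1/2 > 0.
```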