Hunter Liu's Website

12. Some Final Practice Problems


I put together a handful of practice problems for the upcoming final. You can access them in pdf format if you want, but I’ll provide all the problems with some solutions and commentary in web format below.

Problem 1. The Babylonian Method

Define the sequence of functions \(\left\lbrace f_n(x) \right\rbrace _{n=0}^{\infty}\) on \(\left[ 1,\infty \right)\) by \(f_0(x) = x\) and \[f _{n+1}(x) = \frac{1}{2}\left( f_n(x) + \frac{x}{f_n(x)} \right).\] Show that for any compact set \(K\subseteq \left[ 1,\infty \right)\), \(\left\lbrace f_n(x) \right\rbrace\) converges uniformly on \(K\). Does \(\left\lbrace f_n(x) \right\rbrace\) converge uniformly on \(\left[ 1,\infty \right)\)?

Hint
Try applying Dini’s theorem. How can you verify the conditions?
Outline of Proof

I’ll only give an outline of the proof, as I think the details of each step are relatively straightforward. As stated in the above hint, we should attempt to use Dini’s theorem.

  1. Show that for any fixed \(x\), one has \(f_n(x) \geq \sqrt x\) for all \(n\). This can be done by induction and computing the derivative of the function \(g(y) = \frac{1}{2}\left( y+\frac{x}{y} \right)\).
  2. For any fixed \(x\), use the contraction mapping principle to show that \(f_n(x) \to \sqrt x\) as \(n\to\infty\).
  3. Show that for any fixed \(x\), the sequence \(\left\lbrace f_n(x) \right\rbrace _{n=1}^{\infty}\) is nonincreasing. This can be done with another inductive argument.

These three steps set up the conditions of Dini’s theorem, and from there one gets convergence on compact subsets.

To see that the sequence does not converge uniformly on \(\left[ 1,\infty \right)\), one should show (again using an inductive argument) that \[\lim _{x\to\infty} \frac{f_n(x)}{x} = 2 ^{-n}\] for all \(n\). Morally speaking, this says that \(f_n\) grows like a linear function of slope \(2 ^{-n}\), which (for fixed \(n\)) grows much faster than \(\sqrt x\) in the long run.
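The iteration itself is easy to experiment with numerically. Here is a short Python sketch (not part of the proof) illustrating both the monotone convergence to \(\sqrt x\) and the linear growth of slope \(2^{-n}\) for large \(x\); the sample points \(x = 2\) and \(x = 10^{12}\) are arbitrary.

```python
import math

def babylonian(x, n):
    """f_n(x): n steps of the Babylonian iteration starting from f_0(x) = x."""
    f = x
    for _ in range(n):
        f = 0.5 * (f + x / f)
    return f

# For x >= 1 the sequence f_1(x) >= f_2(x) >= ... decreases to sqrt(x):
x = 2.0
vals = [babylonian(x, n) for n in range(1, 7)]
assert all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))
assert vals[-1] >= math.sqrt(x) - 1e-12

# ...but for fixed n, f_n(x)/x tends to 2**(-n) as x grows, which is why
# the convergence cannot be uniform on [1, infinity):
print(babylonian(1e12, 3) / 1e12)  # close to 2**-3 = 0.125
```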

Some Remarks

This is based on the Babylonian method for computing square roots, also known as Heron’s method. One way of interpreting it is that it’s just running Newton’s method on the function \(g(y) = y^2 - x\), whose positive root is \(\sqrt x\).

The point of this problem is (partially) to remind everyone about the conditions of Dini’s theorem, particularly the condition that the domain is compact. It also illustrates how Dini’s theorem can fail when the domain is not compact!

Problem 2. Orthogonality of Even and Odd Functions

  1. Let \(f: \mathbb{R}\to \mathbb{R}\) be continuous, and suppose that for every odd function \(\phi \in C _{0}^{\infty}(\mathbb{R})\) (i.e., \(\phi (x) = - \phi (-x)\) for all \(x\)) one has \[\int f(x) \phi (x) dx = 0.\] Prove that \(f(x) = f(-x)\) for all \(x\).
  2. Let \(f\in C _{0}^{\infty}(\mathbb{R})\), and suppose one has \[\int f(x) \exp \left( ax^2 \right) dx = 0\] for all \(a\in \mathbb{R}\). Show that \(f\) is odd, i.e. \(f(x) = -f(-x)\) for all \(x\).

Hint
For part 2, perform a change of variables to show that \[\int _{0}^{\infty} \left( f(x)+f(-x) \right) \exp \left( ax^2 \right) dx = 0\] for all \(a\in \mathbb{R}\). Then, argue that this implies \(f(x)+f(-x) = 0\) for all \(x\).
Solution (Part 1)

For part 1, the idea is that the integral can “detect” whenever \(f(x) \neq f(-x)\). To make precise what this means, we have for any odd function \(\phi \in C _{0}^{\infty}(\mathbb{R})\) that \[\int _{-\infty}^{\infty} f(x) \phi (x) dx = \int _{0}^{\infty} \left( f(x) - f(-x) \right) \phi (x) dx.\] This can be done by breaking up the integral into positive and negative domains, then performing a change of variables to get \[\int _{-\infty}^{0} f(x) \phi (x) dx = - \int _{\infty}^{0} f(-x) \phi (-x) dx = - \int _{0}^{\infty} f(-x) \phi (x) dx.\] Thus, we may reframe the assumption as saying that \[\int _{0}^{\infty} \left( f(x)-f(-x) \right) \phi (x) dx = 0\] for any \(\phi \in C _{0}^{\infty} \left( \left[ 0,\infty \right) \right)\) satisfying \(\phi (0)=0\) (this is necessary by the oddness of \(\phi \)).

If \(f(y)-f(-y) \neq 0\) for some fixed \(y>0\) (this cannot happen for \(y=0\)), then \(f(x)-f(-x)\neq 0\) for \(x\in \left( y-\epsilon , y + \epsilon \right)\) with \(\epsilon \) sufficiently small. Take \(\phi \) to be a nonnegative bump function supported on this interval, and this yields a contradiction. Thus \(f(y) -f(-y) = 0\) for all \(y > 0\), and the claim follows.
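For concreteness, the bump function in the argument above can be written down explicitly. The following Python sketch uses the standard \(\exp\left( -1/(1-u^2) \right)\) bump; the centre \(y = 2\) and width \(\epsilon = 0.5\) are hypothetical illustrative values.

```python
import math

# A standard smooth bump, nonnegative and supported on (y - eps, y + eps);
# the centre y = 2 and width eps = 0.5 are arbitrary illustrative choices.
def bump(x, y=2.0, eps=0.5):
    u = (x - y) / eps
    if abs(u) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - u * u))

assert bump(2.0) > 0                           # strictly positive at the centre
assert bump(1.5) == 0.0 and bump(2.5) == 0.0   # vanishes outside (1.5, 2.5)
assert all(bump(1.5 + 0.1 * k) >= 0.0 for k in range(11))  # nonnegative
```

Integrating \(\left( f(x)-f(-x) \right)\) against such a \(\phi\) picks up only the values of \(f(x)-f(-x)\) on \(\left( y-\epsilon, y+\epsilon \right)\), which is exactly how the contradiction arises.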

Solution (Part 2)

We would like to make the same argument as in part 1; performing the same change of variables and using the fact that \(\exp \left( ax^2 \right)\) is even for any \(a\in \mathbb{R}\), we have \[\int _{0}^{\infty} \left( f(x)+f(-x) \right)\exp\left( ax^2 \right) dx = 0\] for all \(a\in \mathbb{R}\). Fix some \(M>0\) such that the support of \(f\) is contained in \(\left[ -M, M \right]\). Then, truncating the integral yields \[\int _{0}^{M} \left( f(x)+f(-x) \right)\exp\left( ax^2 \right) dx = 0\] for all \(a\in \mathbb{R}\).

We now apply Stone-Weierstrass. Let \(A\) be the set of functions of the form \[\phi (x) = \sum _{n=1}^{N} c_n \exp \left( a_nx^2 \right),\] where \(N\) is a natural number and \(a_n,c_n\in \mathbb{R}\). \(A\) contains all constant functions, it is a real subalgebra of \(C \left( \left[ 0,M \right] \right)\), and it separates points when restricted to \(\left[ 0,M \right]\). Thus, by Stone-Weierstrass, \(A\) is dense in \(C\left( \left[ 0,M \right] \right)\).

Note that \(\int _{0}^{M} \left( f(x)+f(-x) \right)\phi (x) dx = 0\) for all \(\phi \in A\) by assumption and by the linearity of the integral. But at the same time, we have for any \(\psi \in C \left( \left[ 0, M \right] \right)\) and any \(\epsilon > 0\) there exists some \(\phi _ \epsilon \in A\) such that \(\sup _{x\in \left[ 0,M \right]} \left\lvert \psi (x) - \phi _ \epsilon (x) \right\rvert < \epsilon \). Then, we get \[\int _{0}^{M} \left( f(x)+f(-x) \right) \psi (x) dx = \int _{0}^{M} \left( f(x)+f(-x) \right) \left( \psi (x) - \phi _ \epsilon (x) \right) dx,\] and this is bounded in magnitude by \[\epsilon \cdot \int _{0}^{M} \left\lvert f(x)+ f(-x) \right\rvert dx.\] Taking \(\epsilon \to 0\) shows that \(\int _{0}^{M} \left( f(x)+f(-x) \right) \psi (x) dx = 0\) for all \(\psi \in C \left( \left[ 0, M \right] \right)\). By the argument in part 1, it follows that \(f(x) + f(-x) = 0\) for all \(x\in \left[ 0, M \right]\).

Remark

The point of this problem is primarily to present a somewhat surprising application of Stone-Weierstrass.

More broadly, there is a recurring theme in analysis wherein some properties or quantities defined in terms of integrals are only well-understood for a certain class of functions (e.g., on smooth functions). In order to study these properties on a broader class of functions, one must perform some kind of approximation. Both mollifiers and this application of Stone-Weierstrass are great examples of this theme!

A second less significant point of this problem is that the space of continuous functions can be decomposed into an even and odd part: given a function \(f\), \(\frac{1}{2} \left( f(x)+f(-x) \right)\) and \(\frac{1}{2}\left( f(x)-f(-x) \right)\) are even and odd functions respectively that sum to \(f\). Moreover, this is an “orthogonal decomposition” with respect to the inner product \[\left\langle f,g \right\rangle=\int f(x) g(x) dx.\] This problem demonstrates the “orthogonality” of this decomposition!

Problem 3. Weierstrass M-test for Products

Let \(\left\lbrace f_n \right\rbrace\) be a sequence of bounded continuous functions \(\mathbb{R}\to \mathbb{R}\). Let \(M_n = \sup _{x\in \mathbb{R}} \left\lvert f_n(x) \right\rvert\), and suppose \(\sum _{n=1}^{\infty} M_n\) converges. Prove that \[F(x) = \prod _{n=1}^{\infty} \left( 1+f_n(x) \right) = \lim _{N\to\infty} \prod _{n=1}^{N} \left( 1+f_n(x) \right)\] converges for each \(x\) and is continuous.

Solution

The setup of the problem (and the title I added) should be reminiscent of the Weierstrass M-test; however, this deals with a product rather than a sum. We should therefore take logarithms to convert the product into a sum!

First, we may assume without loss of generality that \(M_n < 1\) for all \(n\). Indeed, since \(\sum _{n=1}^{\infty} M_n < \infty\), there must be some \(N_0\) such that \(n \geq N_0\) implies \(M_n < 1\). Then, we get \[F(x) = \prod _{n=1}^{N_0 - 1} \left( 1+ f_n(x) \right)\cdot \prod _{n=N_0}^{\infty} \left( 1+f_n(x) \right).\] From here, we need only show that the latter infinite product is continuous, as the first term is a finite product of continuous functions and hence continuous.

The point of this reduction is that \(M_n < 1\) for all \(n\) implies \(1+f_n(x) > 0\) for all \(n\) and \(x\). This allows us to actually take logarithms. We consider \[\log \left( \prod _{n=1}^{N} \left( 1+f_n(x) \right) \right) = \sum _{n=1}^{N} \log \left( 1+f_n(x) \right).\] Taking a second-order Taylor expansion of \(\log(1+y)\) around \(y=0\) yields \[\log \left( 1+y \right) = y - \frac{1}{2}y^2 \cdot \frac{1}{(1+\xi) ^2},\] where \(\xi \) is some number between \(0\) and \(y\). Hence, we get for each \(n\) and every \(x\) that \[\left\lvert \log \left( 1+f_n(x) \right) \right\rvert \leq M_n + \frac{1}{2} M_n^2 \cdot \frac{1}{\left( 1-M_n \right)^2}.\] This is summable: we know by assumption that \(\sum _{n=1}^{\infty} M_n < \infty\). Moreover, eventually \(M_n < \frac{1}{2}\), so eventually \(\frac{1}{\left( 1-M_n \right)^2} < 2\). Thus \(\sum \frac{1}{2}M_n^2 \cdot \frac{1}{\left( 1-M_n \right)^2}< \infty\), where here \(M_n < 1 \implies M_n^2 < M_n\) justifies the summability of \(M_n^2\).

Hence, \[\log\left( \prod _{n=1}^{N} \left( 1+f_n(x) \right) \right) = \sum _{n=1}^{N} \log \left( 1+f_n(x) \right)\] converges to a continuous function as \(N\to\infty\) by the Weierstrass M-test. Since \(\exp\) is continuous everywhere, we have that \[\exp \left( \sum _{n=1}^{N} \log \left( 1+f_n(x) \right) \right) = \prod _{n=1}^{N} \left( 1+f_n(x) \right)\] converges to a continuous function as \(N\to\infty\), as desired.
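The convergence of the partial products is easy to see numerically. The following Python sketch uses the hypothetical example \(f_n(x) = \sin(nx)/n^2\), so that \(M_n = 1/n^2\) is summable; the sample point \(x = 0.7\) is arbitrary.

```python
import math

# Hypothetical example: f_n(x) = sin(n*x) / n**2, so M_n = 1/n**2 and
# sum M_n converges; the partial products then converge uniformly.
def partial_product(x, N):
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 + math.sin(n * x) / n**2
    return p

# Successive partial products differ by roughly the tail sum of the M_n's
# (the tail of sum 1/n**2 past n = 100 is about 0.01):
x = 0.7
p100 = partial_product(x, 100)
p200 = partial_product(x, 200)
assert abs(p200 - p100) < 0.05
```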

Remark
The point of this problem was, on one hand, to remember that the Weierstrass M-test is overpowered, but on the other hand, to demonstrate an application of Taylor expansions. The Taylor expansion is a critical step here, as one needs to verify the summability of \(\sup \left\lvert \log \left( 1+f_n(x) \right) \right\rvert\). This supremum is NOT going to be \(\log \left( 1+M_n \right)\); rather, it will be controlled by \(\left\lvert\log \left( 1-M_n \right)\right\rvert\), and this ends up being greater than \(M_n\)!

Problem 4.

Let \(0 < \alpha \leq 1\). Recall the space of \(\alpha \)-Hölder continuous functions on \(\left[ 0,1 \right]\), denoted \(C^\alpha \left( \left[ 0,1 \right] \right)\): the set of continuous functions \(f:\left[ 0,1 \right]\to \mathbb{R}\) satisfying \[\left\lVert f \right\rVert _{C^\alpha }:= \sup _{x\in \left[ 0,1 \right]} \left\lvert f(x) \right\rvert + \sup _{x\neq y} \frac{\left\lvert f(x)-f(y) \right\rvert}{ \left\lvert x-y \right\rvert ^\alpha }< \infty.\] Let \(S = \left\lbrace f\in C^ \alpha \left( \left[ 0,1 \right] \right) : f(0) = 0 \right\rbrace\). Show that \(S\) is a closed subset of \(C^ \alpha \left( \left[ 0,1 \right] \right)\). Is \(S\) compact?

Remark
The problem as stated is kind of a triviality; I completely botched the problem statement, to tell you the truth. I think something more appropriate would be the following: let \[S = \left\lbrace f \in C ^{\alpha }\left( \left[ 0,1 \right] \right) : \left\lVert f \right\rVert _{C^ \alpha } \leq 1, f(0) = 0 \right\rbrace.\] Show that \(S\) is a compact subset of \(C \left( \left[ 0,1 \right] \right)\) with respect to the uniform norm. The point is to illustrate that boundedness at a single point plus equicontinuity is enough to use Ascoli’s theorem.

Problem 5.

Let \(\mathbb{H} = \left\lbrace \left( x,y \right)\in \mathbb{R}^2 : x, y \geq 0, x^2+y^2 \leq 1 \right\rbrace.\) Prove that for any \(\epsilon > 0\) and any continuous function \(f: \mathbb{H}\to \mathbb{R}\) there exists a function \(g\left( x,y \right)\) of the form \[g\left( x,y \right)=\sum _{m=0}^{N}\sum _{n=0}^{N} a _{mn} x ^{2m} y ^{2n}\] for some natural number \(N\) and real numbers \(a _{mn}\) satisfying \[\sup _{(x, y)\in \mathbb{H}} \left\lvert g\left( x,y \right)-f\left( x,y \right) \right\rvert < \epsilon .\] Does the result still hold if \(\mathbb{H}\) is replaced by \[\mathbb{D} = \left\lbrace (x, y)\in \mathbb{R}^2 : x^2+y^2 \leq 1 \right\rbrace ?\]

Partial Solution

You should recognise this as applying Stone-Weierstrass to \(C\left( \mathbb{H}; \mathbb{R} \right)\)! Verify that the conditions are met.

The result no longer holds when \(\mathbb{H}\) is replaced with \(\mathbb{D}\). Consider \(f(x, y) = x\). Let \(\epsilon = \frac{1}{2}\), and suppose a \(g\left( x,y \right)\) of the given form exists satisfying \[\sup _{(x,y)\in \mathbb{D}}\left\lvert f(x,y)-g(x, y) \right\rvert < \epsilon .\] Let \(A = g\left( \frac{1}{2}, 0 \right) = g\left( -\frac{1}{2}, 0 \right)\). Then, since \(f\left( \frac{1}{2},0 \right)=\frac{1}{2}\) and \(f\left( -\frac{1}{2},0 \right)=-\frac{1}{2}\), we have \[\left\lvert \frac{1}{2}-A\right\rvert < \frac{1}{2} \textrm{ and } \left\lvert -\frac{1}{2}-A \right\rvert < \frac{1}{2},\] which is impossible.

In particular, Stone-Weierstrass fails because the subalgebra of “even polynomials” in \(x\) and \(y\) no longer separates points.
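The failure on \(\mathbb{D}\) is easy to demonstrate numerically: any \(g\) of the allowed form involves only even powers of \(x\), so \(g(x,y) = g(-x,y)\), and its errors at \(\left( \frac12, 0 \right)\) and \(\left( -\frac12, 0 \right)\) cannot both be below \(\frac12\). A short Python sketch with a hypothetical approximant (the coefficients are arbitrary):

```python
# Any g of the allowed form uses only even powers of x and y, so g(x, y) = g(-x, y).
def g(x, y):
    # hypothetical approximant with arbitrary illustrative coefficients a_mn
    return 0.3 + 0.9 * x**2 - 0.2 * x**2 * y**2

def f(x, y):
    return x  # the target function from the solution above

assert g(0.5, 0.0) == g(-0.5, 0.0)  # evenness in x

# f(1/2, 0) = 1/2 and f(-1/2, 0) = -1/2, while g takes the same value A at both
# points; the two errors sum to at least 1, so one of them is at least 1/2:
A = g(0.5, 0.0)
assert max(abs(f(0.5, 0.0) - A), abs(f(-0.5, 0.0) - A)) >= 0.5
```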

Problem 6. UCLA Basic Exam, Fall 2021

Let \(a_n\) be any sequence of real numbers. Show that the sequence of functions \(f_n: \left[ 0,1 \right]\to \mathbb{R}\) defined by \[f_n(x) = \int _{0}^{x} \exp \left( t^8-6\cos^2 \left( a_nt \right) \right) dt\] has a subsequence that converges uniformly on \(\left[ 0,1 \right]\).

Solution

We have for any \(x\) that \[\left\lvert f_n'(x) \right\rvert = \exp \left( x^8-6\cos^2\left( a_nx \right) \right) \leq e\] by the fundamental theorem of calculus. Here, we used the monotonicity of \(\exp\) and the fact that \(x\in \left[ 0,1 \right]\) to get this upper bound.

Note that \(f_n(0)=0\) for all \(n\). By the mean value theorem, we have for all \(x\) and \(y\) that \[\left\lvert f_n(x)-f_n(y) \right\rvert \leq e \cdot \left\lvert x-y \right\rvert\] for any \(n\). In particular, taking \(y=0\) yields \[\left\lvert f_n(x) \right\rvert \leq e \cdot \left\lvert x \right\rvert \leq e\] for all \(n\) and for all \(x\in \left[ 0,1 \right]\). Since \(\left\lbrace f_n \right\rbrace\) is equicontinuous and uniformly bounded, we conclude by Ascoli’s theorem that it has a uniformly converging subsequence.
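As a quick numerical sanity check (not part of the proof), one can verify the pointwise bound on the integrand on a grid; the values of \(a\) below are arbitrary.

```python
import math

# For t in [0, 1] we have t**8 <= 1 and -6*cos(a*t)**2 <= 0, so the
# integrand exp(t**8 - 6*cos(a*t)**2) is at most exp(1) = e for every a.
def integrand(t, a):
    return math.exp(t**8 - 6 * math.cos(a * t)**2)

for a in (0.0, 1.0, 5.0, -3.0, 123.456):
    sup = max(integrand(k / 1000.0, a) for k in range(1001))
    assert sup <= math.e
```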

Remark

Problems like this may look scary, but the complicated integrand is an intimidation tactic. Ascoli’s theorem is pretty much the only tool that can give uniformly converging subsequences of functions, so one should expect some kind of equicontinuity to pop out.

The main takeaway from this problem is establishing uniform boundedness. Since \(f_n(0)=0\) for all \(n\), and since they were all equicontinuous, we were able to get this for free. More generally, if you know that a sequence of equicontinuous functions is bounded at a single point, you can apply the same argument above to get uniform boundedness!

Problem 7. UCLA Basic Exam, Fall 2020

Let \(S\) be a subset of \(\mathbb{R}^n\). Prove that the following are equivalent:

  1. For each \(p\in S\), there exists an open subset \(V\subseteq \mathbb{R}^n\) containing \(p\), an open subset \(U\subseteq \mathbb{R} ^{n-1}\), and an injective \(C^1\) function \(r:U\to \mathbb{R}^n\) such that \(r(U) = V\cap S\) and \(Dr\) is injective on \(U\).
  2. For each \(p\in S\), there exists an open neighbourhood \(V\subseteq \mathbb{R}^n\) of \(p\) and a \(C^1\) function \(f:V\to \mathbb{R}\) such that \(Df\) is nonzero on \(V\cap S\) and \(V\cap S = f ^{-1}\left( \left\lbrace 0 \right\rbrace \right)\).

Hint
Think of the implicit function theorem!

Shoutout to Hayden for helping with the solution in office hours (:

Solution (1 implies 2)

Let \(p\in S\), and let \(r, U, V\) be as in the problem statement. Let \(q\in U\) such that \(r(q)=p\).

Since \(r\) has injective derivative, the columns of \(\left. Dr\right\rvert_{q}\) span an \((n-1)\)-dimensional subspace of \(\mathbb{R}^n\), so at least one of the standard basis vectors \(e_1,\ldots, e_n\) lies outside this span. Suppose without loss of generality that \(e_n\) is linearly independent from the columns of \(\left. Dr\right\rvert_{q}\). Define the function \(F: U\times \mathbb{R} \to \mathbb{R}^n\) via \[F\left( x_1,\ldots, x_n \right) = r \left( x_1,\ldots, x _{n-1} \right) + x_n \cdot e_n.\] Then, we have that the first \(n-1\) columns of \(\left. DF\right\rvert_{(q, 0)}\) are just \(\left. Dr\right\rvert_{q}\) while the last column of \(\left. DF\right\rvert_{(q, 0)}\) is simply \(e_n\). Thus, \(\left. DF\right\rvert_{(q,0)}\) has \(n\) linearly independent columns, and it is invertible. By the inverse function theorem, there is an open neighbourhood \(V’\) of \(p\), an open subset \(U’\subseteq U\), and some interval \(I\) containing \(0\) such that \(F : U’\times I\to V’\) is a \(C^1\) bijection with \(C^1\) inverse. Let \(F ^{-1} : V’\to U’\times I\) be this inverse, and write its components as \(F ^{-1} = \left( f_1,\ldots, f_n \right)\).

Since \(F ^{-1}\circ F\) is the identity map, we in particular have \[f_n \circ F = f_n \left( r\left( x_1,\ldots, x _{n-1} \right) + x_n e_n \right) = x_n.\] Thus, \(f_n\left( y_1,\ldots, y_n \right) = 0\) precisely when \(\left( y_1,\ldots, y_n \right)\) is in the image of \(r\). That is to say, \(f_n ^{-1} \left( \left\lbrace 0 \right\rbrace \right) = V’\cap S\), the image of \(r\).

Solution Sketch (2 implies 1)

This is the classic implicit function theorem!

Fix \(p\in S\), and let \(V\) be an open neighbourhood of \(p\) and \(f:V\to \mathbb{R}\) as in the problem statement. Since \(Df\neq 0\) at \(p\), at least one partial derivative of \(f\) is nonzero at \(p\). We may assume without loss of generality that \(\frac{\partial f}{\partial x_n}(p)\neq 0\).

By the implicit function theorem, there exists an open neighbourhood \(V’\subseteq V\) of \(p\), an open subset \(U\subseteq \mathbb{R} ^{n-1}\), and a \(C^1\) function \(\phi : U\to \mathbb{R}\) such that \[f\left( x_1,\ldots, x _{n-1}, \phi \left( x_1,\ldots, x _{n-1} \right) \right) = 0\] on \(U\).

Then, we simply define \[r\left( x_1,\ldots, x _{n-1} \right) = \left( x_1,\ldots, x _{n-1}, \phi \left( x_1,\ldots, x _{n-1} \right) \right).\]

There are two things to verify:

  1. \(r(U) = V’\cap S\), and
  2. \(Dr\) is injective everywhere on \(U\).

These cannot be verified from the conclusion of the implicit function theorem directly; however, replicating the proof of the implicit function theorem (i.e. shoehorning an application of the inverse function theorem) and retaining the additional information there will get you all the way to the end.