I put together a handful of practice problems for the upcoming final. You can access them in PDF format if you want, but I'll provide all the problems with some solutions and commentary in web format below.
Problem 1. The Babylonian Method
Define the sequence of functions $\{f_n(x)\}_{n=0}^\infty$ on $[1,\infty)$ by $f_0(x) = x$ and
$$f_{n+1}(x) = \frac{1}{2}\left(f_n(x) + \frac{x}{f_n(x)}\right).$$
Show that for any compact set $K \subseteq [1,\infty)$, $\{f_n(x)\}$ converges uniformly on $K$. Does $\{f_n(x)\}$ converge uniformly on $[1,\infty)$?
Hint
Try applying Dini’s theorem. How can you verify the conditions?
Outline of Proof
I’ll only give an outline of the proof, as I think the details of each step are relatively straightforward. As stated in the above hint, we should attempt to use Dini’s theorem.
1. Show that for any fixed $x$, one has $f_n(x) \ge \sqrt{x}$ for all $n$. This can be done by induction and by computing the derivative of the function $g(y) = \frac{1}{2}\left(y + \frac{x}{y}\right)$; its minimum over $y > 0$ is $\sqrt{x}$.
2. Show that for any fixed $x$, the sequence $\{f_n(x)\}_{n=1}^\infty$ is nonincreasing. This can be done with another inductive argument.
3. Conclude that for each fixed $x$, the sequence converges, and identify the pointwise limit: passing to the limit in the recursion gives $L(x) = \frac{1}{2}\left(L(x) + \frac{x}{L(x)}\right)$, so $L(x) = \sqrt{x}$, which is continuous.
These three steps set up the conditions of Dini's theorem, and from there one gets uniform convergence on compact subsets.
To see that the sequence does not converge uniformly on $[1,\infty)$, one should show (again using an inductive argument) that
$$\lim_{x\to\infty} \frac{f_n(x)}{x} = 2^{-n}$$
for all $n$. Morally speaking, this says that $f_n$ grows like a linear function of slope $2^{-n}$, which (for fixed $n$) grows much faster than $\sqrt{x}$ in the long run.
Some Remarks
This is based on the Babylonian method for computing square roots, also known as Heron's method. One way of interpreting it is that it is just Newton's method applied to the function $y \mapsto y^2 - x$, whose positive root is $\sqrt{x}$.
The point of this problem is (partially) to remind everyone about the conditions of Dini's theorem, particularly the condition that the domain is compact. It is an example of the conclusion of Dini's theorem failing when the domain is not compact!
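If you want to see the iteration in action, here is a minimal numerical sketch in Python (the choices of $x$ and of the number of iterations are arbitrary). It shows the iterates decreasing to $\sqrt{x}$ for a fixed $x$, and also shows $f_n(x)/x$ approaching $2^{-n}$ for large $x$, which is exactly the obstruction to uniform convergence on $[1,\infty)$.

```python
import math

def babylonian(x, n):
    """Compute f_n(x) for f_0(x) = x and f_{k+1}(x) = (f_k(x) + x / f_k(x)) / 2."""
    f = x
    for _ in range(n):
        f = 0.5 * (f + x / f)
    return f

# For fixed x, the iterates decrease monotonically towards sqrt(x).
for n in range(6):
    print(n, babylonian(2.0, n), math.sqrt(2.0))

# For fixed n, f_n(x) / x tends to 2^{-n} as x grows, so f_n stays far from
# sqrt(x) for large x: no uniform convergence on [1, infinity).
for x in (1e2, 1e6, 1e12):
    print(x, babylonian(x, 3) / x, 2.0 ** -3)
```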
Problem 2. Orthogonality of Even and Odd Functions
1. Let $f : \mathbb{R} \to \mathbb{R}$ be continuous, and suppose that for every odd function $\phi \in C_0^\infty(\mathbb{R})$ (i.e., $\phi(x) = -\phi(-x)$ for all $x$) one has
$$\int f(x)\phi(x)\,dx = 0.$$
Prove that $f(x) = f(-x)$ for all $x$.
2. Let $f \in C_0^\infty(\mathbb{R})$, and suppose one has
$$\int f(x)\exp(ax^2)\,dx = 0$$
for all $a \in \mathbb{R}$. Show that $f$ is odd, i.e. $f(x) = -f(-x)$ for all $x$.
Hint
For part 2, perform a change of variables to show that
$$\int_0^\infty (f(x) + f(-x))\exp(ax^2)\,dx = 0$$
for all $a \in \mathbb{R}$. Then, argue that this implies $f(x) + f(-x) = 0$ for all $x$.
Solution (Part 1)
For part 1, the idea is that the integral can “detect” whenever $f(x) \neq f(-x)$. To make this precise, we have for any odd function $\phi \in C_0^\infty(\mathbb{R})$ that
$$\int_{-\infty}^\infty f(x)\phi(x)\,dx = \int_0^\infty (f(x) - f(-x))\phi(x)\,dx.$$
This can be done by breaking up the integral into positive and negative domains, then performing the change of variables $x \mapsto -x$ to get
$$\int_{-\infty}^0 f(x)\phi(x)\,dx = -\int_\infty^0 f(-x)\phi(-x)\,dx = -\int_0^\infty f(-x)\phi(x)\,dx.$$
Thus, we may reframe the assumption as saying that
$$\int_0^\infty (f(x) - f(-x))\phi(x)\,dx = 0$$
for any $\phi \in C_0^\infty([0,\infty))$ satisfying $\phi(0) = 0$ (this is forced by the oddness of $\phi$).
If $f(y) - f(-y) \neq 0$ for some fixed $y > 0$ (this cannot happen at $y = 0$), then $f(x) - f(-x) \neq 0$, with the same sign, for $x \in (y - \epsilon, y + \epsilon)$ with $\epsilon$ sufficiently small. Take $\phi$ to be a nonnegative bump function supported on this interval (extended to an odd function on $\mathbb{R}$); the integral above is then nonzero, a contradiction. Thus $f(y) - f(-y) = 0$ for all $y > 0$, and the claim follows.
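By the way, the identity at the start of this solution is easy to sanity-check numerically. Here is a quick sketch in Python; the choice of $f$, the particular odd test function, and the crude Riemann sum standing in for the integral are all just illustrative.

```python
import math

# An arbitrary continuous f (deliberately not even, so both sides are nonzero).
def f(x):
    return x ** 2 + math.sin(3 * x)

# A smooth odd test function: a bump supported in (0.2, 0.8), extended oddly.
def phi(x):
    def bump(t):
        return math.exp(-1.0 / ((t - 0.2) * (0.8 - t))) if 0.2 < t < 0.8 else 0.0
    return bump(x) - bump(-x)

def riemann(g, a, b, n=20000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

lhs = riemann(lambda x: f(x) * phi(x), -1.0, 1.0)
rhs = riemann(lambda x: (f(x) - f(-x)) * phi(x), 0.0, 1.0)
print(lhs, rhs)  # the two values agree up to quadrature error
```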
Solution (Part 2)
We would like to make the same argument as in part 1; performing the same change of variables and using the fact that $\exp(ax^2)$ is even for any $a \in \mathbb{R}$, we have
$$\int_0^\infty (f(x) + f(-x))\exp(ax^2)\,dx = 0$$
for all $a \in \mathbb{R}$. Fix some $M > 0$ such that the support of $f$ is contained in $[-M, M]$. Then, truncating the integral yields
$$\int_0^M (f(x) + f(-x))\exp(ax^2)\,dx = 0$$
for all $a \in \mathbb{R}$.
We now apply Stone-Weierstrass. Let $A$ be the set of functions of the form
$$\phi(x) = \sum_{n=1}^N c_n \exp(a_n x^2),$$
where $N$ is a natural number and $a_n, c_n \in \mathbb{R}$. $A$ contains all constant functions, it is a real subalgebra of $C([0,M])$, and it separates points when restricted to $[0,M]$. Thus, by Stone-Weierstrass, $A$ is dense in $C([0,M])$.
Note that $\int_0^M (f(x) + f(-x))\phi(x)\,dx = 0$ for all $\phi \in A$ by assumption and by the linearity of the integral. At the same time, for any $\psi \in C([0,M])$ and any $\epsilon > 0$ there exists some $\phi_\epsilon \in A$ such that $\sup_{x \in [0,M]} |\psi(x) - \phi_\epsilon(x)| < \epsilon$. Then, we get
$$\int_0^M (f(x) + f(-x))\psi(x)\,dx = \int_0^M (f(x) + f(-x))(\psi(x) - \phi_\epsilon(x))\,dx,$$
and this is bounded in magnitude by
$$\epsilon \cdot \int_0^M |f(x) + f(-x)|\,dx.$$
Taking $\epsilon \to 0$ shows that $\int_0^M (f(x) + f(-x))\psi(x)\,dx = 0$ for all $\psi \in C([0,M])$. By the argument in part 1, it follows that $f(x) + f(-x) = 0$ for all $x \in [0,M]$; since $f$ vanishes outside $[-M,M]$, this holds for all $x$, so $f$ is odd.
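The density of $A$ can also be seen numerically. Below is a rough sketch in Python (using numpy); the target function $\psi$, the interval, and the fixed grid of exponents $a_n$ are all arbitrary illustrative choices, and the fit is done by least squares rather than anything clever.

```python
import numpy as np

M = 1.0
x = np.linspace(0.0, M, 400)
psi = np.sin(np.pi * x)          # an arbitrary continuous target on [0, M]

# A finite slice of the algebra A: the span of exp(a_n x^2) over a grid of
# exponents a_n (the grid contains 0, so constants are included).
a = np.linspace(-4.0, 4.0, 9)
B = np.exp(np.outer(x ** 2, a))  # B[i, j] = exp(a_j * x_i^2)

# Least-squares fit of psi by an element of A; Stone-Weierstrass says the
# sup-norm error can be driven down by enlarging the grid of exponents.
c, *_ = np.linalg.lstsq(B, psi, rcond=None)
print("sup error on the grid:", np.max(np.abs(B @ c - psi)))
```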
Remark
The point of this problem is primarily to present a somewhat surprising application of Stone-Weierstrass.
More broadly, there is a recurring theme in analysis wherein some properties or quantities defined in terms of integrals are only well-understood for a certain class of functions (e.g., on smooth functions). In order to study these properties on a broader class of functions, one must perform some kind of approximation. Both mollifiers and this application of Stone-Weierstrass are great examples of this theme!
A second, less significant point of this problem is that the space of continuous functions can be decomposed into an even and an odd part: given a function $f$, $\frac{1}{2}(f(x) + f(-x))$ and $\frac{1}{2}(f(x) - f(-x))$ are even and odd functions respectively that sum to $f$. Moreover, this is an “orthogonal decomposition” with respect to the inner product
$$\langle f, g \rangle = \int f(x)g(x)\,dx.$$
This problem demonstrates the “orthogonality” of this decomposition!
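Here is a tiny numerical illustration of that orthogonality, again a sketch in Python; the choice of $f$ and the finite interval standing in for $\mathbb{R}$ are arbitrary (I picked a rapidly decaying $f$ so the truncation is harmless).

```python
import math

def f(x):
    return math.exp(-x * x) * (x ** 3 + x + 2)   # arbitrary rapidly decaying f

f_even = lambda x: 0.5 * (f(x) + f(-x))
f_odd = lambda x: 0.5 * (f(x) - f(-x))

def inner(g, h, a=-8.0, b=8.0, n=40000):
    """Crude Riemann sum for the inner product <g, h> = int g(x) h(x) dx."""
    step = (b - a) / n
    return sum(g(a + (i + 0.5) * step) * h(a + (i + 0.5) * step) for i in range(n)) * step

print(inner(f_even, f_odd))                                       # ~ 0
print(inner(f_even, f_even) + inner(f_odd, f_odd), inner(f, f))   # Pythagoras
```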
Problem 3. Weierstrass M-test for Products
Let $\{f_n\}$ be a sequence of bounded continuous functions $\mathbb{R} \to \mathbb{R}$. Let $M_n = \sup_{x \in \mathbb{R}} |f_n(x)|$, and suppose $\sum_{n=1}^\infty M_n$ converges. Prove that
$$F(x) = \prod_{n=1}^\infty (1 + f_n(x)) = \lim_{N\to\infty} \prod_{n=1}^N (1 + f_n(x))$$
converges for each $x$ and is continuous.
Solution
The setup of the problem (and the title I added) should be reminiscent of the Weierstrass M-test; however, this deals with a product rather than a sum. We should therefore take logarithms to convert the product into a sum!
First, we may assume without loss of generality that $M_n < 1$ for all $n$. Since $\sum_{n=1}^\infty M_n < \infty$, we have $M_n \to 0$, so there must be some $N_0$ such that $n \ge N_0$ implies $M_n < 1$. Then, we get
$$F(x) = \prod_{n=1}^{N_0 - 1} (1 + f_n(x)) \cdot \prod_{n=N_0}^\infty (1 + f_n(x)).$$
From here, we need only show that the latter infinite product converges to a continuous function, as the first factor is a finite product of continuous functions and hence continuous.
The point of this reduction is that $M_n < 1$ for all $n$ implies $1 + f_n(x) > 0$ for all $n$ and $x$. This allows us to actually take logarithms. We consider
$$\log\left(\prod_{n=1}^N (1 + f_n(x))\right) = \sum_{n=1}^N \log(1 + f_n(x)).$$
Taylor expanding $\log(1+y)$ around $y = 0$ with a second-order (Lagrange) remainder yields
$$\log(1+y) = y - \frac{1}{2}y^2 \cdot \frac{1}{(1+\xi)^2},$$
where $\xi$ is some number between $0$ and $y$. Hence, we get for each $n$ and every $x$ that
$$|\log(1 + f_n(x))| \le M_n + \frac{1}{2} M_n^2 \cdot \frac{1}{(1 - M_n)^2}.$$
This is summable: we know by assumption that $\sum_{n=1}^\infty M_n < \infty$. Moreover, eventually $M_n < \frac{1}{2}$, so eventually $\frac{1}{(1-M_n)^2} < 4$. Thus $\sum \frac{1}{2} M_n^2 \cdot \frac{1}{(1-M_n)^2} < \infty$, where here $M_n < 1 \implies M_n^2 < M_n$ justifies the summability of $M_n^2$.
Hence,
$$\log\left(\prod_{n=1}^N (1 + f_n(x))\right) = \sum_{n=1}^N \log(1 + f_n(x))$$
converges to a continuous function as $N \to \infty$ by the Weierstrass M-test. Since $\exp$ is continuous everywhere, we have that
$$\exp\left(\sum_{n=1}^N \log(1 + f_n(x))\right) = \prod_{n=1}^N (1 + f_n(x))$$
converges to a continuous function as $N \to \infty$, as desired.
Remark
The point of this problem was, on one hand, to remember that the Weierstrass M-test is overpowered, but on the other hand, to demonstrate an application of Taylor expansions. The Taylor expansion is a critical step here, as one needs to verify the summability of $\sup_x |\log(1 + f_n(x))|$. This supremum is NOT going to be $\log(1 + M_n)$; rather, it will be controlled by $|\log(1 - M_n)|$, and this ends up being greater than $M_n$!
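To see the uniform Cauchy behaviour concretely, here is a small sketch in Python; the specific choice $f_n(x) = \cos(nx)/n^2$ (so $M_n = 1/n^2$) and the grid standing in for $\mathbb{R}$ are just illustrative.

```python
import math

def f(n, x):
    return math.cos(n * x) / n ** 2      # M_n = 1 / n^2, so sum M_n < infinity

def partial_product(N, x):
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 + f(n, x)
    return p

xs = [2 * math.pi * i / 200 for i in range(201)]   # a grid standing in for R
for N in (5, 10, 20, 40, 80):
    # Sup-distance (on the grid) between the N-th and 2N-th partial products;
    # it shrinks as N grows, reflecting the uniform Cauchy property above.
    d = max(abs(partial_product(2 * N, x) - partial_product(N, x)) for x in xs)
    print(N, d)
```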
Problem 4.
Let $0 < \alpha \le 1$. Recall the space of $\alpha$-Hölder continuous functions on $[0,1]$, denoted $C^\alpha([0,1])$: the set of continuous functions $f : [0,1] \to \mathbb{R}$ satisfying
$$\|f\|_{C^\alpha} := \sup_{x \in [0,1]} |f(x)| + \sup_{x \neq y} \frac{|f(x) - f(y)|}{|x - y|^\alpha} < \infty.$$
Let $S = \{f \in C^\alpha([0,1]) : f(0) = 0\}$. Show that $S$ is a closed subset of $C^\alpha([0,1])$. Is $S$ compact?
Remark
The problem as stated is kind of a triviality. I completely botched the problem statement, to tell you the truth. I think something more appropriate would be: let
$$S = \{f \in C^\alpha([0,1]) : \|f\|_{C^\alpha} \le 1,\ f(0) = 0\}.$$
Show that $S$ is a compact subset of $C([0,1])$ with respect to the uniform norm. The point is to illustrate that boundedness at a single point plus equicontinuity is enough to use Ascoli's theorem.
Problem 5.
Let $H = \{(x,y) \in \mathbb{R}^2 : x, y \ge 0,\ x^2 + y^2 \le 1\}$. Prove that for any $\epsilon > 0$ and any continuous function $f : H \to \mathbb{R}$ there exists a function $g(x,y)$ of the form
$$g(x,y) = \sum_{m=0}^N \sum_{n=0}^N a_{mn} x^{2m} y^{2n}$$
for some natural number $N$ and real numbers $a_{mn}$ satisfying
$$\sup_{(x,y) \in H} |g(x,y) - f(x,y)| < \epsilon.$$
Does the result still hold if H is replaced by
$$D = \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 \le 1\}?$$
Partial Solution
You should recognise this as applying Stone-Weierstrass to $C(H;\mathbb{R})$! Verify that the conditions are met; the key point is that on $H$, where $x, y \ge 0$, the map $(x,y) \mapsto (x^2, y^2)$ is injective, so the algebra of such $g$ separates points.
The result no longer holds when $H$ is replaced with $D$. Consider $f(x,y) = x$. Let $\epsilon = \frac{1}{2}$, and suppose a $g(x,y)$ of the given form exists satisfying
$$\sup_{(x,y) \in D} |f(x,y) - g(x,y)| < \epsilon.$$
Let $A = g(\tfrac{1}{2}, 0) = g(-\tfrac{1}{2}, 0)$. Then, since $f(\tfrac{1}{2}, 0) = \tfrac{1}{2}$ and $f(-\tfrac{1}{2}, 0) = -\tfrac{1}{2}$, we have
$$\left|\tfrac{1}{2} - A\right| < \tfrac{1}{2} \quad\text{and}\quad \left|-\tfrac{1}{2} - A\right| < \tfrac{1}{2},$$
which is impossible.
In particular, Stone-Weierstrass fails because the subalgebra of “even polynomials” in $x$ and $y$ no longer separates points: on $D$ it cannot distinguish, for instance, $(\tfrac{1}{2}, 0)$ from $(-\tfrac{1}{2}, 0)$.
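That obstruction is easy to see concretely. Here is a short check in Python (with randomly chosen coefficients $a_{mn}$ and $N = 3$, both arbitrary) that any $g$ of the given form takes the same value at $(\tfrac{1}{2}, 0)$ and $(-\tfrac{1}{2}, 0)$.

```python
import random

N = 3
a = [[random.uniform(-1, 1) for _ in range(N + 1)] for _ in range(N + 1)]

def g(x, y):
    """g(x, y) = sum_{m,n} a_{mn} x^{2m} y^{2n} -- a function of x^2 and y^2 only."""
    return sum(a[m][n] * x ** (2 * m) * y ** (2 * n)
               for m in range(N + 1) for n in range(N + 1))

# Since g only sees x^2 and y^2, it cannot separate (1/2, 0) from (-1/2, 0).
print(g(0.5, 0.0), g(-0.5, 0.0))   # always equal
```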
Problem 6. UCLA Basic Exam, Fall 2021
Let $a_n$ be any sequence of real numbers. Show that the sequence of functions $f_n : [0,1] \to \mathbb{R}$ defined by
$$f_n(x) = \int_0^x \exp\left(t^8 - 6\cos^2(a_n t)\right)dt$$
has a subsequence that converges uniformly on $[0,1]$.
Solution
We have for any $x$ that
$$|f_n'(x)| = \exp\left(x^8 - 6\cos^2(a_n x)\right) \le e$$
by the fundamental theorem of calculus. Here, we used the monotonicity of $\exp$ and the fact that $x \in [0,1]$ to get this upper bound.
Note that $f_n(0) = 0$ for all $n$. By the mean value theorem, we have for all $x$ and $y$ that
$$|f_n(x) - f_n(y)| \le e \cdot |x - y|$$
for any $n$. In particular, taking $y = 0$ yields
$$|f_n(x)| \le e \cdot |x| \le e$$
for all $n$ and for all $x \in [0,1]$. Since $\{f_n\}$ is equicontinuous and uniformly bounded, we conclude by Ascoli's theorem that it has a uniformly converging subsequence.
Remark
Problems like this may look scary, but the complicated integrand is an intimidation tactic. Ascoli’s theorem is pretty much the only tool that can give uniformly converging subsequences of functions, so one should expect some kind of equicontinuity to pop out.
The main takeaway from this problem is establishing uniform boundedness. Since $f_n(0) = 0$ for all $n$, and since the $f_n$ were all equicontinuous, we were able to get this for free. More generally, if you know that a sequence of equicontinuous functions is bounded at a single point, you can apply the same argument above to get uniform boundedness!
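As a quick sanity check of the Lipschitz bound, here is a sketch in Python; the particular values of $a_n$, the random sample points, and the crude midpoint rule standing in for the integral are all arbitrary choices.

```python
import math
import random

def integrand(t, a):
    return math.exp(t ** 8 - 6 * math.cos(a * t) ** 2)

def f(x, a, steps=2000):
    """Midpoint-rule approximation of f_n(x) = int_0^x exp(t^8 - 6 cos^2(a t)) dt."""
    if x <= 0:
        return 0.0
    h = x / steps
    return sum(integrand((i + 0.5) * h, a) for i in range(steps)) * h

for a in (0.1, 3.7, 1000.0):          # arbitrary stand-ins for a_n
    xs = [random.random() for _ in range(30)]
    vals = {x: f(x, a) for x in xs}
    # The difference quotients never exceed e, whatever a is.
    ratio = max(abs(vals[x] - vals[y]) / abs(x - y) for x in xs for y in xs if x != y)
    print(a, ratio, "<= e =", math.e)
```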
Problem 7. UCLA Basic Exam, Fall 2020
Let $S$ be a subset of $\mathbb{R}^n$. Prove that the following are equivalent:
1. For each $p \in S$, there exists an open subset $V \subseteq \mathbb{R}^n$ containing $p$, an open subset $U \subseteq \mathbb{R}^{n-1}$, and an injective $C^1$ function $r : U \to \mathbb{R}^n$ such that $r(U) = V \cap S$ and $Dr$ is injective on $U$.
2. For each $p \in S$, there exists an open neighbourhood $V \subseteq \mathbb{R}^n$ of $p$ and a $C^1$ function $f : V \to \mathbb{R}$ such that $Df$ is nonzero on $V \cap S$ and $V \cap S = f^{-1}(\{0\})$.
Hint
Think of the implicit function theorem!
Shoutout to Hayden for helping with the solution in office hours (:
Solution (1 implies 2)
Let $p \in S$, and let $r, U, V$ be as in the problem statement. Let $q \in U$ be such that $r(q) = p$.
Since $r$ has injective derivative, the columns of $Dr|_q$ span an $(n-1)$-dimensional subspace of $\mathbb{R}^n$, so at least one of the standard basis vectors $e_1, \dots, e_n$ lies outside their span. Suppose without loss of generality that the columns of $Dr|_q$ together with $e_n$ are linearly independent. Define the function $F : U \times \mathbb{R} \to \mathbb{R}^n$ via
$$F(x_1, \dots, x_n) = r(x_1, \dots, x_{n-1}) + x_n \cdot e_n.$$
Then, we have that the first $n-1$ columns of $DF|_{(q,0)}$ are just the columns of $Dr|_q$, while the last column of $DF|_{(q,0)}$ is simply $e_n$. Thus, $DF|_{(q,0)}$ has $n$ linearly independent columns, and it is invertible. By the inverse function theorem, there is an open neighbourhood $V'$ of $p$, an open subset $U' \subseteq U$, and some open interval $I$ containing $0$ such that $F : U' \times I \to V'$ is a $C^1$ bijection with $C^1$ inverse. Let $F^{-1} : V' \to U' \times I$ be this inverse, and write its components as $F^{-1} = (f_1, \dots, f_n)$.
Since $F^{-1} \circ F$ is the identity map, we in particular have
$$(f_n \circ F)(x_1, \dots, x_n) = f_n\big(r(x_1, \dots, x_{n-1}) + x_n e_n\big) = x_n.$$
Thus, for $(y_1, \dots, y_n) \in V'$, $f_n(y_1, \dots, y_n) = 0$ precisely when $(y_1, \dots, y_n)$ is in the image of $r$. That is to say, $f_n^{-1}(\{0\}) = V' \cap S$, the image of $r$.
Solution Sketch (2 implies 1)
This is the classic implicit function theorem!
Fix $p \in S$, and let $V$ be an open neighbourhood of $p$ and $f : V \to \mathbb{R}$ as in the problem statement. Since $Df \neq 0$ at $p$, at least one partial derivative of $f$ is nonzero at $p$. We may assume without loss of generality that $\frac{\partial f}{\partial x_n}(p) \neq 0$.
By the implicit function theorem, there exists an open neighbourhood $V' \subseteq V$ of $p$, an open subset $U \subseteq \mathbb{R}^{n-1}$, and a $C^1$ function $\phi : U \to \mathbb{R}$ such that
$$f(x_1, \dots, x_{n-1}, \phi(x_1, \dots, x_{n-1})) = 0$$
on $U$.
Then, we simply define
$$r(x_1, \dots, x_{n-1}) = (x_1, \dots, x_{n-1}, \phi(x_1, \dots, x_{n-1})).$$
There are two things to verify:
1. $r(U) = V' \cap S$, and
2. $Dr$ is injective everywhere on $U$.
These cannot be verified from the conclusion of the implicit function theorem directly; however, replicating the proof of the implicit function theorem (i.e. shoehorning an application of the inverse function theorem) and retaining the additional information there will get you all the way to the end.
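To make the two descriptions concrete, here is a small sketch in Python for the unit circle in $\mathbb{R}^2$ near $p = (0, 1)$ (the choice of $S$, the chart, and the level-set function are of course just an example, not part of the problem): the level-set description uses $f(x,y) = x^2 + y^2 - 1$, the parametrised description uses $r(t) = (t, \sqrt{1 - t^2})$, and one can check that $f \circ r \equiv 0$ while $Dr$ never vanishes on the chart.

```python
import math

def f(x, y):
    """Description 2: S is the zero set of f near p = (0, 1)."""
    return x ** 2 + y ** 2 - 1.0

def r(t):
    """Description 1: a local C^1 parametrisation of S near p, for t in (-1, 1)."""
    return (t, math.sqrt(1.0 - t ** 2))

def Dr(t):
    """Derivative of r; as a linear map R -> R^2 it is injective iff it is nonzero."""
    return (1.0, -t / math.sqrt(1.0 - t ** 2))

for t in (-0.5, 0.0, 0.3, 0.7):
    x, y = r(t)
    print(t, f(x, y), Dr(t))   # f(r(t)) == 0 up to rounding, and Dr(t) != (0, 0)
```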