Hunter Liu's Website

Spring 2021 Analysis Qual Solutions

Date Written: March 6, 2024; Last Modified: March 6, 2024

It’s been a while since I wrote up solutions to a qual. This takes a lot of time and energy! But it’s a good tool for studying. Hopefully this set of solutions is more or less correct.

Not all of these solutions are mine; Ben Johnsrude has a great set of qual solutions that I have been using as a reference. In particular, the solutions to 11b and 8 were borrowed from him!

   Problem    Description
Problem 1 An annoying but straightforward application of Lebesgue-Radon-Nikodym.
Problem 2 A somewhat tricky problem about mutually singular measures. My solution uses some facts about convergence in measure and Egorov’s theorem.
Problem 3 Showing that a sequence of integral transforms converges both almost everywhere and in \(L^2\). This uses a pretty thematic technique of approximating on a dense subset.
Problem 4 A classic application of Arzelà–Ascoli.
Problem 5 An unremarkable problem about the absolute convergence of a series.
Problem 6 A very technical problem about uniform convergence.
Problem 7 A question that tests if you know how to solve the Dirichlet problem on a funny domain. This uses the Poisson kernel.
Problem 8 An impossible problem that I don’t know how to solve.
Problem 9 A classic Schwarz Lemma problem.
Problem 10 A problem involving Morera’s theorem, meromorphic continuation, and computing residues.
Problem 11 A problem about iterates of a function, which uses essential singularities and Picard’s great theorem.
Problem 12 An intimidating yet straightforward application of Jensen’s formula.

Problem 1.

Let \(\mu\) be a positive Borel probability measure on \([0, 1]\), and let \[C=\sup \left\lbrace \mu(E) : E\subset [0, 1]\textrm{ with }\left\lvert E \right\rvert=\frac{1}{2} \right\rbrace,\] where \(\left\lvert E \right\rvert\) denotes the Lebesgue measure of \(E\). Show that there exists a Borel set \(F\subset [0, 1]\) such that \(\left\lvert F \right\rvert=\frac{1}{2} \) and \(\mu(F)=C\).

Hint: When \(d\mu=f\ dx\), one can sometimes take \(F=\left\lbrace x\in [0, 1] : f(x)>\lambda \right\rbrace\) for a suitable \(\lambda\geq 0\).

By Lebesgue-Radon-Nikodym, we may decompose \(\mu=\mu_c+\mu_s\), where \(\mu_c\) is absolutely continuous with respect to Lebesgue measure while \(\mu_s\) is mutually singular with Lebesgue measure. Hence there is a set \(A\) such that \(\left\lvert A \right\rvert=0\) and \(\mu_s\left( [0, 1]\setminus A \right)=0\).

We now define the constant \[C’=\sup \left\lbrace \mu_c(E) : E\subseteq [0, 1]\textrm{ with }\left\lvert E \right\rvert=\frac{1}{2} \right\rbrace.\] We claim it suffices to show that there exists some \(F\subseteq[0, 1]\) such that \(\left\lvert F \right\rvert=\frac{1}{2} \) and \(\mu_c(F)=C’\). This is because for any set \(E\) with \(\left\lvert E \right\rvert=\frac{1}{2}\), we have \(\left\lvert E\cup A \right\rvert=\frac{1}{2} \) while \[\mu\left( E \right)=\mu_c\left( E\setminus A \right)+\mu_s\left( E\cap A \right).\] But at the same time \[\mu\left( E\cup A \right) = \mu_c\left( E\setminus A \right)+\mu_s(A),\] which will always be greater than or equal to \(\mu(E)\). It follows that \(C=C’+\mu_s(A)\). Thus, if \(F\) is any set such that \(\mu_c(F)=C’\) and \(\left\lvert F \right\rvert=\frac{1}{2}\), then \(\mu_c\left( F\setminus A \right)=C’\) by absolute continuity, and so \[\mu\left( F\cup A \right)=\mu_c\left( F \right)+\mu_s(A)=C,\] and of course \(\left\lvert F\cup A \right\rvert=\frac{1}{2}\) still.

Write \(d\mu_c=f\cdot dx\), and consider the sets \[E_\lambda=\left\lbrace x\in [0, 1] : f(x) > \lambda \right\rbrace.\] Observe that \(f\) is nonnegative. If \(\left\lvert E_0 \right\rvert\leq\frac{1}{2}\), then consider the function \(x\mapsto \left\lvert E_0\cup \left[ 0,x \right] \right\rvert\). This is continuous with \(0\mapsto \left\lvert E_0 \right\rvert\leq\frac{1}{2} \) and \(1\mapsto 1\), so by the intermediate value theorem there is some point \(x_0\) where \(F=E_0\cup \left[ 0, x_0 \right]\) satisfies \(\left\lvert F \right\rvert=\frac{1}{2} \). Moreover, since \(F\) contains the set where \(f\) is nonzero, \(\mu_c(F)=\mu_c\left( [0, 1] \right)\geq \mu_c(E)\) for every \(E\), and it follows that \(\mu_c(F)=C’\).

Otherwise, we consider \[\lambda_0=\inf \left\lbrace \lambda : \left\lvert E_\lambda \right\rvert < \frac{1}{2} \right\rbrace.\] The infimum is over a nonempty set since \(f\) is integrable, so \(\left\lvert E_\lambda \right\rvert\to 0\) as \(\lambda\to\infty\); and \(\lambda_0>0\) since \(\left\lvert E_\lambda \right\rvert\to\left\lvert E_0 \right\rvert>\frac{1}{2}\) as \(\lambda\downarrow 0\) by continuity of measure. Likewise, since the \(E_\lambda\)’s increase to \(E_{\lambda_0}\) as \(\lambda\downarrow\lambda_0\), we know that \(\left\lvert E _{\lambda_0} \right\rvert\leq \frac{1}{2} \). There are two cases to consider:

  1. If \(\left\lvert E_{\lambda_0} \right\rvert=\frac{1}{2}\), then we simply take \(F=E_{\lambda_0}\). If \(S\subseteq[0, 1]\) is any subset with \(\left\lvert S \right\rvert=\frac{1}{2}\), then we have \[\mu_c(F)=\int _{F\cap S}f(x)dx + \int _{F\setminus S}f(x) dx\] while \[\mu_c(S)=\int _{F\cap S}f(x)dx + \int _{S\setminus F}f(x) dx.\] Note that \(\left\lvert F\setminus S \right\rvert=\left\lvert S\setminus F \right\rvert\), but by construction \(f>\lambda_0\) on \(F\setminus S\) and \(f\leq \lambda_0\) on \(S\setminus F\), so \(\int _{F\setminus S}f(x)dx\geq \lambda_0\left\lvert F\setminus S \right\rvert\geq \int _{S\setminus F}f(x) dx\). Hence \(\mu_c(F)\geq \mu_c(S)\), and we’re done.
  2. If \(\left\lvert E _{\lambda_0} \right\rvert < \frac{1}{2}\), we have for all \(\lambda<\lambda_0\) that \(\left\lvert E_\lambda \right\rvert\geq \frac{1}{2} \). In particular, \(\left\lvert E_\lambda-E _{\lambda_0} \right\rvert\geq\frac{1}{2} -\left\lvert E _{\lambda_0} \right\rvert\). Then, if \(F_0=\left\lbrace x : f(x) = \lambda_0 \right\rbrace\), we have that \[F_0=\bigcap _{\lambda<\lambda_0}\left( E_\lambda\setminus E _{\lambda_0} \right) \implies \left\lvert F_0 \right\rvert\geq \frac{1}{2} -\left\lvert E _{\lambda_0} \right\rvert.\] Then we apply the same continuity trick from the first case: the function \(x\mapsto \left\lvert F_0\cap \left[ 0, x \right] \right\rvert\) is continuous, maps \(0\mapsto 0\), and \(1\mapsto \left\lvert F_0 \right\rvert\geq \frac{1}{2} -\left\lvert E _{\lambda_0} \right\rvert\), so there exists some \(x_0\) where \(\left\lvert F_0\cap \left[ 0, x_0 \right] \right\rvert=\frac{1}{2} - \left\lvert E _{\lambda_0} \right\rvert\). Take \(F=E _{\lambda_0}\cup \left( F_0\cap \left[ 0, x_0 \right] \right)\); since \(F_0\) and \(E _{\lambda_0}\) are disjoint by construction, we see that \(\left\lvert F \right\rvert=\frac{1}{2} \). Now \(f(x)\geq\lambda_0\) whenever \(x\in F\) and \(f(x)\leq \lambda_0\) whenever \(x\notin F\), so the same argument as in case 1 demonstrates that \(\mu_c(F)=C’\).

In any case we have produced an \(F\) such that \(\mu_c(F)=C’\), and by the preceding discussion this concludes the proof. \(\square\)
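
As an aside, the hint lends itself to a numerical sanity check. This is a sketch of my own (the grid size, the density, and all names are invented for illustration, not part of the proof): on a discretized \([0,1]\), among sets of Lebesgue measure \(\frac{1}{2}\), the \(\mu\)-measure is maximized by taking the half of the cells where \(f\) is largest, which is exactly a discrete superlevel set \(\left\lbrace f>\lambda \right\rbrace\).

```python
import random

# Discrete sanity check for the hint (illustration only): with d(mu) = f dx on
# a grid, among sets E of Lebesgue measure 1/2, mu(E) is maximized by the half
# of the cells where f is largest -- a discrete superlevel set {f > lambda}.
N = 1000                                      # grid cells on [0, 1]
f = [(i / N) ** 2 + 0.1 for i in range(N)]    # an arbitrary nonnegative density

half = N // 2
best = sum(sorted(f, reverse=True)[:half]) / N  # mu(F) for the superlevel set

random.seed(0)
for _ in range(200):
    E = random.sample(range(N), half)           # a random set with |E| = 1/2
    assert sum(f[i] for i in E) / N <= best + 1e-12
```

No random half of the grid ever beats the superlevel set, matching the hint.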

Problem 2.

Let \(\mu\) and \(\nu\) be two finite positive Borel measures on \(X=\mathbb{R}^n\).

  1. Suppose that there exist Borel sets \(A_n\subset X\) so that \[\begin{align*} \lim _{n\to\infty}\mu\left( A_n \right)=0 && \textrm{and} && \lim _{n\to\infty}\nu \left( X\setminus A_n \right)=0.\end{align*}\] Show that \(\mu\) and \(\nu\) are mutually singular.
  2. Suppose there are nonnegative Borel functions \(\left\lbrace f_n \right\rbrace _{n\geq 1}\) so that \(f_n(x)>0\) for \(\nu\)-a.e. \(x\) and \[\begin{align*} \lim _{n\to\infty}\int f_n(x) d\mu(x) = 0 && \textrm{and} && \lim _{n\to\infty}\int \frac{1}{f_n(x)}d\nu(x)=0.\end{align*}\] Show that \(\mu\) and \(\nu\) are mutually singular.

First, we may replace \(A_n\) with a subsequence where \(\chi _{X\setminus A_n}\to 0\) pointwise \(\nu\)-almost everywhere. This is possible because these characteristic functions converge to zero in measure (with respect to \(\nu\)) and \(\nu\) is a finite measure.

Let \(B=\bigcup _{m=1}^{\infty} \bigcap _{j=m} ^\infty A_j\). One has that \(\bigcap _{j=m}^{\infty}A_j\subseteq A_n\) for all \(n\geq m\), hence \(\mu\left( \bigcap _{j=m}^{\infty}A_j \right)\leq \lim _{n\to\infty}\mu\left( A_n \right)=0\) for all \(m\). It follows that \[\mu(B)\leq \sum _{m=1}^{\infty} \mu\left( \bigcap _{j=m}^{\infty}A_j \right) = 0.\] On the other hand, \(X\setminus B= \bigcap _{m=1}^{\infty}\bigcup _{j=m}^\infty \left( X\setminus A_j \right).\)

First, since \(X\setminus B\) is an intersection of decreasing sets, we have that \[\nu(X\setminus B)=\lim _{m\to\infty}\nu\left( \bigcup _{j=m}^{\infty}\left( X\setminus A_j \right) \right).\] We claim this limit is \(0\). Fix any \(\epsilon>0\). Since \(\chi _{X\setminus A_n}\to 0\) pointwise \(\nu\)-almost everywhere, by Egorov’s theorem (which applies since \(\nu\) is finite), there is some set \(A_\epsilon\) such that \(\nu\left( A_\epsilon \right)<\epsilon\) and \(\chi _{X\setminus A_n}\to 0\) uniformly on \(X\setminus A_\epsilon\).

This in particular means that for \(n\) sufficiently large, \(\chi _{X\setminus A_n}=0\) on \(X\setminus A_\epsilon\), i.e. \(X\setminus A_n\subseteq A_\epsilon\) for all \(n\) sufficiently large. But then \(\bigcup _{j=m}^{\infty} \left( X\setminus A_j \right)\subseteq A_\epsilon\) when \(m\) is large, and thus it follows that \[\nu\left( X\setminus B \right)=\lim _{m\to\infty}\nu\left( \bigcup _{j=m}^{\infty}\left( X\setminus A_j \right) \right) \leq \nu\left( A_\epsilon \right) < \epsilon.\] Taking \(\epsilon\to 0\) yields \(\nu\left( X\setminus B \right)=0\). That the two measures are mutually singular is immediate.

For the second part, define \(A_n=\left\lbrace x : f_n(x) \geq 1 \right\rbrace.\) Then we get \[\lim _{n\to\infty}\mu\left( A_n \right)=\lim _{n\to\infty}\int \chi _{A_n}(x) d\mu(x) \leq \lim _{n\to\infty}\int f_n(x) d\mu(x) = 0,\] where we use the fact that \(\chi _{A_n}(x) \leq f_n(x)\) everywhere by construction.

On the other hand, if \(x\in X\setminus A_n\), then \(f_n(x)<1\), hence \(\frac{1}{f_n(x)}>1=\chi _{X\setminus A_n}(x)\). When \(x\in A_n\), the same inequality holds trivially, since \(\frac{1}{f_n(x)}>0=\chi _{X\setminus A_n}(x)\) for \(\nu\)-a.e. \(x\). Thus we have \[\lim _{n\to\infty}\nu\left( X\setminus A_n \right)=\lim _{n\to\infty}\int \chi _{X\setminus A_n}(x) d\nu(x) \leq \lim _{n\to\infty} \int \frac{1}{f_n(x)}d\nu(x) = 0.\] Hence the conditions of part 1 have been satisfied, and we conclude that \(\mu\) and \(\nu\) are mutually singular. \(\square\)
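
To see part 2 in action, here is a concrete example of my own choosing (not from the problem): \(\mu\) Lebesgue measure on \([0,1]\) and \(\nu\) the point mass at \(0\), which are indeed mutually singular, with \(f_n=n^2\) on \([0,n^{-5}]\) and \(f_n=\frac{1}{n}\) elsewhere. A quick sketch checks that both hypotheses hold:

```python
# Illustration (my own example, not from the problem): mu = Lebesgue measure on
# [0, 1] and nu = the point mass at 0 are mutually singular. The functions
#   f_n(x) = n^2 on [0, 1/n^5],   f_n(x) = 1/n elsewhere
# satisfy both integral hypotheses of part 2.
def int_fn_dmu(n):
    # integral of f_n against Lebesgue measure on [0, 1], computed exactly
    return n**2 * (1 / n**5) + (1 / n) * (1 - 1 / n**5)

def int_inv_fn_dnu(n):
    # integral of 1/f_n against the point mass at 0, where f_n(0) = n^2
    return 1 / n**2

vals = [(int_fn_dmu(n), int_inv_fn_dnu(n)) for n in (10, 100, 1000)]
```

Both columns of `vals` shrink toward zero, and the sets \(A_n=\left\lbrace f_n\geq 1 \right\rbrace=[0,n^{-5}]\) are exactly the sets produced in the proof.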

Problem 3.

Let \(f\in L^2\left( \mathbb{R} \right)\). For \(n\geq 1\) we define \[f_n(x)=\int _{0}^{2\pi} f(x+t)\cos(nt)dt.\] Prove that \(f_n\) converges to zero both almost everywhere in \(\mathbb{R}\) and in the \(L^2(\mathbb{R})\) topology, as \(n\to\infty\).

For the almost everywhere convergence, fix any \(x\) and any \(\epsilon>0\). Let \(g_\epsilon:\left[ x,x+2\pi \right]\to \mathbb{R}\) be a continuous function such that \(\int _{x}^{x+2\pi}\left\lvert f(s)-g_\epsilon(s) \right\rvert ds<\epsilon\). Note that \(f\in L^2\left( \left[ x,x+2\pi \right] \right)\subseteq L^1\left( \left[ x,x+2\pi \right] \right)\), and continuous functions are dense in the latter. We may now write \[f_n(x)=\int _{0}^{2\pi}\left( f(x+t)-g_\epsilon(x+t) \right)\cos(nt)dt + \int _{0}^{2\pi}g_\epsilon(x+t)\cos(nt)dt.\] We may bound the first integral trivially by Hölder: \[ \left\lvert \int _{0}^{2\pi}\left( f(x+t)-g_\epsilon(x+t) \right)\cos(nt)dt \right\rvert \leq \left\lVert f-g_\epsilon \right\rVert _{L^1\left( \left[ x,x+2\pi \right] \right)} \cdot \left\lVert \cos(nt) \right\rVert _{L^\infty \left( \left[ 0,2\pi \right] \right)} < \epsilon.\] Thus we have by the triangle inequality \[\left\lvert f_n(x) \right\rvert< \epsilon+\left\lvert \int _{0}^{2\pi}g_\epsilon(x+t) \cos(nt) dt \right\rvert.\] We claim the second integral converges to zero. In fact it’s helpful to set up a lemma for that:

Lemma: If \(h:\mathbb{R}\to \mathbb{R}\) is uniformly continuous, then \[\lim _{n\to\infty} \int _{0}^{2\pi} h(x+t) \cos(nt) dt = 0\] for all \(x\).

Proof of Lemma
Fix an \(x\), and let \(\epsilon’>0\). By uniform continuity there exists some \(\delta>0\) such that \(\left\lvert y-z \right\rvert<\delta\) implies \(\left\lvert h(y)-h(z) \right\rvert< \epsilon’\). Then, for any \(n\) large enough so that \(\frac{2\pi}{n}<\delta\), we have: \[\begin{align*} \int _{0}^{2\pi} h(x+t)\cos(nt) dt &= \sum _{j=0}^{n-1}\int _{\frac{2\pi}{n}\cdot j}^{\frac{2\pi }{n}\cdot(j+1)} h(x+t)\cos(nt) dt\\ &= \sum _{j=0}^{n-1}\int _{\frac{2\pi}{n}\cdot j}^{\frac{2\pi }{n}\cdot(j+1)} \left(h(x+t)-h\left( x+\frac{2\pi j}{n} \right)\right)\cos(nt) dt. \end{align*}\] Here, we used that cosine integrates to zero on any period to add in the extra \(h\left( x+\frac{2\pi j}{n} \right)\). But now from Hölder again, we have an \(L^\infty\)-bound on the differences in \(h\), namely \(\epsilon’\), resulting in \[\left\lvert \int _{0}^{2\pi}h(x+t)\cos(nt) dt \right\rvert \leq \epsilon’\cdot \int _{0}^{2\pi }\left\lvert \cos(nt) \right\rvert dt.\] This integral is a finite constant independent of \(n\); taking \(\epsilon’\to 0\) yields the claim. \(\square\)
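
The lemma is a Riemann-Lebesgue-type statement, and it is easy to test numerically. The sketch below is my own aside (the sample function \(h(s)=\frac{1}{1+s^2}\), the point \(x=0\), and the midpoint-rule resolution are all my choices): it approximates the oscillatory integral and watches it shrink as \(n\) grows.

```python
import math

# Midpoint-rule approximation of the lemma's oscillatory integral
#   I(n) = integral over [0, 2*pi] of h(x + t) * cos(n*t) dt
# for the uniformly continuous sample function h(s) = 1/(1 + s^2), at x = 0.
def osc_integral(n, x=0.0, m=20000):
    dt = 2 * math.pi / m
    return dt * sum(math.cos(n * (k + 0.5) * dt) / (1 + (x + (k + 0.5) * dt) ** 2)
                    for k in range(m))

vals = [abs(osc_integral(n)) for n in (1, 4, 16, 64)]
```

The magnitudes drop rapidly with \(n\), consistent with the lemma.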

Thus, with the lemma, we see that for every \(x\), \(\limsup _{n\to\infty}\left\lvert f_n(x) \right\rvert\leq\epsilon\); since \(\epsilon\) was arbitrary, we conclude that \(f_n(x)\) converges to \(0\) pointwise everywhere.

The idea for \(L^2\)-convergence is pretty much the same. This time, however, given an \(\epsilon>0\), we take \(g_\epsilon\) a compactly supported continuous function on \(\mathbb{R}\) such that \(\left\lVert f-g_\epsilon \right\rVert _{L^2}<\epsilon\). Of course compactly supported continuous functions are dense in \(L^2\).

We apply the same decomposition as before, and this time we get \[\left\lVert f_n \right\rVert_2 \leq \left\lVert \int _{0}^{2\pi} \left( f(x+t)-g_\epsilon(x+t) \right) \cos(nt) dt \right\rVert _{L^2(x)} + \left\lVert \int _{0}^{2\pi } g_\epsilon(x+t) \cos(nt) dt \right\rVert _{L^2(x)}. \] The first term can be controlled by using Minkowski’s integral inequality and the fact that the Lebesgue measure is translation-invariant: \[\begin{align*} \left\lVert \int _{0}^{2\pi}\left( f(x+t)-g_\epsilon(x+t) \right)\cos(nt) dt \right\rVert _{L^2(x)} &\leq \int _{0}^{2\pi} \left\lVert f-g_\epsilon \right\rVert _2 \left\lvert \cos(nt) \right\rvert dt \\ &< \epsilon\int _{0}^{2\pi}\left\lvert \cos(nt) \right\rvert dt. \end{align*}\] Again, this integral is a constant that’s independent of \(n\). The second term can be controlled using the lemma again: the function \[x\mapsto \int _{0}^{2\pi}g_\epsilon(x+t)\cos(nt) dt\] has compact support because \(g_\epsilon\) is compactly supported (it will be zero whenever \(x\) is more than \(2\pi\) away from the support of \(g_\epsilon\)). It converges to zero pointwise, and it can be uniformly bounded (both in \(x\) and in \(n\)) on its support via Hölder. Thus, by dominated convergence, \[\left\lVert \int _{0}^{2\pi}g_\epsilon(x+t)\cos(nt)dt \right\rVert _{L^2(x)}\to 0\] as \(n\to\infty\). Hence we conclude that \[\limsup _{n\to\infty}\left\lVert f_n \right\rVert _2 \leq 4\epsilon\] (using \(\int _{0}^{2\pi}\left\lvert \cos(nt) \right\rvert dt=4\)), and taking \(\epsilon\to 0\) shows that \(f_n\to 0\) in \(L^2\). \(\square\)

Problem 4.

Define \[I(f):=\int _{0}^{1}\left( \frac{1}{2} \left( f’(x) \right)^2+\sin\left( f(x) \right)+\left( f(x) \right)^4 \right) dx\] for any \(f\in C^1\left( [0, 1]; \mathbb{R} \right)\). Let \(f_n\in C^1\left( \left[ 0,1 \right]; \mathbb{R} \right)\) be such that \[I\left( f_n \right)\to\inf _{f\in C^1\left( [0,1];\mathbb{R} \right)} I(f).\] Show that the sequence \(\left\lbrace f_n \right\rbrace\) has a limit point in the space \(C\left( [0, 1] ; \mathbb{R}\right)\).

Since \(I\left( f_n \right)\) is a minimising sequence, it must be bounded above; say \(I\left( f_n \right)\leq M\) for some \(M\). Observe that since \(\sin \left( f_n(x) \right)\geq -1\) for all \(x\), we have \[I\left( f_n \right)\geq \frac{1}{2} \left\lVert f_n’ \right\rVert _{2}^{2}-1+\left\lVert f_n \right\rVert _{4}^{4}.\] In particular, both \(\left\lVert f_n \right\rVert_4\) and \(\left\lVert f_n’ \right\rVert_2\) must be uniformly bounded in \(n\).

We claim the \(f_n\)’s are uniformly bounded and equicontinuous. For the equicontinuity, we have that \[\left\lvert f_n(y)-f_n(x) \right\rvert\leq \int _{x}^{y}\left\lvert f_n’(t) \right\rvert dt \leq \sqrt{y-x} \left\lVert f_n’ \right\rVert_2.\] Since \(\left\lVert f_n’ \right\rVert_2\) is uniformly bounded in \(n\), uniform equicontinuity is immediate. For the uniform boundedness, we have \[\left\lvert f_n(x) \right\rvert=\left\lvert f_n(0)+\int _{0}^{x} f_n’(t) dt \right\rvert\leq \left\lvert f_n(0) \right\rvert+\int _{0}^{x}\left\lvert f_n’(t) \right\rvert dt \leq \left\lvert f_n(0) \right\rvert+\sqrt x \left\lVert f_n’ \right\rVert_2.\] We got the last inequality from Cauchy-Schwarz. Since \(\sqrt x \left\lVert f_n’ \right\rVert_2\) is uniformly bounded in \(x\) and in \(n\), it suffices to show that \(\left\lvert f_n(0) \right\rvert\) is uniformly bounded. We have \[\int _{0}^{1}\left\lvert f_n(x) \right\rvert^4 dx = \int _{0}^{1} \left\lvert f_n(0) + \int _{0}^{x} f_n’(t) dt \right\rvert^4 dx \geq \int _{0}^{1} \left( \left\lvert f_n(0) \right\rvert - \left\lvert \int _{0}^{x} f_n’(t) dt \right\rvert \right)^4 dx\] from the reverse triangle inequality. But again \(\int _{0}^{x} f_n’(t) dt\) is uniformly bounded in \(x\) and \(n\), so if \(\left\lvert f_n(0) \right\rvert\) were unbounded, then \(\left\lVert f_n \right\rVert_4\) would also be unbounded. This would be a contradiction, so we conclude that \(\left\lvert f_n(0) \right\rvert\) is bounded; uniform boundedness of the \(f_n\)’s follows.

Finally, by Arzelà–Ascoli, the \(f_n\)’s have a convergent subsequence, and thus the sequence \(\left\lbrace f_n \right\rbrace\) has a limit point in \(C\left( [0, 1] \right)\), as desired. \(\square\)
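
The Cauchy-Schwarz estimate \(\left\lvert f(y)-f(x) \right\rvert\leq \sqrt{y-x}\left\lVert f’ \right\rVert_2\) driving both bounds can be spot-checked numerically. The sketch below uses a sample function of my own choosing, \(f(x)=\sin(5x)\) on \([0,1]\), and a midpoint-rule approximation of \(\left\lVert f’ \right\rVert_2\):

```python
import math

# Spot check (with my own sample function f(x) = sin(5x) on [0, 1]) of the
# Cauchy-Schwarz estimate |f(y) - f(x)| <= sqrt(y - x) * ||f'||_2
# used above for equicontinuity.
f = lambda t: math.sin(5 * t)
fp = lambda t: 5 * math.cos(5 * t)            # f'

m = 100000                                    # midpoint rule for ||f'||_2
l2 = math.sqrt(sum(fp((k + 0.5) / m) ** 2 for k in range(m)) / m)

pairs = [(0.0, 1.0), (0.1, 0.11), (0.3, 0.8)]
checks = [abs(f(y) - f(x)) <= math.sqrt(y - x) * l2 + 1e-9 for x, y in pairs]
```

Here \(\left\lVert f’ \right\rVert_2^2=\int_0^1 25\cos^2(5t)\,dt=12.5+1.25\sin(10)\), so the quadrature can be checked against the exact value as well.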

Problem 5.

Let \(\mathbf x\in \mathbb{R}^\mathbb{N}\) be such that the series \[\sum _{i=1}^{\infty}x_iy_i\] converges for all \(\mathbf y\in \mathbb{R}^\mathbb{N}\) such that \(\lim_ny_n=0\). Show that the series \(\sum _{n=1}^{\infty}\left\lvert x_n \right\rvert\) converges.

Suppose towards a contradiction that \(\sum _{n=1}^{\infty}\left\lvert x_n \right\rvert\) diverges. Then, there exists some integer \(n_1\) such that \(\sum _{n=1}^{n_1}\left\lvert x_n \right\rvert>1\). From there, inductively define the integers \(n_j\) such that \(\sum _{n=n_{j-1}+1}^{n_j} \left\lvert x_n \right\rvert > j\). The idea is that the terms between the indices \(n_{j-1}\) and \(n_j\) contribute at least \(j\) to the absolute series.

Define the sequence \(y_n=\operatorname{sgn}(x_n) \cdot \frac{1}{j} \) whenever \(n_{j-1} < n \leq n_j\), with \(n_0=0\) for ease of notation. Then clearly \(\lim _{n\to\infty}y_n=0\), but we have \[\begin{align*} \sum _{i=1}^{\infty}x_iy_i &=\sum _{j=1}^{\infty} \sum _{i=n_{j-1}+1}^{n_j} x_iy_i \\ &= \sum _{j=1}^{\infty}\sum _{i=n_{j-1}+1}^{n_j} \frac{\left\lvert x_i \right\rvert}{j} \\ &> \sum _{j=1}^{\infty} 1. \end{align*}\] This is a contradiction, so we conclude that \(\sum _{i=1}^{\infty}\left\lvert x_i \right\rvert\) must converge. \(\square\)
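
One can watch this construction in action numerically. The sketch below is my own illustration: it feeds in \(x_n=\frac{1}{n}\) (so \(\operatorname{sgn}(x_n)=1\) and the absolute series diverges), builds the blocks from the proof on a finite truncation, and checks that each block contributes more than \(1\) to \(\sum x_iy_i\).

```python
# A concrete run of the proof's construction with x_n = 1/n, whose absolute
# series diverges. Block j is extended until it contributes more than j to
# sum |x_n|; setting y = 1/j on block j, each block then contributes more
# than 1 to sum x_i * y_i, so the partial sums grow without bound.
x = [1 / n for n in range(1, 200001)]

blocks, start, j = [], 0, 1
while True:
    total, i = 0.0, start
    while i < len(x) and total <= j:
        total += abs(x[i])
        i += 1
    if total <= j:
        break                      # ran out of terms in our finite truncation
    blocks.append((start, i))
    start, j = i, j + 1

partial = sum(sum(x[k] for k in range(a, b)) / jj
              for jj, (a, b) in enumerate(blocks, start=1))
```

The partial sum exceeds the number of completed blocks, while \(y_n=\frac{1}{j}\to 0\) as the blocks progress, exactly as in the proof.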

Problem 6.

We say that a linear operator \(T:C\left( [0,1] \right)\to C\left( [0,1] \right)\) is positive if \(T(f)(x)\geq 0\) for all \(x\in [0, 1]\) whenever \(f\in C[0, 1]\) satisfies \(f(x)\geq 0\) for all \(x\in [0, 1]\). Let \(T_n:C\left( \left[ 0,1 \right] \right)\to C\left( [0,1] \right)\) be a sequence of positive linear operators such that \(T_n(f)\to f\) uniformly on \([0,1]\) whenever \(f\) is a polynomial of degree less than or equal to \(2\). Show that \(T_n(f)\to f\) uniformly on \([0,1]\) for every \(f\in C\left( [0,1] \right)\).

Let \(\epsilon>0\) be arbitrary. We will show that for \(n\) sufficiently large, \(\left\lvert T_nf(x)-f(x) \right\rvert\lesssim \epsilon\) for all \(x\), with an implicit constant independent of \(\epsilon\), \(x\), and \(n\).

First, since \(f\) is continuous and hence uniformly continuous on \([0, 1]\), there exists some \(\delta>0\) such that \(\left\lvert x-y \right\rvert<\delta\) implies \(\left\lvert f(x)-f(y) \right\rvert<\epsilon\). Additionally, \(f\) must be bounded, so let \(M>0\) such that \(\left\lvert f(x) \right\rvert< M\) for all \(x\). Then, we have that for all \(x\) and \(y\), \[\left\lvert f(x)-f(y) \right\rvert < \epsilon + \frac{2M}{\delta^2}\left( x-y \right)^2.\] When \(\left\lvert x-y \right\rvert<\delta\), the right hand side is at least \(\epsilon\), so the inequality is true by our uniform continuity. When \(\left\lvert x-y \right\rvert\geq \delta\), \(\left( x-y \right)^2\geq \delta^2\), and the right hand side is at least \(2M\). But \(\left\lvert f(x)-f(y) \right\rvert\leq 2M\) for all \(x,y\) by the triangle inequality; we conclude this inequality holds for all \(x\) and \(y\).

There exist some finite number of points \(y_1,\ldots, y_m\in \left[ 0,1 \right]\) such that \[\left[ 0,1 \right]\subseteq \bigcup _{i=1}^{m} B\left( y_i, \delta\sqrt{\frac{\epsilon}{2M}} \right).\] Define the polynomials \[\begin{align*} p_i(x) = f\left( y_i \right) - \epsilon - \frac{2M}{\delta^2}\left( x-y_i \right)^2 && \textrm{and} && q_i(x) = f\left( y_i \right)+\epsilon+\frac{2M}{\delta^2}\left( x-y_i \right)^2.\end{align*}\] Both \(p_i\) and \(q_i\) are polynomials of degree at most 2, so \(T_np_i\to p_i\) and \(T_nq_i\to q_i\) uniformly for all \(i\). In particular, since there are only finitely many of them, there exists some \(N_\epsilon\) such that \(n> N_\epsilon\) implies \(\left\lvert T_np_i(x)-p_i(x) \right\rvert<\epsilon\) and \(\left\lvert T_nq_i(x)-q_i(x) \right\rvert<\epsilon\) for all \(i\) and for all \(x\)!

Pick any \(x\in [0, 1]\). There exists some index \(i\) such that \(\left\lvert x-y_i \right\rvert<\delta\sqrt{\frac{\epsilon}{2M}}\) by construction. Then we have \[f(x)-f\left( y_i \right)\leq \left\lvert f(x)-f\left( y_i \right) \right\rvert \implies f(x) \leq \left\lvert f(x)-f\left( y_i \right) \right\rvert + f\left( y_i \right)\leq q_i(x)\] by construction. On the other side, we have \[f\left( y_i \right)-f(x) \leq \left\lvert f(x)-f\left( y_i \right) \right\rvert \implies f(x) \geq f\left(y_i\right)-\left\lvert f(x)-f\left( y_i \right) \right\rvert \geq p_i(x).\] Hence, by positivity of the \(T_n\)’s, we have that \[T_np_i(x) \leq T_nf(x) \leq T_nq_i(x) \implies T_np_i(x) - f(x) \leq T_nf(x)-f(x) \leq T_nq_i(x) - f(x)\] for all \(n\) and all \(x\).

On the right side, we have \[\left\lvert T_nq_i(x)-f(x) \right\rvert\leq \left\lvert T_nq_i(x)-q_i(x) \right\rvert+\left\lvert q_i(x)-f(x) \right\rvert.\] When \(n> N_\epsilon\), the first term is bounded by \(\epsilon\). Moreover, we chose \(i\) so that \(\left\lvert x-y_i \right\rvert<\delta\sqrt{\frac{\epsilon}{2M}}\), which makes \(\frac{2M}{\delta^2}\left( x-y_i \right)^2<\epsilon\). In particular, we get \[\left\lvert q_i(x)-f(x) \right\rvert\leq \epsilon+\frac{2M}{\delta^2}\left( x-y_i \right)^2+\left\lvert f\left( y_i \right)-f(x) \right\rvert.\] Then, using \(\left\lvert f(x)-f\left( y_i \right) \right\rvert\leq \epsilon+\frac{2M}{\delta^2}\left( x-y_i \right)^2<2\epsilon\) yields the ultimate bound \[\left\lvert T_nq_i(x)-f(x) \right\rvert<5\epsilon.\]

The exact same argument can be used to show that \[\left\lvert T_np_i(x)-f(x) \right\rvert<5\epsilon\] as well, hence we conclude that \[-5\epsilon< T_np_i(x)-f(x) \leq T_nf(x)-f(x) \leq T_nq_i(x)-f(x) < 5\epsilon.\] In particular, \(\left\lvert T_nf(x)-f(x) \right\rvert< 5\epsilon\) for all \(n> N_\epsilon\). Both \(\epsilon\) and \(x\) were arbitrary, and we conclude that \(T_nf\to f\) uniformly. \(\square\)
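
What this problem proves is (a special case of) Korovkin's theorem, and the classical positive linear operators to test it on are the Bernstein operators \((B_nf)(x)=\sum_k f(k/n)\binom{n}{k}x^k(1-x)^{n-k}\) — a choice of example that is mine, not part of the problem. The sketch below watches the uniform error shrink for the non-polynomial sample function \(f(x)=\lvert x-\frac12\rvert\):

```python
import math

# The Bernstein operators (B_n f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) are
# positive linear operators on C([0,1]) that converge uniformly on polynomials
# of degree <= 2 (B_n 1 = 1, B_n x = x, B_n x^2 = x^2 + x(1-x)/n), so the
# theorem predicts uniform convergence for every continuous f.
def bernstein(f, n, x):
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)                    # continuous but not a polynomial
grid = [i / 200 for i in range(201)]
errs = [max(abs(bernstein(f, n, x) - f(x)) for x in grid)
        for n in (4, 16, 64, 256)]
```

The sup-norm errors decrease monotonically along this sequence of \(n\), as the theorem guarantees.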

Problem 7.

Let \(\Omega=\left\lbrace z\in \mathbb{C}:\operatorname{Re}z>0, \operatorname{Im}z>0 \right\rbrace\). Show that there exists a unique bounded harmonic function \(u:\Omega\to \mathbb{R}\) such that for all \(x>0\) and \(y>0\), \[\begin{align*} \lim _{t\to 0}u(x+it)=0 && \textrm{and} && \lim _{t\to 0}u(t+iy)=1. \end{align*}\]

First, we note there is a conformal equivalence \[\varphi: \Omega \xrightarrow{z\mapsto z^2} \mathbb{H} \xrightarrow{ z \mapsto \frac{z-i}{z+i}} \mathbb{D}.\] \(\varphi\) sends \(\mathbb{R}^+\) to the lower half of \(\partial \mathbb{D}\), and it sends \(i\mathbb{R}^+\) to the upper half of \(\partial \mathbb{D}\).

Define \(f\left( e ^{i\theta} \right)=1\) if \(0\leq \theta\leq \pi\) and \(f\left( e ^{i\theta} \right)=0\) otherwise (i.e., \(f\) is the characteristic function of the upper half circle). Define the function \[u(z) = \frac{1}{2\pi}\int _{0}^{2\pi} f\left( e ^{i\theta} \right) \operatorname{Re} \left( \frac{1+z e ^{-i\theta}}{1-ze ^{-i\theta}} \right) d\theta,\] which is just integrating against the Poisson kernel. Since \(f\) is integrable on the boundary of the disc, it is well-known that \(u(z)\) defines a bounded harmonic function on the interior of the disc. Moreover, \(u\) extends continuously to the boundary wherever \(f\) is continuous and agrees with \(f\) there.

Since the composition of harmonic functions is harmonic, and since conformal equivalences are harmonic, \(u\circ\varphi\) is a bounded harmonic function on \(\Omega \). Moreover, by the remark at the start about how \(\varphi\) transforms the positive real and imaginary axes, \(u\circ\varphi\) satisfies the conditions in the problem. Thus all that remains is uniqueness.

If \(u\) and \(v\) are two bounded harmonic functions on \(\Omega \) satisfying the stated boundary conditions, then \(u-v\) is a bounded harmonic function on \(\Omega \) that approaches zero on each positive axis. By the Phragmén–Lindelöf principle, it follows that \(u-v\) must be zero on \(\Omega \), and uniqueness follows. \(\square\)
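
As a numerical sanity check on the boundary behaviour (a sketch of my own, using the \(\frac{1}{2\pi}\)-normalized Poisson kernel): the Poisson integral of the indicator of the upper half circle should equal \(\frac12\) at the centre, tend to \(1\) approaching \(i\), and tend to \(0\) approaching \(-i\).

```python
import cmath, math

# Midpoint-rule evaluation of the Poisson integral (with the 1/(2*pi)
# normalization) of f = indicator of the upper half circle.
def u(z, m=20000):
    dtheta = 2 * math.pi / m
    total = 0.0
    for k in range(m):
        th = (k + 0.5) * dtheta
        if th < math.pi:                      # f = 1 only on the upper half
            w = cmath.exp(-1j * th)
            total += ((1 + z * w) / (1 - z * w)).real
    return total * dtheta / (2 * math.pi)
```

Evaluating at \(z=0\), \(z=0.99i\), and \(z=-0.99i\) reproduces the expected values \(\frac12\), \(\approx 1\), and \(\approx 0\).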

Problem 8.

Show that there exists a non-zero entire function \(f:\mathbb{C}\to \mathbb{C}\) and constants \(b,c\in \mathbb{C}\) satisfying \[\begin{align*} f(0)=0, && f(z+1)=e ^{bz}f(z), && \textrm{and} && f(z+i)=e ^{cz}f(z). \end{align*}\]

I should preface this by saying that I would never have thought of a solution to this. This is one of the questions I would skip without hesitation on the qual. Apparently, it helps to be familiar with the (Jacobi) theta function. Many thanks again to the esteemed Ben Johnsrude for posting a correct solution to this.

When thinking about this problem, the function \[g(z)=e ^{az^2+bz}\] comes to mind. One can quickly check that \[g\left( z+1 \right)=e ^{az^2+2az+a+bz+b}= e ^{a+b}e ^{2az}g(z)\] and \[g\left( z+i \right)=e ^{az^2+2iaz-a+bz+bi}= e ^{-a+bi} e ^{2aiz}g(z).\] So, if one finds some \(h(z)\) satisfying \(h(0)=0\) and transformation properties similar to those in the problem statement, multiplying by \(g(z)\) with appropriate choices of \(a\) and \(b\) will solve the problem.

To this end, serendipitously consider the function \[h(z) = \sum _{n\in \mathbb{Z}} \left( -1 \right)^ne ^{-\pi n^2-\pi n-2\pi inz}.\] This converges locally uniformly since one can bound the summand in magnitude by a shifted and rescaled Gaussian, hence \(h\) is entire since the summands are. First, \(h(0)=0\); one can show this by replacing \(n\) with \(-1-n\) in the summation: \[\begin{align*} h(0) &= \sum _{n\in \mathbb{Z}} \left( -1 \right)^n e ^{-\pi n\left( n+1 \right)} \\ &= \sum _{n\in \mathbb{Z}} \left( -1 \right) ^{-1-n} e ^{-\pi \left( -1-n \right)\left( -n \right)} \\ &= - \sum _{n\in \mathbb{Z}} \left( -1 \right) ^{-n} e ^{-\pi \left( n+1 \right)n} \\ &= -h(0).\end{align*}\] Here, we note \(\left( -1 \right) ^{-n}=\left( -1 \right)^n\). Thus \(h(0)=0\).

Now \(h\left( z+1 \right)=h(z)\) since each summand changes by a factor of \(e ^{-2\pi i n}=1\). On the other hand, we have \[\begin{align*} h\left( z+i \right)&=\sum _{n\in \mathbb{Z}} \left( -1 \right)^n e ^{-\pi n^2-\pi n-2\pi in\left( z+i \right)} \\ &=\sum _{n\in \mathbb{Z}} \left( -1 \right) ^{n} e ^{-\pi n^2+\pi n-2\pi inz} \\ &=\sum _{n\in \mathbb{Z}}\left( -1 \right)^{n+1} e ^{-\pi \left( n+1 \right)^2+\pi \left( n+1 \right)-2\pi i\left( n+1 \right)z} \\ &=-e ^{-2\pi iz}\sum _{n\in \mathbb{Z}} \left( -1 \right)^n e ^{-\pi n^2-\pi n-2\pi inz} \\ &= e ^{\pi i}e ^{-2\pi iz}h(z). \end{align*}\] This is almost the right transformation formula. Returning to \(g(z)\), we want \(a+b=0\) and \(-a+bi=\pi i\). With \(a\) and \(b\) satisfying these equations (one may explicitly solve \(a=-\frac{\pi i}{1+i}\) and \(b=\frac{\pi i}{1+i}\), but I won’t use these for brevity), one considers \(f(z)=g(z)\cdot h(z)\). Then, \[f(z+1)=e ^{a+b}e ^{2az}g(z) h(z)= e ^{2az}f(z).\] Likewise, \[f\left( z+i \right)= e ^{-a+bi}e ^{2aiz}g(z) e ^{\pi i} e ^{-2\pi iz}h(z)= e ^{\left( 2ai-2\pi i \right)z}f(z).\] Since \(f(0)=g(0)h(0)=0\), we are done. \(\square\)
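
The identities above can also be verified numerically. The sketch below is my own check: it truncates the theta-like sum to \(\lvert n\rvert\leq 30\) (harmless, given the Gaussian decay of the summands) and tests \(h(0)=0\), the two transformation laws for \(h\), and the final functional equations for \(f=g\cdot h\) at an arbitrary sample point.

```python
import cmath, math

# Numerical check of the theta-like function h and of f = g*h, with the sum
# over n truncated to |n| <= 30 (the Gaussian decay makes the tail negligible).
def h(z):
    return sum((-1)**n * cmath.exp(-math.pi*n**2 - math.pi*n - 2j*math.pi*n*z)
               for n in range(-30, 31))

a = -math.pi * 1j / (1 + 1j)        # solves a + b = 0 and -a + bi = pi*i
b = -a

def g(z):
    return cmath.exp(a * z * z + b * z)

def f(z):
    return g(z) * h(z)

z = 0.3 + 0.2j                      # an arbitrary test point
# Expected: h(0) = 0, h(z+1) = h(z), h(z+i) = e^(pi*i) e^(-2*pi*i*z) h(z),
#           f(z+1) = e^(2az) f(z),  f(z+i) = e^((2ai-2*pi*i)z) f(z).
```

All five identities hold to within floating-point error.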

Problem 9.

Let \(\Omega_1\subseteq \Omega_2\) be bounded Jordan domains in \(\mathbb{C}\). We also assume that \(0\in \Omega_1\). Now suppose \(f_1:\mathbb{D}\to \Omega_1\) and \(f_2:\mathbb{D}\to \Omega_2\) are Riemann mappings satisfying \(f_1(0)=f_2(0)=0\). Show that \[\left\lvert f_1’(0) \right\rvert\leq \left\lvert f_2’(0) \right\rvert.\]

This is a very simple and direct application of the Schwarz lemma. Since \(\Omega_1\subseteq \Omega_2\), the map \(f _{2}^{-1}\circ f_1:\mathbb{D}\to \mathbb{D}\) is well-defined and holomorphic (both \(f_1\) and \(f_2\) are biholomorphic), and it fixes \(0\). By the Schwarz lemma and the chain rule, we have \[\left\lvert \left( f _{2}^{-1}\circ f_1 \right)’(0) \right\rvert = \left\lvert \frac{f_1’(0)}{f_2’(0)} \right\rvert \leq 1,\] from which the claim follows. \(\square\)

Problem 10.

Define \[\begin{align*} f(z)=\int _{0}^{1} \frac{t^z}{e^t-1}dt, &&&& z\in \mathbb{C}, \operatorname{Re}z>0. \end{align*}\] Show that \(f\) is an analytic function in \(\left\lbrace z\in \mathbb{C}:\operatorname{Re}z>0 \right\rbrace\) and that it admits a meromorphic continuation \(\hat f\) to the region \(\left\lbrace z\in \mathbb{C} : \operatorname{Re}z>-1 \right\rbrace\). Compute the residue of \(\hat f\) at \(z=0\).

To show that \(f\) is analytic on the right half plane, we note that \(\left\lvert t ^{z} \right\rvert= t ^{\operatorname{Re}z}\) while \(e^t-1\geq t\) on \([0, 1]\). Thus, we have that \[\left\lvert f(z) \right\rvert\leq \int _{0}^{1} t ^{\operatorname{Re}z-1} dt,\] hence the integral converges absolutely on the right half plane. If \(\gamma\) is a closed curve in the right half plane, and if \(s_0=\inf \operatorname{Re} \gamma>0\), one has for all \(z\) in the image of \(\gamma\) and all \(t\in (0, 1]\) that \(\left\lvert t^z \right\rvert\leq t ^{s_0}\). First, we have that \[\oint _{\gamma} \left\lvert f(z) \right\rvert \left\lvert dz \right\rvert \leq \oint _{\gamma} \int _{0}^{1} t ^{s_0-1} dt \left\lvert dz \right\rvert,\] which is finite: the integral in \(t\) converges since \(s_0>0\), and one can bound the resulting integral in \(z\) trivially. By Fubini-Tonelli we may thus interchange the integrals: \[\oint _{\gamma} f(z) dz = \int _{0}^{1}\oint _{\gamma} \frac{t^z}{e^t-1} dz dt.\] But for each fixed \(t\in (0, 1]\), the integrand is an entire function of \(z\), so \(\oint _{\gamma}\frac{t^z}{e^t-1}dz = 0\); we conclude that \(\oint _{\gamma}f(z)dz = 0\) for any closed curve \(\gamma\). By Morera's theorem \(f\) is analytic on the right half plane.

For the meromorphic continuation, one should expect to see some integration by parts. The domain of the continuation has shifted to the left by \(1\), suggesting something involving \(z+1\). So, let’s integrate by parts. Rewrite the integrand as \(\frac{t}{e^t-1}\cdot t ^{z-1}dt\); then \[f(z) = \left. \frac{t}{e^t-1}\cdot \frac{t^z}{z}\right\rvert_{0}^1 - \int _{0}^{1} \frac{t^z}{z} \left( \frac{1}{e^t-1}-\frac{te^t}{\left( e^t-1 \right)^2} \right) dt.\] For the first term, we must compute the limit as \(t\to 0\) using L'Hôpital's rule: \[\lim _{t\to 0}\frac{t ^{z+1}}{e^t-1} = \lim _{t\to 0}\frac{\left( z+1 \right)t^z}{e^t} = 0.\] Here, we are still taking \(\operatorname{Re}z>0\). Thus, evaluating the first term and rewriting, we have \[f(z) = \frac{1}{z(e-1)} - \frac{1}{z}\int _{0}^{1} \frac{t^z \left( e^t (1-t)-1 \right)}{\left( e^t-1 \right)^2} dt.\] Now the first term above is meromorphic on \(\mathbb{C}\), as is \(\frac{1}{z}\). We claim that the integral defines a holomorphic function of \(z\) on \(\operatorname{Re}z>-1\). This is because \(\frac{e^t(1-t)-1}{\left( e^t-1 \right)^2}\) extends continuously to \(t=0\): the limit is \[\lim _{t\to 0}\frac{e^t(1-t)-1}{\left( e^t-1 \right)^2} = \lim _{t\to 0}\frac{e^t(1-t)-e^t}{2e^t\left( e^t-1 \right)} = \lim _{t\to 0}\frac{-t}{2\left( e^t-1 \right)}=-\frac{1}{2}.\] Thus the integral can be bounded by a constant factor of \(\int _{0}^{1} t ^{\operatorname{Re}z} dt\), which converges when \(\operatorname{Re}z>-1\), and the same Fubini-Tonelli-Morera argument as before shows analyticity. We conclude that \(f\) extends to a meromorphic function on \(\operatorname{Re}z>-1\), whose only possible pole is at \(z=0\).

To compute the residue, we consider \[zf(z) = \frac{1}{e-1} - \int _{0}^{1} t^z \left( \frac{1}{e^t-1}-\frac{te^t}{\left( e^t-1 \right)^2} \right)dt.\] The integral depends continuously on \(z\) near \(z=0\) (the integrand is dominated by a constant multiple of \(t ^{\operatorname{Re}z}\)), and at \(z=0\) the integrand has antiderivative \(\frac{t}{e^t-1}\), so we may compute \[\int _{0}^{1}\left( \frac{1}{e^t-1}-\frac{te^t}{\left( e^t-1 \right)^2} \right)dt=\left. \frac{t}{e^t-1}\right\rvert_{0}^1 = \frac{1}{e-1}-1,\] where the boundary value at \(t=0\) is the limit \(\lim_{t\to 0}\frac{t}{e^t-1}=1\). Thus, the residue at \(z=0\) is \(\frac{1}{e-1}-\left( \frac{1}{e-1}-1 \right)=1\). \(\square\)
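The residue can also be checked numerically (illustrative only; the names are ad hoc). For real \(z>0\), the substitution \(t=u^{1/z}\) turns \(zf(z)\) into \(\int_0^1 h(u^{1/z})\,du\) with \(h(t)=t/(e^t-1)\), a smooth integral, and the values approach \(1\) as \(z\to 0^+\), matching the residue:

```python
import math

def h(t):
    # t / (e^t - 1), extended by its limit value 1 at t = 0.
    return 1.0 if t == 0 else t / math.expm1(t)

def z_times_f(z, n=4000):
    # Simpson approximation of z * f(z) = \int_0^1 h(u^{1/z}) du,
    # valid for real z > 0 after the substitution t = u^{1/z}.
    step = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        weight = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += weight * h((i * step) ** (1.0 / z))
    return total * step / 3

for z in (0.1, 0.01, 0.001):
    print(z, z_times_f(z))  # tends to 1, the residue of f at 0
```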

Problem 11.

For an entire function \(f(z)=f ^{(0)}(z)\), we define \[\begin{align*}f ^{(n)}(z)=f\left( f ^{(n-1)}(z) \right) && \textrm{for all} && n\geq 1.\end{align*}\]

  1. Show that if there exists an \(n\geq 1\) such that \(f ^{(n)}\) is a polynomial, then \(f\) is a polynomial.
  2. Prove that for any \(n\geq 1\) we have \(f ^{(n)}(z)\neq e^z\).

For the first part, suppose that \(f\) is not a polynomial. Since \(f\) is entire, one has \(f\left( z \right)=\sum _{j=0}^{\infty}a_jz^j\) for some coefficients \(a_j\in \mathbb{C}\). This power series has an infinite radius of convergence. Then observing that \(f\left( \frac{1}{z} \right)=\sum _{j=0}^{\infty}a_jz ^{-j}\), we see that since the \(a_j\)’s are nonzero infinitely often (\(f\) is not a polynomial), \(f\left( \frac{1}{z} \right)\) has an essential singularity at the origin.

This establishes that \(f ^{(0)}\left( \frac{1}{z} \right)\) has an essential singularity at the origin. But then \(f ^{(1)}\left( \frac{1}{z} \right)\) also has an essential singularity at the origin: if the singularity were removable or a pole, then \(f ^{(1)}\left( \frac{1}{z} \right)\) would converge in the Riemann sphere as \(z\to 0\). However, by Casorati–Weierstrass, given any point \(w\in \mathbb{C}\), one may pick a sequence \(z_j\to 0\) such that \(f\left( \frac{1}{z_j} \right)\to w\), and then \(f ^{(1)}\left( \frac{1}{z_j} \right)=f\left( f\left( \frac{1}{z_j} \right) \right)\to f(w)\) by continuity. Since \(f\) is nonconstant, we may choose \(w\) and \(w'\) with \(f(w)\neq f(w')\), producing two sequences tending to \(0\) along which \(f ^{(1)}\left( \frac{1}{z} \right)\) has different limits, which is bogus. This same reasoning works inductively, and hence we conclude that \(f ^{(n)}\left( \frac{1}{z} \right)\) has an essential singularity at the origin for all \(n\).

If \(g\) is a complex polynomial, then \(g\left( \frac{1}{z} \right)\) has a pole (or, if \(g\) is constant, a removable singularity) at the origin, and in either case not an essential one. So if \(f\) is not a polynomial, then \(f ^{(n)}\) cannot be a polynomial for any \(n\).
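To make the essential-singularity phenomenon concrete, here is a small numerical illustration (not part of the proof; the names are ad hoc) using \(f(z)=e^z\): near the essential singularity of \(e^{1/z}\) at the origin, any prescribed nonzero value \(w\) is attained at the points \(z_k=1/(\log w+2\pi i k)\to 0\):

```python
import cmath
import math

def g(z):
    # e^{1/z} has an essential singularity at z = 0.
    return cmath.exp(1 / z)

w = 2.0 + 1.0j  # any nonzero target value
for k in (1, 10, 100, 1000):
    zk = 1 / (cmath.log(w) + 2j * math.pi * k)
    print(abs(zk), abs(g(zk) - w))  # z_k -> 0 while g(z_k) stays equal to w
```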

For the second part, suppose \(f ^{(n)}(z)=e^z\) for some \(n\geq 1\). Then \(f ^{(n)}\) omits zero. If \(f\) were surjective, then by induction every iterate \(f ^{(k)}\) would be surjective, a contradiction; so \(f\) omits at least one point, and by Picard’s little theorem the entire function \(f\) omits at most one point. Since the range of \(f\) contains the range of \(f ^{(n)}=f\circ f ^{(n-1)}\), which is \(\mathbb{C}\setminus \left\lbrace 0 \right\rbrace\), the unique omitted point must be zero.

In particular, \(f\) has a holomorphic logarithm; write \(f(z)= e ^{g(z)}\) for some entire \(g\). Then, \[e^z=f ^{(n)}(z) = f\left( f ^{\left( n-1 \right)}(z) \right) = e ^{g\circ f ^{(n-1)}(z)}.\] Here, we use \(n\geq 1\) so that \(f ^{(n-1)}\) makes sense. Taking logarithms of both sides, we see that \(g\circ f ^{\left( n-1 \right)}(z)\) and \(z\) must differ by a constant integer multiple of \(2\pi i\): their difference is a continuous function valued in the discrete set \(2\pi i \mathbb{Z}\), hence constant. In particular, \(g\circ f ^{\left( n-1 \right)}(z)=z+2\pi i k\) is a degree-one polynomial.

\(f\) cannot be a polynomial (otherwise \(f ^{(n)}=e^z\) would be one), so \(f\left( \frac{1}{z} \right)\) has an essential singularity at the origin, and by the argument in the first part, so does \(f ^{\left( n-1 \right)}\left( \frac{1}{z} \right)\). In particular, if \(U\) is any open neighbourhood containing the origin, Picard’s great theorem says that \(f ^{\left( n-1 \right)}\left( \frac{1}{z} \right)\) maps \(U\setminus \left\lbrace 0 \right\rbrace\) onto \(\mathbb{C}\) minus at most one point; since \(f ^{\left( n-1 \right)}\) omits zero (its range is contained in the range of \(f\)), the exceptional point must be zero, and the image is exactly \(\mathbb{C}\setminus \left\lbrace 0 \right\rbrace\). Since \(g\) is a nonconstant entire function (were \(g\) constant, \(f\) and hence \(f ^{(n)}\) would be constant), Picard’s little theorem says \(g\left( \mathbb{C} \right)\) misses at most one point, so \(g\) maps \(\mathbb{C}\setminus \left\lbrace 0 \right\rbrace\) onto \(\mathbb{C}\) minus at most two points: the Picard exception and possibly \(g(0)\). Together, \(z\mapsto g\circ f ^{\left( n-1 \right)}\left( \frac{1}{z} \right)\) maps \(U\setminus \left\lbrace 0 \right\rbrace\) onto the complex plane minus at most two points. But \(g\circ f ^{\left( n-1 \right)}\left( \frac{1}{z} \right)\) equals \(\frac{1}{z}\) plus a constant, which maps \(U\setminus \left\lbrace 0 \right\rbrace\) into a neighbourhood of \(\infty\): for \(U\) small, its image omits an entire disc, not just two points. This is a contradiction. \(\square\)

Problem 12.

Find all entire functions \(f:\mathbb{C}\to \mathbb{C}\) that satisfy the following two properties:

  1. \(\left\lvert f(z) \right\rvert\leq e ^{\left\lvert z \right\rvert^2}\) for all \(z\in \mathbb{C}\).
  2. \(f\left( n ^{\frac{1}{3} } \right)=n\) for all \(n\in \mathbb{N}\).

Let \(f\) be such a function, and consider \(g(z)=f(z)-z^3\). Since \(\left\lvert z^3 \right\rvert\leq e ^{\left\lvert z \right\rvert^2}\) for all \(z\) (the function \(r\mapsto r^3e^{-r^2}\) attains its maximum at \(r=\sqrt{3/2}\), where its value is less than \(1\)), we have \(\left\lvert g(z) \right\rvert\leq 2 e ^{\left\lvert z \right\rvert^2}\).

Suppose \(g\) is not identically zero. \(g\) is entire and has zeroes at \(n ^{\frac{1}{3} }\) for each \(n\in \mathbb{N}\); since the zeroes of a nonzero entire function are isolated, we may pick \(z_0\) very close to \(0\) with \(g\left( z_0 \right)\neq 0\). We have by Jensen’s formula that for any \(R>\left\lvert z_0 \right\rvert\), \[\frac{1}{2\pi}\int _{0}^{2\pi}\log \left\lvert g\left( 2Re ^{i\theta} +z_0\right) \right\rvert d\theta = \log\left\lvert g\left( z_0 \right) \right\rvert + \sum _{\zeta} \log \frac{2R}{\left\lvert \zeta-z_0 \right\rvert},\] where the sum ranges over the zeroes \(\zeta\) of \(g\) satisfying \(\left\lvert \zeta-z_0 \right\rvert<2R\), counted with multiplicity.

On one hand, we have that \[\log \left\lvert g\left( 2R e ^{i\theta}+z_0 \right) \right\rvert \leq\log 2 + \left\lvert 2R e ^{i\theta}+z_0 \right\rvert^2 \leq\log 2 + \left( 2R+\left\lvert z_0 \right\rvert \right)^2.\] On the other hand, we have \(\left\lvert n ^{\frac{1}{3} }-z_0 \right\rvert \leq R\) whenever \(n ^{\frac{1}{3} }\leq R-\left\lvert z_0 \right\rvert\), that is, whenever \(n\leq \left( R-\left\lvert z_0 \right\rvert \right)^3\); so there are at least \(\left( R-\left\lvert z_0 \right\rvert \right)^3-1\) zeroes within the disc of radius \(R\) centred at \(z_0\), and each contributes at least \(\log \frac{2R}{R}=\log 2\) to the sum in Jensen’s formula. Thus, for the right-hand side of Jensen’s formula, we have \[\sum _{\zeta}\log \frac{2R}{\left\lvert \zeta-z_0 \right\rvert} \geq \sum _{n\leq \left( R-\left\lvert z_0 \right\rvert \right)^3}\log \frac{2R}{\left\lvert n ^{\frac{1}{3} }-z_0 \right\rvert} \geq \left( \left( R-\left\lvert z_0 \right\rvert \right)^3-1 \right)\log 2. \] Putting it all together, we get that for every \(R>\left\lvert z_0 \right\rvert\), \[2\pi \left( \log 2+\left( 2R+\left\lvert z_0 \right\rvert \right)^2 \right) \geq \log \left\lvert g\left( z_0 \right) \right\rvert+\left( \left( R-\left\lvert z_0 \right\rvert \right)^3-1 \right)\log 2.\] The right-hand side grows like \(R^3\) while the left-hand side grows like \(R^2\), so letting \(R\to\infty\) yields a contradiction. We conclude that \(g\) must have been identically zero, and thus the only function satisfying these two properties is \(f(z)=z^3\) itself. \(\square\)
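As a final sanity check (illustrative only, not part of the argument), one can verify numerically that \(f(z)=z^3\) does satisfy both properties: the interpolation condition is exact, and \(r^3e^{-r^2}\) stays below \(1\), so \(\left\lvert z^3 \right\rvert\leq e^{\left\lvert z \right\rvert^2}\):

```python
import math

# Property 2: f(n^{1/3}) = (n^{1/3})^3 = n, up to floating-point rounding.
for n in range(1, 6):
    print(n, (n ** (1 / 3)) ** 3)

# Property 1: maximise r^3 e^{-r^2} over r >= 0.  Calculus places the
# maximum at r = sqrt(3/2); its value there is well below 1.
r_star = math.sqrt(1.5)
peak = r_star ** 3 * math.exp(-r_star ** 2)
print(peak)  # roughly 0.41
```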