
Suppose \(1 < p < \infty\) and \(f \in L^p((0,\infty))\) (with respect to Lebesgue measure) is a nonnegative function, and put \[ F(x) = \frac{1}{x}\int_0^x f(t)dt, \quad 0 < x <\infty. \] Then we have Hardy's inequality \(\def\lrVert[#1]{\lVert #1 \rVert}\) \[ \lrVert[F]_p \leq q\lrVert[f]_p, \] where \(\frac{1}{p}+\frac{1}{q}=1\) as usual.
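Hardy's inequality can be sanity-checked numerically; this is an illustration, not part of the proof. Below is a minimal sketch with ad hoc names, taking \(f(t)=e^{-t}\) and \(p=q=2\), for which \(F(x)=(1-e^{-x})/x\):

```python
import math

# Numerical check of Hardy's inequality for f(t) = exp(-t), p = q = 2.
# Here F(x) = (1/x) * integral_0^x exp(-t) dt = (1 - exp(-x)) / x.
def F(x):
    return (1.0 - math.exp(-x)) / x

# Crude midpoint rule on (0, 60]; the tails beyond 60 are negligible here.
N = 200_000
h = 60.0 / N
int_Fp = sum(F((k + 0.5) * h) ** 2 for k in range(N)) * h          # ~ ||F||_2^2
int_fp = sum(math.exp(-2 * (k + 0.5) * h) for k in range(N)) * h   # ~ ||f||_2^2

lhs = int_Fp ** 0.5       # ||F||_2
rhs = 2 * int_fp ** 0.5   # q * ||f||_2 with q = 2
assert lhs <= rhs
```

Truncating the upper limit only shrinks the left-hand side, so the check is conservative.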

There are several ways to prove it, and I think there are good
reasons to write them down thoroughly, since that may be why you found
this page. Maybe you are burnt out because the proof was *left as an exercise*.
You are assumed to have enough knowledge of Lebesgue measure and
integration.

Let \(S_1,S_2 \subset \mathbb{R}\) be two measurable sets and suppose \(F:S_1 \times S_2 \to \mathbb{R}\) is measurable. Then Minkowski's integral inequality states that \[ \left[\int_{S_2} \left\vert\int_{S_1}F(x,y)dx \right\vert^pdy\right]^{\frac{1}{p}} \leq \int_{S_1} \left[\int_{S_2} |F(x,y)|^p dy\right]^{\frac{1}{p}}dx. \] A proof can be found here (Example A9); you may need to replace all measures with the Lebesgue measure \(m\).
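With counting measure on finite sets, this inequality reduces to the triangle inequality in \(\ell^p\); a quick randomized spot-check (names are ad hoc, illustration only):

```python
import random

# Discrete analogue: ( sum_y | sum_x F(x,y) |^p )^(1/p)
#                    <= sum_x ( sum_y |F(x,y)|^p )^(1/p)
random.seed(3)
p = 2.0
NX, NY = 30, 20
F = [[random.uniform(-1, 1) for _ in range(NY)] for _ in range(NX)]  # F[x][y]

lhs = sum(abs(sum(F[x][y] for x in range(NX))) ** p for y in range(NY)) ** (1 / p)
rhs = sum(sum(abs(F[x][y]) ** p for y in range(NY)) ** (1 / p) for x in range(NX))
assert lhs <= rhs + 1e-9
```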

Now let's get into it. We apply the integral inequality above to the function \(G(x,t)=\frac{f(t)}{x}\), after the change of variable \(t=ux\). We see \[ \begin{aligned} \lrVert[F]_p &= \left[\int_0^\infty \left\vert \int_0^x \frac{f(t)}{x}dt \right\vert^p dx\right]^{\frac{1}{p}} \\ &= \left[\int_0^\infty \left\vert \int_0^1 f(ux)du \right\vert^p dx\right]^{\frac{1}{p}} \\ &\leq \int_0^1 \left[\int_0^\infty |f(ux)|^pdx\right]^{\frac{1}{p}}du \\ &= \int_0^1 \left[\int_0^\infty |f(ux)|^pudx\right]^{\frac{1}{p}}u^{-\frac{1}{p}}du \\ &= \lrVert[f]_p \int_0^1 u^{-\frac{1}{p}}du \\ &=q\lrVert[f]_p. \end{aligned} \] Note we have used a change of variable twice (\(t=ux\) first, then \(v=ux\) in the inner integral) and the integral inequality once.

I have no idea how people came up with this solution. Write \(xF(x)=\int_0^x f(t)t^{u}t^{-u}dt\), where \(0<u<1-\frac{1}{p}\) (so that \(uq<1\)). Hölder's inequality gives us \[ \begin{aligned} xF(x) &= \int_0^x f(t)t^ut^{-u}dt \\ &\leq \left[\int_0^x t^{-uq}dt\right]^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}} \\ &=\left(\frac{1}{1-uq}x^{1-uq}\right)^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}}. \end{aligned} \] Hence \[ \begin{aligned} F(x)^p & \leq \frac{1}{x^p}\left\{\left(\frac{1}{1-uq}x^{1-uq}\right)^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}}\right\}^{p} \\ &= \left(\frac{1}{1-uq}\right)^{\frac{p}{q}}x^{\frac{p}{q}(1-uq)-p}\int_0^x f(t)^pt^{up}dt \\ &= \left(\frac{1}{1-uq}\right)^{p-1}x^{-up-1}\int_0^x f(t)^pt^{up}dt. \end{aligned} \]

Note we have used the fact that \(\frac{1}{p}+\frac{1}{q}=1 \implies p+q=pq\) and \(\frac{p}{q}=p-1\). Fubini's theorem gives us the final answer: \[ \begin{aligned} \int_0^\infty F(x)^pdx &\leq \int_0^\infty\left[\left(\frac{1}{1-uq}\right)^{p-1}x^{-up-1}\int_0^x f(t)^pt^{up}dt\right]dx \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\int_0^\infty dx\int_0^x f(t)^pt^{up}x^{-up-1}dt \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\int_0^\infty dt\int_t^\infty f(t)^pt^{up}x^{-up-1}dx \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\frac{1}{up}\int_0^\infty f(t)^pdt. \end{aligned} \] It remains to find the minimum of \(\varphi(u) = \left(\frac{1}{1-uq}\right)^{p-1}\frac{1}{up}\). This is an elementary calculus problem: by taking the derivative, we see that at \(u=\frac{1}{pq}<1-\frac{1}{p}\) it attains its minimum \(\left(\frac{p}{p-1}\right)^p=q^p\). Hence we get \[ \int_0^\infty F(x)^pdx \leq q^p\int_0^\infty f(t)^pdt, \] which is exactly what we want. Note the constant \(q\) cannot be replaced with a smaller one. We have only proved the case \(f \geq 0\); for the general case, simply apply the result to \(|f|\).
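The minimization of \(\varphi(u)\) can also be checked numerically for a concrete pair, say \(p=3\), \(q=3/2\); a sketch with ad hoc names:

```python
# Grid search for the minimum of phi(u) = (1/(1-uq))^(p-1) / (up)
# on the admissible interval 0 < u < 1 - 1/p; expect u = 1/(pq), min = q^p.
p, q = 3.0, 1.5   # 1/p + 1/q = 1

def phi(u):
    return (1.0 / (1.0 - u * q)) ** (p - 1) / (u * p)

us = [k / 10_000 * (1 - 1 / p) for k in range(1, 10_000)]
u_best = min(us, key=phi)

assert abs(u_best - 1 / (p * q)) < 1e-3     # minimizer is 1/(pq)
assert abs(phi(u_best) - q ** p) < 1e-4     # minimum value is q^p
```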

This approach makes use of properties of \(L^p\) spaces. Still we assume that \(f \geq 0\), but we also assume \(f \in C_c((0,\infty))\), that is, \(f\) is continuous and has compact support. Hence \(F\) is differentiable in this situation. Integration by parts gives \[ \int_0^\infty F^p(x)dx=xF(x)^p\vert_0^\infty- p\int_0^\infty xdF^p = -p\int_0^\infty xF^{p-1}(x)F'(x)dx. \] Note that since \(f\) has compact support, there is some \([a,b]\) with \(0<a\leq b<\infty\) outside which \(f\) vanishes; hence \(F(x)=0\) for \(x<a\), while for \(x>b\) we have \(xF(x)^p=x^{1-p}\left(\int_a^b f(t)dt\right)^p \to 0\) as \(x \to \infty\) since \(p>1\). Therefore \(xF(x)^p\vert_0^\infty=0\). Next it is natural to take a look at \(F'(x)\). Note we have \[ F'(x) = \frac{f(x)}{x}-\frac{\int_0^x f(t)dt}{x^2}, \] hence \(xF'(x)=f(x)-F(x)\). A substitution gives us \[ \int_0^\infty F^p(x)dx = -p\int_0^\infty F^{p-1}(x)[f(x)-F(x)]dx, \] which is equivalent to saying \[ \int_0^\infty F^p(x)dx = \frac{p}{p-1}\int_0^\infty F^{p-1}(x)f(x)dx. \] Hölder's inequality gives us \[ \begin{aligned} \int_0^\infty F^{p-1}(x)f(x)dx &\leq \left[\int_0^\infty F^{(p-1)q}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}} \\ &=\left[\int_0^\infty F^{p}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}}. \end{aligned} \] Together with the identity above we get \[ \int_0^\infty F^p(x)dx \leq q\left[\int_0^\infty F^{p}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}}, \] which is exactly what we want since \(1-\frac{1}{q}=\frac{1}{p}\): all we need to do is divide both sides by \(\left[\int_0^\infty F^pdx\right]^{1/q}\), which is finite in this situation. So what's next? Note \(C_c((0,\infty))\) is dense in \(L^p((0,\infty))\). For any \(f \in L^p((0,\infty))\), we can take a sequence of functions \(f_n \in C_c((0,\infty))\) such that \(f_n \to f\) with respect to the \(L^p\)-norm. Taking \(F=\frac{1}{x}\int_0^x f(t)dt\) and \(F_n = \frac{1}{x}\int_0^x f_n(t)dt\), we need to show that \(F_n \to F\) pointwise, so that we can use Fatou's lemma.
Given \(\varepsilon>0\), there exists some \(N\) such that \(\lrVert[f_n-f]_p < \varepsilon\) for all \(n > N\). Thus, for \(n > N\), \[ \begin{aligned} |F_n(x)-F(x)| &= \frac{1}{x}\left\vert \int_0^x f_n(t)dt - \int_0^x f(t)dt \right\vert \\ &\leq \frac{1}{x} \int_0^x |f_n(t)-f(t)|dt \\ &\leq \frac{1}{x} \left[\int_0^x|f_n(t)-f(t)|^pdt\right]^{\frac{1}{p}}\left[\int_0^x 1^qdt\right]^{\frac{1}{q}} \\ &=\frac{1}{x^{1/p}}\left[\int_0^x|f_n(t)-f(t)|^pdt\right]^{\frac{1}{p}} \\ &\leq \frac{1}{x^{1/p}}\lrVert[f_n-f]_p <\frac{\varepsilon}{x^{1/p}}. \end{aligned} \] Hence \(F_n \to F\) pointwise, which also implies that \(|F_n|^p \to |F|^p\) pointwise. For \(|F_n|\) we have \[ \begin{aligned} \int_0^\infty |F_n(x)|^pdx &= \int_0^\infty \left\vert\frac{1}{x}\int_0^x f_n(t)dt\right\vert^p dx \\ &\leq \int_0^\infty \left[\frac{1}{x}\int_0^x |f_n(t)|dt\right]^{p}dx \\ &\leq q^p\int_0^\infty |f_n(t)|^pdt; \end{aligned} \] note the last inequality holds since we have already proved Hardy's inequality for nonnegative functions, applied here to \(|f_n|\). By Fatou's lemma, we have \[ \begin{aligned} \int_0^\infty |F(x)|^pdx &= \int_0^\infty \lim_{n \to \infty}|F_n(x)|^pdx \\ &\leq \liminf_{n \to \infty} \int_0^\infty |F_n(x)|^pdx \\ &\leq \liminf_{n \to \infty}q^p\int_0^\infty |f_n(x)|^pdx \\ &=q^p\int_0^\infty |f(x)|^pdx. \end{aligned} \]

Throughout, let \((X,\mathfrak{M},\mu)\) be a measure space where \(\mu\) is positive.

If \(f\) is in \(L^p(\mu)\), which means \(\lVert f \rVert_p=\left(\int_X |f|^p
d\mu\right)^{1/p}<\infty\), or equivalently \(\int_X |f|^p d\mu<\infty\), then
\(|f|^p\) is in \(L^1(\mu)\). In other words, we have a
function \[
\begin{aligned}
\lambda: L^p(\mu) &\to L^1(\mu) \\
f &\mapsto |f|^p.
\end{aligned}
\] This function does not have to be one-to-one because of the absolute
value. But we hope this function is *nice* enough; at the very
least, we hope it is continuous.

Here, \(f \sim g\) means that \(f-g\) equals \(0\) almost everywhere with respect to \(\mu\). It can be easily verified that this is an equivalence relation.

We still use the \(\varepsilon-\delta\) argument but it's in a metric space. Suppose \((X,d_1)\) and \((Y,d_2)\) are two metric spaces and \(f:X \to Y\) is a function. We say \(f\) is continuous at \(x_0 \in X\) if, for any \(\varepsilon>0\), there exists some \(\delta>0\) such that \(d_2(f(x_0),f(x))<\varepsilon\) whenever \(d_1(x_0,x)<\delta\). Further, we say \(f\) is continuous on \(X\) if \(f\) is continuous at every point \(x \in X\).

For \(1\leq p<\infty\), we already have a metric given by \[ d(f,g)=\lVert f-g \rVert_p, \] noting that \(d(f,g)=0\) if and only if \(f \sim g\). This metric is complete and makes \(L^p\) a Banach space. But for \(0<p<1\) (yes, we are going to cover that), things are much different, for one reason in particular: Minkowski's inequality holds in reverse! In fact, for \(0<p<1\) and nonnegative \(f\) and \(g\) we have \[ \lVert f+g \rVert_p \geq \lVert f \rVert_p + \lVert g \rVert_p. \] The space \(L^p\) has many strange features when \(0<p<1\). Precisely,
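A minimal discrete illustration of the reversed inequality, on a two-point space with counting measure and nonnegative \(f,g\) (names are ad hoc):

```python
# Reverse Minkowski for 0 < p < 1 on a two-point measure space:
# with f = (1, 0) and g = (0, 1), ||f+g||_p = 2^{1/p} >= 2 = ||f||_p + ||g||_p.
p = 0.5
f, g = [1.0, 0.0], [0.0, 1.0]

def norm_p(h):
    return sum(abs(v) ** p for v in h) ** (1 / p)

s = [a + b for a, b in zip(f, g)]
assert norm_p(s) >= norm_p(f) + norm_p(g)
assert norm_p(s) == 4.0   # 2^{1/0.5} = 4, versus ||f||_p + ||g||_p = 2
```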

For \(0<p<1\), \(L^p(\mu)\) is locally convex if and only if \(\mu\) assumes finitely many values. (Proof.)

On the other hand, if for example \(X=[0,1]\) and \(\mu=m\) is the Lebesgue
measure, then \(L^p(\mu)\) has *no* open convex
subsets other than \(\varnothing\) and
\(L^p(\mu)\) itself. However,

A topological vector space \(X\) is normable if and only if its origin has a convex bounded neighbourhood. (See Kolmogorov's normability criterion.)

Therefore \(L^p(m)\) is not normable, hence it is not a Banach space.

We have gone too far. We need a metric that is fine enough.

Define \[ \Delta(f)=\int_X |f|^p d\mu \] for \(f \in L^p(\mu)\). We will show that \[ d(f,g)=\Delta(f-g) \] defines a metric. Fix \(y\geq 0\) and consider the function \[ f(x)=(x+y)^p-x^p. \] We have \(f(0)=y^p\) and \[ f'(x)=p(x+y)^{p-1}-px^{p-1} \leq px^{p-1}-px^{p-1}=0 \] when \(x > 0\) (since \(p-1<0\)), hence \(f(x)\) is nonincreasing on \([0,\infty)\), which implies that \[ (x+y)^p \leq x^p+y^p. \] Hence for any \(f\), \(g \in L^p\), we have \[ \Delta(f+g)=\int_X |f+g|^p d\mu \leq \int_X |f|^p d\mu + \int_X |g|^p d\mu=\Delta(f)+\Delta(g). \] This inequality ensures that \[ d(f,g)=\Delta(f-g) \] is a metric. It's immediate that \(d(f,g)=d(g,f) \geq 0\) for all \(f\), \(g \in L^p(\mu)\), with \(d(f,g)=0\) if and only if \(f \sim g\). For the triangle inequality, note that \[ d(f,h)+d(g,h)=\Delta(f-h)+\Delta(h-g) \geq \Delta((f-h)+(h-g))=\Delta(f-g)=d(f,g). \] It is translate-invariant as well, since \[ d(f+h,g+h)=\Delta(f+h-g-h)=\Delta(f-g)=d(f,g). \] The completeness can be verified in the same way as in the case \(p>1\). In fact, this metric makes \(L^p\) a locally bounded \(F\)-space.
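The elementary inequality \((x+y)^p \leq x^p+y^p\) for \(0<p<1\) is easy to spot-check at random (illustration only):

```python
import random

# Check (x + y)^p <= x^p + y^p for 0 < p < 1 on random nonnegative inputs.
random.seed(0)
for _ in range(1_000):
    p = random.uniform(0.05, 0.95)
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    assert (x + y) ** p <= x ** p + y ** p + 1e-12
```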

The metric of \(L^1\) is defined by \[ d_1(f,g)=\lVert f-g \rVert_1=\int_X |f-g|d\mu. \] We need to find a relation between \(d_p(f,g)\) and \(d_1(\lambda(f),\lambda(g))\), where \(d_p\) is the metric of the corresponding \(L^p\) space.

As we have proved, \[ (x+y)^p \leq x^p+y^p. \] Without loss of generality we assume \(x \geq y\), and therefore \[ x^p=(x-y+y)^p \leq (x-y)^p+y^p. \] Hence \[ x^p-y^p \leq (x-y)^p. \] By interchanging \(x\) and \(y\), we get \[ |x^p-y^p| \leq |x-y|^p. \] Replacing \(x\) and \(y\) with \(|f|\) and \(|g|\) where \(f\), \(g \in L^p\), we get \[ \int_{X}\lvert |f|^p-|g|^p \rvert d\mu \leq \int_X |f-g|^p d\mu. \] But \[ d_1(\lambda(f),\lambda(g))=\int_{X}\lvert |f|^p-|g|^p \rvert d\mu \leq \int_X |f-g|^p d\mu = \Delta(f-g) = d_p(f,g), \] and we therefore have \[ d_1(\lambda(f),\lambda(g)) \leq d_p(f,g). \] Hence \(\lambda\) is continuous (in fact, Lipschitz continuous and hence uniformly continuous) when \(0<p<1\).

It's natural to think about Minkowski's inequality and Hölder's inequality in this case, since they are the critical inequalities here; it takes some practice to see how to create the conditions to apply them and obtain a good result. In this section we need to prove that \[ |x^p-y^p| \leq p|x-y|(x^{p-1}+y^{p-1}). \] This inequality is surprisingly easy to prove: we will use nothing but the mean value theorem. Without loss of generality, assume that \(x > y \geq 0\) and define \(f(t)=t^p\). Then \[ \frac{f(x)-f(y)}{x-y}=f'(\zeta)=p\zeta^{p-1} \] where \(y < \zeta < x\). But since \(p-1 \geq 0\), we see \(\zeta^{p-1} \leq x^{p-1} \leq x^{p-1}+y^{p-1}\). Therefore \[ f(x)-f(y)=x^p-y^p=p(x-y)\zeta^{p-1}\leq p(x-y)(x^{p-1}+y^{p-1}). \] For \(x=y\) the equality holds.
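Again, a randomized spot-check of the inequality just proved (illustration only):

```python
import random

# Check |x^p - y^p| <= p * |x - y| * (x^{p-1} + y^{p-1}) for p >= 1.
random.seed(1)
for _ in range(1_000):
    p = random.uniform(1.0, 4.0)
    x, y = random.uniform(0, 5), random.uniform(0, 5)
    lhs = abs(x ** p - y ** p)
    rhs = p * abs(x - y) * (x ** (p - 1) + y ** (p - 1))
    assert lhs <= rhs + 1e-9
```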

Therefore \[
\begin{aligned}
d_1(\lambda(f),\lambda(g)) &= \int_X \left||f|^p-|g|^p\right|d\mu \\
&\leq
\int_X p\left||f|-|g|\right|(|f|^{p-1}+|g|^{p-1})d\mu.
\end{aligned}
\] By *Hölder's inequality*, we have \[
\begin{aligned}
\int_X \left||f|-|g|\right|(|f|^{p-1}+|g|^{p-1})d\mu & \leq \left[\int_X
\left||f|-|g|\right|^pd\mu\right]^{1/p}\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q}
\\
&\leq \left[\int_X
\left|f-g\right|^pd\mu\right]^{1/p}\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q}
\\
&=\lVert f-g \rVert_p
\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q}.
\end{aligned}
\] By *Minkowski's inequality*, we have \[
\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q} \leq
\left[\int_X|f|^{(p-1)q}d\mu\right]^{1/q}+\left[\int_X
|g|^{(p-1)q}d\mu\right]^{1/q}.
\] Now things are clear. Since \(1/p+1/q=1\), or equivalently \(1/q=(p-1)/p\), we have \((p-1)q=p\). Suppose \(\lVert f \rVert_p \leq R\) and \(\lVert g \rVert_p \leq R\); then \[
\left[\int_X|f|^{(p-1)q}d\mu\right]^{1/q}+\left[\int_X
|g|^{(p-1)q}d\mu\right]^{1/q} = \lVert f \rVert_p^{p-1}+\lVert g
\rVert_p^{p-1} \leq 2R^{p-1}.
\] Combining the inequalities above, we get \[
\begin{aligned}
d_1(\lambda(f),\lambda(g)) \leq 2pR^{p-1}\lVert f-g \rVert_p
=2pR^{p-1}d_p(f,g),
\end{aligned}
\] hence \(\lambda\) is continuous (it is Lipschitz on every bounded subset of \(L^p(\mu)\)).

We have proved that \(\lambda\) is continuous, and when \(0<p<1\) we have seen that \(\lambda\) is Lipschitz continuous. It's natural to think about its differentiability afterwards, but the absolute value function is not even differentiable, so we may have no chance. Still, this is a fine enough result. For example, we have imposed no restriction on \((X,\mathfrak{M},\mu)\) other than the positivity of \(\mu\); we may therefore take \(\mathbb{R}^n\) with the Lebesgue measure here, or something else entirely.

It's also interesting how we can use elementary calculus to solve much more abstract problems.

*(Before everything: elementary background in topology and vector
spaces, in particular Banach spaces, is assumed.)*

We can define several relations between two norms. Suppose we have a
topological vector space \(X\) and two
norms \(\lVert \cdot \rVert_1\) and
\(\lVert \cdot \rVert_2\). One says
\(\lVert \cdot \rVert_1\) is
*weaker* than \(\lVert \cdot
\rVert_2\) if there is \(K>0\) such that \(\lVert x \rVert_1 \leq K \lVert x
\rVert_2\) for all \(x \in X\).
Two norms are *equivalent* if each is weaker than the other
(trivially this is an equivalence relation). The idea of stronger and
weaker norms is related to the idea of the "finer" and "coarser"
topologies in the setting of topological spaces.

So what about limits? Unsurprisingly this can be verified with elementary \(\varepsilon\)-\(N\) arguments. Suppose \(\lVert \cdot \rVert_2\) is weaker than \(\lVert \cdot \rVert_1\), say \(\lVert x \rVert_2 \leq K\lVert x \rVert_1\), and suppose \(\lVert x_n - x \rVert_1 \to 0\) as \(n \to \infty\). We immediately have

\[ \lVert x_n - x \rVert_2 \leq K \lVert x_n-x \rVert_1 < K\varepsilon \]

for all large enough \(n\). Hence \(\lVert x_n - x \rVert_2 \to 0\) as well. But what about the converse? We introduce a weaker relation between norms.

(Definition) Two norms \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) of a topological vector space are *compatible* if, given that \(\lVert x_n - x \rVert_1 \to 0\) and \(\lVert x_n - y \rVert_2 \to 0\) as \(n \to \infty\), we have \(x=y\).

By the uniqueness of limit, we see if two norms are equivalent, then they are compatible. And surprisingly, with the help of the closed graph theorem we will discuss in this post, we have

(Theorem 1) If \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) are compatible, and both \((X,\lVert\cdot\rVert_1)\) and \((X,\lVert\cdot\rVert_2)\) are Banach, then \(\lVert\cdot\rVert_1\) and \(\lVert\cdot\rVert_2\) are equivalent.

This result looks natural but is not so easy to prove, since one finds no obvious way to build a bridge between limits and a norm inequality. Before that, we need to set up some terminology.

(Definition) For \(f:X \to Y\), the *graph* of \(f\) is defined by \[ G(f)=\{(x,f(x)) \in X \times Y:x \in X\}. \]

If both \(X\) and \(Y\) are topological spaces, and the topology of \(X \times Y\) is the usual one, that is, the smallest topology that contains all sets \(U \times V\) where \(U\) and \(V\) are open in \(X\) and \(Y\) respectively, and if \(f: X \to Y\) is continuous, it is natural to expect \(G(f)\) to be closed. For example, by taking \(f(x)=x\) and \(X=Y=\mathbb{R}\), one would expect the diagonal line of the plane to be closed.

(Definition) The topological vector space \((X,\tau)\) is an \(F\)-space if \(\tau\) is induced by a complete invariant metric \(d\). Here invariant means that \(d(x+z,y+z)=d(x,y)\) for all \(x,y,z \in X\).

A Banach space is easily verified to be an \(F\)-space by defining \(d(x,y)=\lVert x-y \rVert\).

(Open mapping theorem) See this post.

By the definition of a closed set, we have a practical criterion on whether \(G(f)\) is closed.

(Proposition 1) \(G(f)\) is closed if and only if, for any sequence \((x_n)\) such that the limits \[ x=\lim_{n \to \infty}x_n \quad \text{ and }\quad y=\lim_{n \to \infty}f(x_n) \] exist, we have \(y=f(x)\).

In this case, we say \(f\) is closed. For continuous functions, things are trivial.

(Proposition 2) If \(X\) and \(Y\) are two topological spaces and \(Y\) is Hausdorff, and \(f:X \to Y\) is continuous, then \(G(f)\) is closed.

*Proof.* Let \(G^c\) be the complement of \(G(f)\) with respect to \(X \times Y\). Fix \((x_0,y_0) \in G^c\); then \(y_0 \neq f(x_0)\). By the Hausdorff property of \(Y\), there exist open subsets \(U \subset Y\) and \(V \subset Y\) such that \(y_0 \in U\), \(f(x_0) \in V\), and \(U \cap V = \varnothing\). Since \(f\) is continuous, \(W=f^{-1}(V)\) is open in \(X\). We obtain an open neighborhood \(W \times U\) containing \((x_0,y_0)\) which has empty intersection with \(G(f)\). That is, every point of \(G^c\) has an open neighborhood contained in \(G^c\), hence is an interior point. Therefore \(G^c\) is open, which is to say that \(G(f)\) is closed. \(\square\)

**REMARKS.** For \(X \times Y=\mathbb{R} \times \mathbb{R}\), we have a simple visualization.
For \(\varepsilon>0\), there exists some \(\delta\) such that \(|f(x)-f(x_0)|<\varepsilon\) whenever
\(|x-x_0|<\delta\). For \(y_0 \neq f(x_0)\), pick \(\varepsilon\) such that \(0<\varepsilon<\frac{1}{2}|f(x_0)-y_0|\);
we then have two boxes, namely

\[ B_1=\{(x,y):x_0-\delta<x<x_0+\delta,f(x_0)-\varepsilon<y<f(x_0)+\varepsilon\} \]

and

\[ B_2=\{(x,y):x_0-\delta<x<x_0+\delta,y_0-\varepsilon<y<y_0+\varepsilon\}. \]

In this case, \(B_2\) will not intersect the graph of \(f\), hence \((x_0,y_0)\) is an interior point of \(G^c\).

The Hausdorff property of \(Y\) is not removable. To see this, since \(X\) has no restriction, it suffices to take a look at \(X \times X\). Let \(f\) be the identity map (which is continuous); then the graph

\[ G(f)=\{(x,x):x \in X\} \]

is the diagonal. Suppose \(X\) is not Hausdorff; we reach a contradiction. By definition, there exist distinct \(x\) and \(y\) that cannot be separated by disjoint open sets: every pair of neighborhoods \(U \ni x\) and \(V \ni y\) intersects. Pick \((x,y) \in G^c\); then every basic neighborhood \(U \times V\) of \((x,y)\) in \(X \times X\) contains a point \((z,z)\) with \(z \in U \cap V\), so \((x,y)\) is *not* an interior point of \(G^c\), hence \(G^c\) is not open.

Also, as an immediate consequence, every affine algebraic variety in
\(\mathbb{C}^n\) and \(\mathbb{R}^n\) is closed with respect to the
Euclidean topology. Further, we have the Zariski topology \(\mathcal{Z}\), obtained by declaring that if \(V\) is an affine algebraic variety, then
\(V^c \in \mathcal{Z}\). It's worth
noting that \(\mathcal{Z}\) is
*not* Hausdorff (on the affine line, proper Zariski-closed sets are finite, so any two nonempty open sets intersect) and in fact much coarser than the
Euclidean topology, although an affine algebraic variety is closed
in both the Zariski topology and the Euclidean topology.

With this theorem in hand, we will be able to prove the theorem about compatible norms. We shall assume that both \(X\) and \(Y\) are \(F\)-spaces, since the norm plays no critical role here. This offers greater generality but should not be considered an abuse of abstraction.

(The Closed Graph Theorem) Suppose

- \(X\) and \(Y\) are \(F\)-spaces,
- \(f:X \to Y\) is linear,
- \(G(f)\) is closed in \(X \times Y\).

Then \(f\) is continuous.

In short, the closed graph theorem gives a sufficient condition to claim the continuity of \(f\) (keep in mind, linearity does not imply continuity). If \(f:X \to Y\) is continuous, then \(G(f)\) is closed; if \(G(f)\) is closed and \(f\) is linear, then \(f\) is continuous.

*Proof.* First of all we should make \(X \times Y\) an \(F\)-space by assigning addition, scalar
multiplication and a metric. Addition and scalar multiplication are
defined componentwise, as one would expect:

\[ \alpha(x_1,y_1)+\beta(x_2,y_2)=(\alpha x_1+\beta x_2,\alpha y_1 + \beta y_2). \]

The metric can be defined without extra effort:

\[ d((x_1,y_1),(x_2,y_2))=d_X(x_1,x_2)+d_Y(y_1,y_2). \]

Then it can be verified that \(X \times Y\) is a topological vector space with a complete translate-invariant metric, i.e. an \(F\)-space. (The verifications are routine and recommended as an exercise.)

Since \(f\) is linear, the graph \(G(f)\) is a subspace of \(X \times Y\). Next we quote an elementary result from point-set topology: a subset of a complete metric space is closed if and only if it is complete. By the translate-invariance of \(d\), we see \(G(f)\) is an \(F\)-space as well. Let \(p_1: X \times Y \to X\) and \(p_2: X \times Y \to Y\) be the natural projections (for example, \(p_1(x,y)=x\)). Our proof is done by verifying the properties of \(p_1\) and \(p_2\) on \(G(f)\).

*For simplicity one can simply define \(p_1\) on \(G(f)\) instead of the whole space \(X \times Y\), but we make it a global
projection on purpose to emphasize the difference between global
and local properties. One can also write \(p_1|_{G(f)}\) to avoid confusion.*

**Claim 1.** \(p_1\)
(with restriction on \(G(f)\)) defines
an isomorphism between \(G(f)\) and
\(X\).

For \(x \in X\), we see \(p_1(x,f(x)) = x\) (surjectivity). If \(p_1(x,f(x))=0\), then \(x=0\) and therefore \((x,f(x))=(0,0)\) by linearity; hence the restriction of \(p_1\) to \(G(f)\) has trivial kernel (injectivity). Further, it's trivial that \(p_1\) is linear.

**Claim 2.** \(p_1\) is
continuous on \(G(f)\).

Take a sequence \((x_n,f(x_n))\) in \(G(f)\) converging to some \((x,y) \in G(f)\) (recall \(G(f)\) is closed, so \(y=f(x)\)). By the definition of the product metric, \(x_n \to x\), that is, \(p_1(x_n,f(x_n)) \to x = p_1(x,f(x))\). The continuity of \(p_1\) on \(G(f)\) is proved.

**Claim 3.** \(p_1\) is
a homeomorphism with restriction on \(G(f)\).

We already know that \(G(f)\) is an \(F\)-space, and so is \(X\). We have \(p_1(G(f))=X\), which is of the second category in itself by the Baire category theorem (being a complete metric space), and \(p_1\) is continuous and linear on \(G(f)\). By the open mapping theorem, \(p_1\) is an open mapping from \(G(f)\) onto \(X\); being also one-to-one, it is a homeomorphism.

**Claim 4.** \(p_2\) is
continuous.

This follows in the same way as the proof of claim 2, but it is even easier since there is no need to worry about \(f\).

Now things are immediate once one realises that \(f=p_2 \circ p_1|_{G(f)}^{-1}\), which implies that \(f\) is continuous. \(\square\)

Before we go back to theorem 1 stated at the beginning, we mention an application to Hilbert spaces.

Let \(T\) be a bounded operator on the Hilbert space \(L_2([0,1])\) such that if \(\phi \in L_2([0,1])\) is a continuous function, then so is \(T\phi\). Then the restriction of \(T\) to \(C([0,1])\) is a bounded operator on \(C([0,1])\).

For details please check this.

Now we return to the comparison of norms. Define

\[ \begin{aligned} f:(X,\lVert\cdot\rVert_1) &\to (X,\lVert\cdot\rVert_2) \\ x &\mapsto x \end{aligned} \]

i.e. the identity map between two Banach spaces (hence \(F\)-spaces). Then \(f\) is linear. We need to prove that \(G(f)\) is closed. Suppose

\[ \lim_{n \to \infty}\lVert x_n -x \rVert_1=0 \quad\text{and}\quad \lim_{n \to \infty}\lVert f(x_n)-y \rVert_2=\lim_{n \to \infty}\lVert x_n -y\rVert_2=0. \]

Since the two norms are compatible, we get \(x=y=f(x)\). Hence \(G(f)\) is closed. Therefore \(f\) is continuous, hence bounded, and we have some \(K\) such that

\[ \lVert x \rVert_2 =\lVert f(x) \rVert_2 \leq K \lVert x \rVert_1. \]

By defining

\[ \begin{aligned} g:(X,\lVert\cdot\rVert_2) &\to (X,\lVert\cdot\rVert_1) \\ x &\mapsto x \end{aligned} \]

we see \(g\) is continuous as well, hence we have some \(K'\) such that

\[ \lVert x \rVert_1 =\lVert g(x) \rVert_1 \leq K'\lVert x \rVert_2. \]

Hence the two norms are each weaker than the other, i.e. equivalent. \(\square\)

Since there is no strong reason to write more posts on this topic, i.e. the three fundamental theorems of linear functional analysis, I think it's time to make a list of the series. It's been around half a year.

- The Big Three Pt. 1 - Baire Category Theorem Explained
- The Big Three Pt. 2 - The Banach-Steinhaus Theorem
- The Big Three Pt. 3 - The Open Mapping Theorem (Banach Space)
- The Big Three Pt. 4 - The Open Mapping Theorem (F-Space)
- The Big Three Pt. 5 - The Hahn-Banach Theorem (Dominated Extension)
- The Big Three Pt. 6 - Closed Graph Theorem with Applications

- Walter Rudin, *Functional Analysis*
- Peter Lax, *Functional Analysis*
- Jesús Gil de Lamadrid, *Some Simple Applications of the Closed Graph Theorem*

(Gleason-Kahane-Żelazko) If \(\phi\) is a complex linear functional on a unitary Banach algebra \(A\) such that \(\phi(e)=1\) and \(\phi(x) \neq 0\) for every invertible \(x \in A\), then \[ \phi(xy)=\phi(x)\phi(y). \] Namely, \(\phi\) is a complex homomorphism.

Suppose \(A\) is a complex unitary Banach algebra and \(\phi: A \to \mathbb{C}\) is a linear functional which is not identically \(0\) (for convenience). If \[
\phi(xy)=\phi(x)\phi(y)
\] for all \(x \in A\) and \(y \in A\), then \(\phi\) is called a *complex
homomorphism* on \(A\). Note that a unitary Banach algebra (with \(e\) as
multiplicative unit) is also a ring, and so is \(\mathbb{C}\), so in this case we may say \(\phi\) is a ring homomorphism. For such
\(\phi\), we have an instant proposition:

(Proposition 0) \(\phi(e)=1\) and \(\phi(x) \neq 0\) for every invertible \(x \in A\).

*Proof.* Since \(\phi(e)=\phi(ee)=\phi(e)\phi(e)\), we have
\(\phi(e)=0\) or \(\phi(e)=1\). If \(\phi(e)=0\) however, for any \(y \in A\), we have \(\phi(y)=\phi(ye)=\phi(y)\phi(e)=0\), which
is an excluded case. Hence \(\phi(e)=1\).

For invertible \(x \in A\), note that \(\phi(xx^{-1})=\phi(x)\phi(x^{-1})=\phi(e)=1\). This can't happen if \(\phi(x)=0\). \(\square\)

The theorem reveals that Proposition \(0\) actually characterizes the complex homomorphisms (ring-homomorphisms) among the linear functionals (group-homomorphisms).

This theorem was proved by Andrew M. Gleason in 1967 and later independently by J.-P. Kahane and W. Żelazko in 1968. Both of them worked mainly on commutative Banach algebras, and the non-commutative version, which focused on complex homomorphism, was by W. Żelazko. In this post we will follow the third one.

Unfortunately, one cannot find an educational proof on the Internet with ease, which may be the reason why I write this post and why you read this.

Following definitions of Banach algebra and some logic manipulation, we have several equivalences worth noting.

(Stated by Gleason) Let \(M\) be a linear subspace of codimension one in a commutative Banach algebra \(A\) having an identity. Suppose no element of \(M\) is invertible; then \(M\) is an ideal.

(Stated by Kahane and Żelazko) A subspace \(X \subset A\) of codimension \(1\) is a maximal ideal if and only if it consists of non-invertible elements.

(Stated by Kahane and Żelazko) Let \(A\) be a commutative complex Banach algebra with unit element. Then a functional \(f \in A^\ast\) is a multiplicative linear functional if and only if \(f(x) \in \sigma(x)\) holds for all \(x \in A\).

Here \(\sigma(x)\) denotes the spectrum of \(x\).

Clearly any maximal ideal contains no invertible element (if it contained one, it would contain \(e\) and hence be the whole ring). Also note that every maximal ideal is the kernel of some complex homomorphism. Conversely, suppose the subspace \(X \subset A\) has codimension \(1\) and consists of non-invertible elements. Since \(e \notin X\), we may define a linear functional \(\phi\) with kernel \(X\) and \(\phi(e)=1\); then \(\phi(x) \in \sigma(x)\) for all \(x \in A\), since \(x-\phi(x)e \in X\) is not invertible. As we will show, such a \(\phi\) has to be a complex homomorphism.

(Lemma 0) Suppose \(A\) is a unitary Banach algebra, \(x \in A\), \(\lVert x \rVert<1\); then \(e-x\) is invertible.

This lemma can be found in any functional analysis book introducing Banach algebras.

(Lemma 1) Suppose \(f\) is an entire function of one complex variable, \(f(0)=1\), \(f'(0)=0\), and \[ 0<|f(\lambda)| \leq e^{|\lambda|} \] for all complex \(\lambda\); then \(f(\lambda)=1\) for all \(\lambda \in \mathbb{C}\).

Note that since \(f\) has no zero, there is an entire function \(g\) such that \(f=\exp(g)\), with \(g(0)=0\) and \(g'(0)=f'(0)/f(0)=0\). It can be shown that \(g=0\). Indeed, since \(|f(\lambda)|=e^{\operatorname{Re}g(\lambda)} \leq e^{|\lambda|}\), we have \(\operatorname{Re}g(\lambda) \leq |\lambda|\). If we put \[ h_r(\lambda) = \frac{r^2g(\lambda)}{\lambda^2[2r-g(\lambda)]}, \] then \(h_r\) is holomorphic in the open disk centred at \(0\) with radius \(2r\) (the numerator vanishes to second order at \(0\), and \(\operatorname{Re}(2r-g)>0\) there). Besides, \(|h_r(\lambda)| \leq 1\) if \(|\lambda|=r\): on that circle \(|2r-g|^2-|g|^2=4r(r-\operatorname{Re}g) \geq 0\), so \(|h_r|=|g|/|2r-g| \leq 1\). By the maximum modulus theorem, we have \[ |h_r(\lambda)| \leq 1 \] whenever \(|\lambda| \leq r\). Fix \(\lambda\) and let \(r \to \infty\); by the definition of \(h_r(\lambda)\), we must have \(g(\lambda)=0\).

A mapping \(\phi\) from one ring
\(R\) to another ring \(R'\) is said to be a **Jordan
homomorphism** from \(R\) to
\(R'\) if \[
\phi(a+b)=\phi(a)+\phi(b)
\] and \[
\phi(ab+ba)=\phi(a)\phi(b)+\phi(b)\phi(a).
\] It is of course clear that every homomorphism is Jordan. Note that
if \(R'\) is not of characteristic
\(2\), the second identity is
equivalent to \[
\phi(a^2)=\phi(a)^2.
\] *To show the equivalence, one lets \(b=a\) in the first case and puts \(a+b\) in place of \(a\) in the second case.*

Since in this case \(R=A\) and \(R'=\mathbb{C}\), the latter of which is commutative, we also write \[ \phi(ab+ba)=2\phi(a)\phi(b). \] As we will show, the \(\phi\) in the theorem is a Jordan homomorphism.

We will follow an unusual approach. By repeatedly 'downgrading' the goal, one will see this algebraic problem transformed into a pure analysis problem rather neatly.

To begin with, let \(N\) be the kernel of \(\phi\).

If \(\phi\) is a complex homomorphism, it is immediate that \(\phi\) is a Jordan homomorphism. Conversely, if \(\phi\) is Jordan, we have \[ \phi(xy+yx) =2\phi(x)\phi(y). \] If \(x\in N\), the right-hand side becomes \(0\), and therefore \[ xy+yx \in N \quad \text{if } x \in N, y \in A. \] Consider the identity \[ (xy-yx)^2+(xy+yx)^2=2[x(yxy)+(yxy)x]. \]

Therefore \[ \begin{aligned} \phi((xy-yx)^2+(xy+yx)^2)&=\phi((xy-yx)^2)+\phi((xy+yx)^2) \\ &=\phi(xy-yx)^2+\phi(xy+yx)^2 \\ &= \phi(xy-yx)^2 \\ &=2\phi[x(yxy)+(yxy)x] \\ &=0. \end{aligned} \] Since \(x \in N\) and \(yxy \in A\), we see \(x(yxy)+(yxy)x \in N\). Therefore \(\phi(xy-yx)=0\) and \[ xy-yx \in N \] if \(x \in N\) and \(y \in A\). Further we see \[ xy-yx+xy+yx=2xy \in N \quad \text {and}\quad xy+yx-xy+yx = 2yx \in N, \] which implies that \(N\) is an ideal.
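The ring identity \((xy-yx)^2+(xy+yx)^2=2[x(yxy)+(yxy)x]\) used above can be spot-checked with random \(2\times 2\) real matrices, a noncommutative ring (helper names are ad hoc, illustration only):

```python
import random

# Verify (xy - yx)^2 + (xy + yx)^2 == 2*(x(yxy) + (yxy)x) for 2x2 matrices.
random.seed(2)
n = 2

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(n)] for i in range(n)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(n)] for i in range(n)]

x = [[random.random() for _ in range(n)] for _ in range(n)]
y = [[random.random() for _ in range(n)] for _ in range(n)]

c = sub(mul(x, y), mul(y, x))   # xy - yx
d = add(mul(x, y), mul(y, x))   # xy + yx
lhs = add(mul(c, c), mul(d, d))

yxy = mul(y, mul(x, y))
rhs = [[2 * v for v in row] for row in add(mul(x, yxy), mul(yxy, x))]

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(n) for j in range(n))
```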

For \(x,y \in A\), we have \(x \in \phi(x)e+N\) and \(y \in \phi(y)e+N\). As a result, \(xy \in \phi(x)\phi(y)e+N\) (using that \(N\) is an ideal), and therefore \[ \phi(xy)=\phi(x)\phi(y). \]

Again, if \(\phi\) is Jordan, we have \(\phi(x^2)=\phi(x)^2\) for all \(x \in A\). Conversely, suppose \(\phi(a^2)=0\) for all \(a \in N\). Any \(x \in A\) may be written as \[ x=\phi(x)e+a \] with \(a \in N\). Therefore \[ \phi(x^2)=\phi((\phi(x)e+a)^2)=\phi(x)^2+2\phi(x)\phi(a)+\phi(a^2)=\phi(x)^2, \] since \(\phi(a)=0\) and \(\phi(a^2)=0\); this shows that \(\phi\) is Jordan.

Fix \(a \in N\), assume \(\lVert a \rVert = 1\) without loss of generality, and define \[ f(\lambda)=\sum_{n=0}^{\infty}\frac{\phi(a^n)}{n!}\lambda^n \] for all complex \(\lambda\). If this function is constant (by lemma 1), we immediately have \(f''(0)=\phi(a^2)=0\). This is now purely a complex analysis problem.

Note that in the definition of \(f\), we have \[ \lvert \phi(a^n) \rvert \leq \lVert \phi \rVert \lVert a^n \rVert \leq \lVert \phi \rVert \lVert a \rVert^n=\lVert \phi \rVert. \] So once we know that the norm of \(\phi\) is finite, \(f\) is entire. Suppose, by *reductio ad absurdum*, that \(\lVert e-a \rVert < 1\) for some \(a \in N\); then by lemma 0, \(a=e-(e-a)\) is invertible, which is impossible. Hence \(\lVert e-a \rVert \geq 1\) for all \(a \in N\). On the other hand, for nonzero \(\lambda \in \mathbb{C}\) and \(a \in N\), since \(\lambda^{-1}a \in N\), we have the following inequality: \[ \begin{aligned} \lVert \lambda e-a \rVert = |\lambda|\lVert e-\lambda^{-1}a \rVert &\geq|\lambda| \\ &= |\phi(\lambda e)-\phi(a)| \\ &= |\phi(\lambda e-a)|. \end{aligned} \] Since every element of \(A\) can be written as \(\lambda e-a\) with \(\lambda \in \mathbb{C}\) and \(a \in N\), this shows that \(\phi\) is *continuous* with norm at most \(1\). The continuity of \(\phi\) is not assumed at the beginning but proved here.

For \(f\) we have some immediate facts. Since the coefficients \(\phi(a^n)/n!\) are bounded in modulus by \(\lVert \phi \rVert/n!\), \(f\) is entire, with \(f'(0)=\phi(a)=0\). Also, since \(\phi\) has norm \(1\), we have \[ |f(\lambda)|=\left|\sum_{n=0}^{\infty}\frac{\phi(a^n)}{n!}\lambda^n\right| \leq \sum_{n=0}^{\infty}\frac{|\lambda|^n}{n!}=e^{|\lambda|}. \] All we need in the end is to show that \(f(\lambda) \neq 0\) for all \(\lambda \in \mathbb{C}\).

The series \[ E(\lambda)=\exp(\lambda a)=\sum_{n=0}^{\infty}\frac{(\lambda a)^n}{n!} \] converges since \(\lVert a \rVert=1\). The continuity of \(\phi\) now shows \[ f(\lambda)=\phi(E(\lambda)). \] Note \[ E(-\lambda)E(\lambda)=\left(\sum_{n=0}^{\infty}\frac{(-\lambda a)^n}{n!}\right)\left(\sum_{n=0}^{\infty}\frac{(\lambda a)^n}{n!}\right)=e. \] Hence \(E(\lambda)\) *is* invertible for all \(\lambda \in \mathbb{C}\), and therefore \(f(\lambda)=\phi(E(\lambda)) \neq 0\). By lemma 1, \(f\) is constant, namely \(f(\lambda)=f(0)=1\). The proof is completed by reversing the steps. \(\square\)
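Though the proof is abstract, the identity \(E(-\lambda)E(\lambda)=e\) can be sanity-checked numerically in the Banach algebra of \(2\times 2\) real matrices. This is only an illustration; the matrix \(a\), the value of \(\lambda\), and the truncation order below are arbitrary choices of mine.

```python
import numpy as np

def E(lam, a, terms=60):
    """Truncated series sum_{n < terms} (lam*a)^n / n!, approximating exp(lam*a)."""
    result = np.zeros_like(a)
    term = np.eye(a.shape[0])  # the n = 0 term
    for n in range(terms):
        result = result + term
        term = term @ (lam * a) / (n + 1)
    return result

# an arbitrary element of the algebra of 2x2 matrices, with norm about 1
a = np.array([[0.0, 1.0], [0.5, 0.0]])
lam = 1.7
prod = E(-lam, a) @ E(lam, a)
print(np.allclose(prod, np.eye(2)))  # E(-lambda) inverts E(lambda)
```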

- Walter Rudin, *Real and Complex Analysis*
- Walter Rudin, *Functional Analysis*
- Andrew M. Gleason, *A Characterization of Maximal Ideals*
- J.-P. Kahane and W. Żelazko, *A Characterization of Maximal Ideals in Commutative Banach Algebras*
- W. Żelazko, *A Characterization of Multiplicative Linear Functionals in Complex Banach Algebras*
- I. N. Herstein, *Jordan Homomorphisms*

The Hahn-Banach theorem has been a central tool of functional analysis and therefore comes in a wide variety of forms, many of which have numerous uses in other fields of mathematics. Therefore it's not possible to cover all of them. In this post we cover two 'abstract enough' results, which are sometimes called the dominated extension theorems. Both will be discussed in a real vector space with no topology endowed, which allows the results to apply to any topological vector space.

Another interesting thing is that we will be using the axiom of choice, or whichever equivalent you prefer, for example Zorn's lemma or the well-ordering principle. Before everything, we need to examine more properties of vector spaces.

It's obvious that every complex vector space is also a real vector space. Suppose \(X\) is a complex vector space, and we shall give the definition of real-linear and complex-linear functionals.

An additive functional \(\Lambda\) on \(X\) is called *real-linear* (*complex-linear*) if \(\Lambda(\alpha x)=\alpha\Lambda(x)\) for every \(x \in X\) and for every real (complex) scalar \(\alpha\).

For *-linear functionals, we have two important but easy theorems.

If \(u\) is the real part of a complex-linear functional \(f\) on \(X\), then \(u\) is real-linear and \[ f(x)=u(x)-iu(ix) \quad (x \in X). \]

*Proof.* Writing \(f(x)=u(x)+iv(x)\), it suffices to express \(v(x)\) correctly. Since \[ if(x)=iu(x)-v(x), \] we see \(\Im(f(x))=v(x)=-\Re(if(x))\). Therefore \[ f(x)=u(x)-i\Re(if(x))=u(x)-i\Re(f(ix)), \] where we used the complex-linearity \(if(x)=f(ix)\). But \(\Re(f(ix))=u(ix)\), so we get \[ f(x)=u(x)-iu(ix). \] To show that \(u\) is real-linear, note that \[ f(x+y)=u(x+y)+iv(x+y)=f(x)+f(y)=u(x)+u(y)+i(v(x)+v(y)). \] Therefore \(u(x+y)=u(x)+u(y)\). The same process applies to real scalars \(\alpha\). \(\square\)
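For a concrete sanity check of the formula \(f(x)=u(x)-iu(ix)\), one can take a complex-linear functional on \(\mathbb{C}^2\); the coefficient vector below is an arbitrary choice of mine.

```python
import numpy as np

c = np.array([2 - 1j, 0.5 + 3j])   # arbitrary; defines f(x) = c_1 x_1 + c_2 x_2 on C^2
f = lambda x: c @ x                # a complex-linear functional
u = lambda x: (c @ x).real         # u = Re f, a real-linear functional

rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
print(np.isclose(f(x), u(x) - 1j * u(1j * x)))  # True: f(x) = u(x) - i u(ix)
```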

Conversely, we are able to generate a complex-linear functional by a real one.

If \(u\) is a real-linear functional, then \(f(x)=u(x)-iu(ix)\) is a complex-linear functional.

*Proof.* Direct computation. \(\square\)

Suppose now \(X\) is a complex topological vector space, we see a complex-linear functional on \(X\) is continuous if and only if its real part is continuous. Every continuous real-linear \(u: X \to \mathbb{R}\) is the real part of a unique complex-linear continuous functional \(f\).

A sublinear functional is 'almost' linear and also 'almost' a norm. Explicitly, we call \(p: X \to \mathbb{R}\) a sublinear functional when it satisfies \[ \begin{aligned} p(x+y) &\leq p(x)+p(y) \\ p(tx) &= tp(x) \\ \end{aligned} \] for all \(x,y \in X\) and \(t \geq 0\). As one can see, if \(X\) is normable, then \(p(x)=\lVert x \rVert\) is a sublinear functional. One should not confuse this with a linear functional, where no inequality is involved. Another thing worth noting is that \(p\) is not required to be nonnegative.

A seminorm on a vector space \(X\) is a real-valued function \(p\) on \(X\) such that \[ \begin{aligned} p(x+y) &\leq p(x)+p(y) \\ p(\alpha x)&=|\alpha|p(x) \end{aligned} \] for all \(x,y \in X\) and scalar \(\alpha\).

Obviously a seminorm is also a sublinear functional. For the connection between norm and seminorm, one should note that *\(p\) is a norm if and only if it satisfies \(p(x) \neq 0\) for \(x \neq 0\).*
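As a quick illustration of the gap between seminorms and norms, consider \(p(x_1,x_2)=|x_1|\) on \(\mathbb{R}^2\); the numerical check below (with random samples of my choosing) verifies the two seminorm axioms and exhibits a nonzero vector on which \(p\) vanishes.

```python
import numpy as np

def p(v):
    """p(v) = |v_1|: a seminorm on R^2 that is not a norm."""
    return abs(v[0])

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    alpha = rng.normal()
    assert p(x + y) <= p(x) + p(y) + 1e-12                # subadditivity
    assert abs(p(alpha * x) - abs(alpha) * p(x)) < 1e-12  # absolute homogeneity

print(p(np.array([0.0, 1.0])))  # 0.0 on a nonzero vector, so p is not a norm
```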

The dominated extension theorems are the results that will be covered in this post. Generally speaking, we are able to extend a functional defined on a subspace to the whole space, as long as it is dominated by a sublinear functional. This is similar in spirit to the dominated convergence theorem, which states that if a convergent sequence of measurable functions is dominated by an integrable function, then the convergence passes under the integral sign.

(Hahn-Banach) Suppose

- \(M\) is a subspace of a real vector space \(X\),
- \(f: M \to \mathbb{R}\) is linear and \(f(x) \leq p(x)\) on \(M\), where \(p\) is a sublinear functional on \(X\).
Then there exists a linear \(\Lambda: X \to \mathbb{R}\) such that \[ \Lambda(x)=f(x) \] for all \(x \in M\) and \[ -p(-x) \leq \Lambda(x) \leq p(x) \] for all \(x \in X\).

With that being said, if \(f(x)\) is dominated by a sublinear functional, then we are able to extend this functional to the whole space with a relatively proper range.

*Proof.* If \(M=X\), there is nothing to do. So suppose now \(M\) is a nontrivial proper subspace of \(X\). Choose \(x_1 \in X-M\) and define \[ M_1=\{x+tx_1:x \in M,t \in \mathbb{R}\}. \] It's easy to verify that \(M_1\) satisfies all the axioms of a vector space (warning again: no topology is endowed). Now we will be using the properties of sublinear functionals.

Since \[ f(x)+f(y)=f(x+y) \leq p(x+y) \leq p(x-x_1)+p(x_1+y) \] for all \(x,y \in M\), we have \[ f(x)-p(x-x_1) \leq p(x_1+y) -f(y). \] Let \[ \alpha=\sup_{x}\{f(x)-p(x-x_1):x \in M\}. \] By definition, we naturally get \[ f(x)-\alpha \leq p(x-x_1) \] and \[ f(y)+\alpha \leq p(x_1+y). \] Define \(f_1\) on \(M_1\) by \[ f_1(x+tx_1)=f(x)+t\alpha. \] So when \(x +tx_1 \in M\), we have \(t=0\), and therefore \(f_1=f\).

To show that \(f_1 \leq p\) on \(M_1\), note that for \(t>0\), we have \[ f(x/t)-\alpha \leq p(x/t-x_1), \] which, after multiplying through by \(t\), implies \[ f(x)-t\alpha=f_1(x-tx_1)\leq p(x-tx_1). \] Similarly, \[ f(y/t)+\alpha \leq p(y/t+x_1), \] and therefore \[ f(y)+t\alpha=f_1(y+tx_1) \leq p(y+tx_1). \] Hence \(f_1 \leq p\).
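The one-step extension can be watched in coordinates. Below is a finite-dimensional sketch, where all the concrete choices (\(M=\operatorname{span}\{(1,0)\}\subset\mathbb{R}^2\), \(f(t,0)=t/2\), \(p=\lVert\cdot\rVert_2\), \(x_1=(0,1)\)) are mine for illustration: \(\alpha\) is estimated by a grid search, and the resulting extension \(f_1\) is checked against \(p\) on random points.

```python
import numpy as np

# M = span{(1,0)} in R^2, f((t,0)) = t/2, dominated by p = Euclidean norm.
# Extend to R^2 using x1 = (0,1) and alpha = sup_{x in M} [f(x) - p(x - x1)].
p = np.linalg.norm
t = np.linspace(-50.0, 50.0, 2_000_001)
alpha = np.max(0.5 * t - np.sqrt(t**2 + 1.0))   # grid estimate of the supremum

# The extension: f1((a, b)) = f((a, 0)) + b*alpha = a/2 + alpha*b.
f1 = lambda v: 0.5 * v[0] + alpha * v[1]

rng = np.random.default_rng(0)
dominated = all(f1(v) <= p(v) + 1e-6 for v in rng.normal(size=(1000, 2)))
print(np.isclose(alpha, -np.sqrt(3) / 2, atol=1e-3), dominated)
```

Here the supremum is attained at \(t=1/\sqrt{3}\) with \(\alpha=-\sqrt{3}/2\), and the extended functional has operator norm \(1\), so the domination is tight.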

It seems that we could repeat step 1 endlessly to extend \(M\) to larger and larger subspaces, yet this alone need not exhaust \(X\). (If \(X\) is a finite dimensional space, then this is merely a linear algebra problem.) This meets exactly what William Timothy Gowers said in his blog post:

If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you.

-- How to use Zorn's lemma

And we will show that, as W. T. Gowers said,

If the resulting partial order satisfies the chain condition and if a maximal element must be a structure of the kind one is trying to build, then the proof is complete.

To apply Zorn's lemma, we need to construct a partially ordered set. Let \(\mathscr{P}\) be the collection of all ordered pairs \((M',f')\) where \(M'\) is a subspace of \(X\) containing \(M\) and \(f'\) is a linear functional on \(M'\) that extends \(f\) and satisfies \(f' \leq p\) on \(M'\). For example we have \[ (M,f), (M_1,f_1) \in \mathscr{P}. \] The partial order \(\leq\) is defined as follows. By \((M',f') \leq (M'',f'')\), we mean \(M' \subset M''\) and \(f' = f''\) on \(M'\). Obviously this is a partial order (you should be able to check this).

Suppose now \(\mathcal{F}\) is a chain (totally ordered subset of \(\mathscr{P}\)). We claim that \(\mathcal{F}\) has an upper bound (which is required by Zorn's lemma). Let \[ M_0=\bigcup_{(M',f') \in \mathcal{F}}M' \] and \[ f_0(y)=f'(y) \] whenever \((M',f') \in \mathcal{F}\) and \(y \in M'\) (this is well defined because \(\mathcal{F}\) is totally ordered). It's easy to verify that \((M_0,f_0)\) is the upper bound we are looking for. Since \(\mathcal{F}\) is arbitrary, by Zorn's lemma there exists a maximal element \((M^\ast,f^\ast)\) in \(\mathscr{P}\). If \(M^\ast \neq X\), according to step 1, we are able to extend \(M^\ast\), which contradicts the maximality of \(M^\ast\). So \(\Lambda\) is defined to be \(f^\ast\). By the linearity of \(\Lambda\), we see \[ -p(-x) \leq -\Lambda(-x)=\Lambda(x). \] The theorem is proved. \(\square\)

This is a classic application of Zorn's lemma (well-ordering principle, or Hausdorff maximality theorem). First, we showed that we are able to extend \(M\) and \(f\). But since we do not know the dimension or other properties of \(X\), it's not easy to control the extension which finally 'converges' to \((X,\Lambda)\). However, Zorn's lemma saved us from this random exploration: Whatever happens, the maximal element is there, and take it to finish the proof.

Since an inequality involving absolute values appears in the theorem below, we need a more careful validation.

(Bohnenblust-Sobczyk-Soukhomlinoff) Suppose \(M\) is a subspace of a vector space \(X\), \(p\) is a seminorm on \(X\), and \(f\) is a linear functional on \(M\) such that \[ |f(x)| \leq p(x) \] for all \(x \in M\). Then \(f\) extends to a linear functional \(\Lambda\) on \(X\) satisfying \[ |\Lambda (x)| \leq p(x) \] for all \(x \in X\).

*Proof.* If the scalar field is \(\mathbb{R}\), then we are done by the dominated extension theorem, since \(p(-x)=p(x)\) in this case (can you see why?). So we assume the scalar field is \(\mathbb{C}\).

Put \(u = \Re f\). By the dominated extension theorem, there is some real-linear functional \(U\) such that \(U=u\) on \(M\) and \(U \leq p\) on \(X\). Define \[ \Lambda(x)=U(x)-iU(ix); \] then \(\Lambda\) is complex-linear and \(\Lambda=f\) on \(M\).

To show that \(|\Lambda(x)| \leq p(x)\) for \(x \neq 0\), by taking \(\alpha=\frac{|\Lambda(x)|}{\Lambda(x)}\), we have \[ U(\alpha{x})=\Lambda(\alpha{x})=|\Lambda(x)|\leq p(\alpha x)=p(x) \] since \(|\alpha|=1\) and \(p(\alpha{x})=|\alpha|p(x)=p(x)\). \(\square\)
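The normalization trick \(\alpha=|\Lambda(x)|/\Lambda(x)\) is just a rotation in \(\mathbb{C}\); a quick check with an arbitrary complex number standing in for \(\Lambda(x)\):

```python
import cmath

z = 3 - 4j               # stands in for a nonzero value Lambda(x)
alpha = abs(z) / z       # |alpha| = 1, and alpha rotates z onto the positive real axis
print(cmath.isclose(abs(alpha), 1.0), cmath.isclose(alpha * z, abs(z)))
```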

To end this post, we state a beautiful and useful extension of the Hahn-Banach theorem, which is done by R. P. Agnew and A. P. Morse.

(Agnew-Morse) Let \(X\) denote a real vector space and \(\mathcal{A}\) be a collection of commuting linear maps \(A_\alpha: X \to X\), namely \[ A_\alpha A_\beta=A_\beta A_\alpha \] for all \(A_\alpha,A_\beta \in \mathcal{A}\). Let \(p\) be a sublinear functional such that \[ p(A_\alpha{x})=p(x) \] for all \(A_\alpha \in \mathcal{A}\). Let \(Y\) be a subspace of \(X\) on which a linear functional \(f\) is defined such that

- \(f(y) \leq p(y)\) for all \(y \in Y\).
- For each mapping \(A\) and \(y \in Y\), we have \(Ay \in Y\).
- Under the hypothesis of 2, we have \(f(Ay)=f(y)\).
Then \(f\) can be extended to \(X\) by \(\Lambda\) so that \(-p(-x) \leq \Lambda(x) \leq p(x)\) for all \(x \in X\), and \[ \Lambda(A_\alpha{x})=\Lambda(x). \]

To prove this theorem, we need to construct a sublinear functional
that dominates \(f\). For the whole
proof, see *Functional Analysis* by Peter Lax.

Since there is no strong reason to write more posts on this topic, i.e. the three fundamental theorems of linear functional analysis, I think it's time to make a list of the series. It's been around half a year.

- The Big Three Pt. 1 - Baire Category Theorem Explained
- The Big Three Pt. 2 - The Banach-Steinhaus Theorem
- The Big Three Pt. 3 - The Open Mapping Theorem (Banach Space)
- The Big Three Pt. 4 - The Open Mapping Theorem (F-Space)
- The Big Three Pt. 5 - The Hahn-Banach Theorem (Dominated Extension)
- The Big Three Pt. 6 - Closed Graph Theorem with Applications

- Walter Rudin, *Functional Analysis*
- Peter Lax, *Functional Analysis*
- William Timothy Gowers, *How to use Zorn's lemma*

In this post we compute the Fourier transform of $\sin{x}/x$ and $(\sin{x}/x)^2$ through contour integration.

This post is intended to establish the existence of the Lebesgue measure, which is often denoted by \(m\). In fact, the Lebesgue measure follows as a special case of the R-M-K representation theorem. You may not believe it, but the euclidean properties of \(\mathbb{R}^k\) play no role in the existence of \(m\). The only topological property that works is the fact that \(\mathbb{R}^k\) is a locally compact Hausdorff space.

The theorem is named after F. Riesz, who introduced it for continuous functions on \([0,1]\) (with respect to the Riemann-Stieltjes integral). Years later, after the generalizations done by A. Markov and S. Kakutani, we are able to view it on a locally compact Hausdorff space.

You may find some of the properties over-generalized, but this is intended so that you can enjoy more alongside (there are some tools related to differential geometry). Also, there are many topology and analysis tricks worth your attention.

Again, euclidean topology plays no role in this proof. We need to specify the topology for different reasons. This is similar to what we do in linear functional analysis. Throughout, let \(X\) be a topological space.

**0.0 Definition.** \(X\) is a *Hausdorff space* if the
following is true: If \(p \in X\),
\(q\in X\) but \(p \neq q\), then there are two
**disjoint** open sets \(U\) and \(V\) such that \(p
\in U\) and \(q \in V\).

**0.1 Definition.** \(X\) is *locally compact* if every
point of \(X\) has a neighborhood whose
closure is compact.

**0.2 Remarks.** A Hausdorff space is also called a
\(T_2\) space (see Kolmogorov
classification) or a separated space. There is a classic example of
locally compact Hausdorff space: \(\mathbb{R}^n\). It is trivial to verify
this. But this is far from being enough. In the future we will see, we
can construct some ridiculous but mathematically valid measures.

**0.3 Definition.** A set \(E \subset X\) is called *\(\sigma\)-compact* if \(E\) is a countable union of compact sets. Note that every open subset of a euclidean space \(\mathbb{R}^n\) is \(\sigma\)-compact, since it can always be written as a countable union of closed balls (which are compact).

**0.4 Definition.** A covering of \(X\) is *locally finite* if every
point has a neighborhood which intersects only finitely many elements of
the covering. Of course, if the covering is already finite, it's also
locally finite.

**0.5 Definition.** A *refinement* of a covering
of \(X\) is a second covering, each
element of which is contained in an element of the first covering.

**0.6 Definition.** \(X\) is *paracompact* if it is
Hausdorff, and every open covering has a locally finite open refinement.
Obviously any compact space is paracompact.

**0.7 Theorem.** If \(X\) is a second countable Hausdorff space
and is locally compact, then \(X\) is
paracompact. For proof, see this
[Theorem 2.6]. One uses this to prove that a differentiable manifold
admits a partition of unity.

**0.8 Theorem.** If \(X\) is locally compact and \(\sigma\)-compact, then \(X=\bigcup_{i=1}^{\infty}K_i\) where for all \(i \in \mathbb{N}\), \(K_i\) is compact and \(K_i \subset\operatorname{int}K_{i+1}\).

The basic technical tool in the theory of differential manifolds is the existence of a partition of unity. We will steal this tool for the application of analysis theory.

**1.0 Definition.** A **partition of
unity** on \(X\) is a collection
\((g_i)\) of continuous real valued
functions on \(X\) such that

- \(g_i \geq 0\) for each \(i\).
- every \(x \in X\) has a neighborhood \(U\) such that \(U \cap \operatorname{supp}(g_i)=\varnothing\) for all but finitely many of \(g_i\).
- for each \(x \in X\), we have \(\sum_{i}g_i(x)=1\). (That's why you see the word 'unity'.)

One should be reminded that, partition of unity is frequently used in many other fields. For example, in differential geometry, one uses it to find Riemannian structure on a smooth manifold. In generalised function theory, one uses it to find the connection between local property and global property as well.

**1.1 Definition.** A partition of unity \((g_i)\) on \(X\) is *subordinate* to an open
cover of \(X\) if and only if for each
\(g_i\) there is an element \(U\) of the cover such that \(\operatorname{supp}(g_i) \subset U\). We
say \(X\) *admits* partitions of
unity if and only if for every open cover of \(X\), there exists a partition of unity
subordinate to the cover.

**1.2 Theorem.** A Hausdorff space admits a partition of
unity if and only if it is paracompact (the 'only if' part is by
considering the definition of partition of unity. For the 'if' part, see
here).
As a corollary, we have:

**1.3 Corollary.** Suppose \(V_1,\cdots,V_n\) are open subsets of a locally compact Hausdorff space \(X\), \(K\) is compact, and \[
K \subset \bigcup_{k=1}^{n}V_k.
\] Then there exists a partition of unity \((h_i)\) subordinate to the cover \((V_i)\) such that \(\operatorname{supp}(h_i) \subset V_i\) and \(\sum_{i=1}^{n}h_i(x)=1\) for all \(x \in K\).

**2.0 Notation.** The notation \[
K \prec f
\] will mean that \(K\) is a
compact subset of \(X\), that \(f \in C_c(X)\), that \(f(X) \subset [0,1]\), and that \(f(x)=1\) for all \(x \in K\). The notation \[
f \prec V
\] will mean that \(V\) is open,
that \(f \in C_c(X)\), that \(f(X) \subset [0,1]\) and that \(\operatorname{supp}(f) \subset V\). If both
hold, we write \[
K \prec f \prec V.
\] **2.1 Remarks.** Clearly, with this notation, we
are able to simplify the statement of being subordinate. We merely need
to write \(g_i \prec U\) in 1.1 instead
of \(\operatorname{supp}(g_i) \subset
U\).

**2.2 Urysohn's Lemma for locally compact Hausdorff
space.** Suppose \(X\) is
locally compact and Hausdorff, \(V\) is
open in \(X\) and \(K \subset V\) is a compact set. Then there
exists an \(f \in C_c(X)\) such that
\[
K \prec f \prec V.
\] **2.3 Remarks.** By \(f \in C_c(X)\) we mean that \(f\) is a continuous function with compact support. The relation \(K \prec f \prec V\) also says that \(\chi_K \leq f \leq \chi_V\). For more details and the proof, visit this page. This lemma is stated in general for normal spaces; for a proof on that level, see arXiv:1910.10381. (Question: why do we consider two disjoint closed subsets there?)

We will be using the \(\varepsilon\)-definitions of \(\sup\) and \(\inf\), which make the proof easier in this case, but which may be troublesome if you have not seen them. So we put them down here.

Let \(S\) be a nonempty subset of the real numbers that is bounded below. A lower bound \(w\) is said to be the infimum of \(S\) if and only if for any \(\varepsilon>0\), there exists an element \(x_\varepsilon \in S\) such that \(x_\varepsilon<w+\varepsilon\).

This definition of \(\inf\) is equivalent to the usual one:

Let \(S\) be a set that is bounded below. We say \(w=\inf S\) when \(w\) satisfies the following conditions.

- \(w\) is a lower bound of \(S\).
- If \(t\) is also a lower bound of \(S\), then \(t \leq w\).

We have the analogous definition for \(\sup\).

Analysis is full of vector spaces and linear transformations. We
already know that the Lebesgue integral induces a linear functional.
That is, for example, \(L^1([0,1])\) is
a vector space, and we have a linear functional by \[
f \mapsto \int_0^1 f(x)dx.
\] But what about the reverse? Given a linear functional, is it
guaranteed that we have a measure to establish the integral? The R-M-K
theorem answers this question affirmatively. The functional to be
discussed is *positive*, which means that if \(f(X) \subset [0,\infty)\), then \(\Lambda{f} \in [0,\infty)\).

Let \(X\) be a locally compact Hausdorff space, and let \(\Lambda\) be a positive linear functional on \(C_c(X)\). Then there exists a \(\sigma\)-algebra \(\mathfrak{M}\) on \(X\) which contains all Borel sets in \(X\), and there exists a unique positive measure \(\mu\) on \(\mathfrak{M}\) which represents \(\Lambda\) in the sense that \[ \Lambda{f}=\int_X fd\mu \] for all \(f \in C_c(X)\).

For the measure \(\mu\) and the \(\sigma\)-algebra \(\mathfrak{M}\), we have four assertions:

- \(\mu(K)<\infty\) for every compact set \(K \subset X\).
- For every \(E \in \mathfrak{M}\), we have
\[ \mu(E)=\inf\{\mu(V):E \subset V, V\text{ open}\}. \]

- For every open set \(E\), and for every \(E \in \mathfrak{M}\) with \(\mu(E)<\infty\), we have
\[ \mu(E)=\sup\{\mu(K):K \subset E, K\text{ compact}\}. \]

- If \(E \in \mathfrak{M}\), \(A \subset E\), and \(\mu(E)=0\), then \(A \in \mathfrak{M}\).

**Remarks before proof.** It would be great if we could establish the Lebesgue measure \(m\) by putting \(X=\mathbb{R}^n\), but we need a little extra work to get that result naturally. If 2 is satisfied, we say \(\mu\) is *outer* regular, and *inner* regular if 3 is. If both hold, we say \(\mu\) is *regular*. The partition of unity and Urysohn's lemma will be heavily used in the proof of the main theorem, so make sure you have no problem with them. The theorem can also be extended to the complex case, but that requires much non-trivial work.

The proof is rather long so we will split it into several steps. I will try my best to make every line clear enough.

For every open set \(V \subset X\), define \[ \mu(V)=\sup\{\Lambda{f}:f \prec V\}. \]

If \(V_1 \subset V_2\) and both are open, we claim that \(\mu(V_1) \leq \mu(V_2)\). For \(f \prec V_1\), since \(\operatorname{supp}(f) \subset V_1 \subset V_2\), we see \(f \prec V_2\). Moreover, we are able to find some \(g \prec V_2\) such that \(g \geq f\). Indeed, by taking another look at the proof of Urysohn's lemma for locally compact Hausdorff spaces, we see there is an open set \(G\) with compact closure such that \[
\operatorname{supp}(f) \subset G \subset \overline{G} \subset V_2.
\] Applying Urysohn's lemma to the pair \((\overline{G},V_2)\), we see there exists a function \(g \in C_c(X)\) such that \[
\overline{G} \prec g \prec V_2.
\] In particular \(g=1\) on \(\operatorname{supp}(f)\), so \(g \geq f\) everywhere. For such a pair we have \(\Lambda{f} \leq \Lambda{g}\), since \(\Lambda{g}-\Lambda{f}=\Lambda{(g-f)}\geq 0\) by the positivity of \(\Lambda\). Hence \(\Lambda{f} \leq \mu(V_2)\), and taking the supremum over \(f \prec V_1\) gives \[
\mu(V_1) \leq \mu(V_2).
\] This 'monotonic' property of \(\mu\) enables us to *define* \(\mu(E)\) for all \(E \subset X\) by \[
\mu(E)=\inf \{\mu(V):E \subset V, V\text{ open}\}.
\] It is trivial to check that this agrees with the original definition on open sets. Sometimes people say \(\mu\) is the outer measure. We will discuss other kinds of sets thoroughly in the following steps. Warning: we are not saying that \(\mathfrak{M} = 2^X\). The crucial property of \(\mu\), namely countable additivity, will be proved only on a certain \(\sigma\)-algebra.

It follows from the definition of \(\mu\) that if \(E_1 \subset E_2\), then \(\mu(E_1) \leq \mu(E_2)\).

Let \(\mathfrak{M}_F\) be the class of all \(E \subset X\) which satisfy the two following conditions:

\(\mu(E) <\infty\).

'Inner regular': \[ \mu(E)=\sup\{\mu(K):K \subset E, K\text{ compact}\}. \]

One may say here \(\mu\) is the 'inner measure'. Finally, let \(\mathfrak{M}\) be the class of all \(E \subset X\) such that for every compact \(K\), we have \(E \cap K \in \mathfrak{M}_F\). We shall show that \(\mathfrak{M}\) is the desired \(\sigma\)-algebra.

**Remarks of Step 0.** So far, we have only proved that \(\mu(E) \geq 0\) for all \(E {\color\red{\subset}}X\). What about the countable additivity? It's clear that \(\mathfrak{M}_F\) and \(\mathfrak{M}\) have some strong relation. We need to get a clearer view of it. Also, if we restrict \(\mu\) to \(\mathfrak{M}_F\), we restrict ourselves to finite numbers. In fact, we will finally show \(\mathfrak{M}_F \subset \mathfrak{M}\).

If \(K\) is compact, then \(K \in \mathfrak{M}_F\), and \[ \mu(K)=\inf\{\Lambda{f}:K \prec f\}<\infty \]

Define \(V_\alpha=f^{-1}((\alpha,1])\) for \(K \prec f\) and \(0 < \alpha < 1\). Since \(f(x)=1\) for all \(x \in K\), we have \(K \subset V_{\alpha}\). Note that \(f \geq \alpha{g}\) whenever \(g \prec V_{\alpha}\): on \(V_\alpha\) we have \(\alpha g \leq \alpha < f\), and \(g\) vanishes outside \(V_\alpha\). Therefore, by the definition of \(\mu\) for arbitrary \(E \subset X\), we have \[ \mu(K) \leq \mu(V_\alpha)=\sup\{\Lambda{g}:g \prec V_{\alpha}\} \leq \frac{1}{\alpha}\Lambda{f}. \] Since \(\mu(K)\) is a lower bound of \(\frac{1}{\alpha}\Lambda{f}\) with \(0<\alpha<1\), we see \[ \mu(K) \leq \inf_{\alpha \in (0,1)}\left\{\frac{1}{\alpha}\Lambda{f}\right\}=\Lambda{f}. \] Since \(f(X) \subset [0,1]\), \(\Lambda{f}\) is finite, and hence \(\mu(K) <\infty\). Since \(K\) itself is compact, we see \(K \in \mathfrak{M}_F\).

To prove the identity, note that for any \(\varepsilon>0\), there exists some open \(V \supset K\) such that \(\mu(V)<\mu(K)+\varepsilon\). By Urysohn's lemma, there exists some \(h \in C_c(X)\) such that \(K \prec h \prec V\). Therefore \[ \Lambda{h} \leq \mu(V) < \mu(K)+\varepsilon. \] Since we already know \(\mu(K) \leq \Lambda{h}\) whenever \(K \prec h\), this shows that \(\mu(K)\) is the infimum of \(\Lambda{h}\) with \(K \prec h\).

**Remarks of Step 1.** We have just proved assertion 1 of the properties of \(\mu\). The hardest part of this proof is the inequality \[
\mu(V)<\mu(K)+\varepsilon.
\] But this is merely the \(\varepsilon\)-definition of \(\inf\): since \(\mu(K)\) is the infimum of \(\mu(V)\) with \(V \supset K\), for any \(\varepsilon>0\) there exists some open \(V \supset K\) such that \(\mu(V)<\mu(K)+\varepsilon\). Under certain conditions, this definition is much easier to use. Now we will examine the relation between \(\mathfrak{M}_F\) and \(\tau_X\), namely the topology of \(X\).

\(\mathfrak{M}_F\) contains every open set \(V\) with \(\mu(V)<\infty\).

It suffices to show that for an open set \(V\) with \(\mu(V)<\infty\), we have \[ \mu(V)=\sup\{\mu(K):K \subset V, K\text{ compact}\}. \] For \(0<\varepsilon<\mu(V)\), there exists an \(f \prec V\) such that \(\Lambda{f}>\mu(V)-\varepsilon\). If \(W\) is any open set which contains \(K= \operatorname{supp}(f)\), then \(f \prec W\), and therefore \(\Lambda{f} \leq \mu(W)\). Taking the infimum over such \(W\), by the definition of \(\mu(K)\) we see \[ \Lambda{f}\leq\mu(K). \] Therefore \[ \mu(V)-\varepsilon<\Lambda{f}\leq\mu(K)\leq\mu(V). \] This is exactly the \(\varepsilon\)-definition of \(\sup\). The identity is proved.

**Remarks of Step 2.** It's important to note that this identity is only guaranteed for open sets and for sets \(E\) with \(\mu(E)<\infty\), the latter of which will be proved in the following steps. This is the *flaw* of this theorem. With these preparations however, we are able to show the countable additivity of \(\mu\) on \(\mathfrak{M}_F\).

If \(E_1,E_2,E_3,\cdots\) are arbitrary subsets of \(X\), then \[ \mu\left(\bigcup_{k=1}^{\infty}E_k\right) \leq \sum_{k=1}^{\infty}\mu(E_k) \]

First we show this holds for finitely many open sets. This is tantamount to showing that \[ \mu(V_1 \cup V_2)\leq \mu(V_1)+\mu(V_2) \] if \(V_1\) and \(V_2\) are open. Pick \(g \prec V_1 \cup V_2\). By corollary 1.3, there is a partition of unity \((h_1,h_2)\) subordinate to \((V_1,V_2)\) in the sense of corollary 1.3. Therefore, \[ \begin{aligned} \Lambda(g)&=\Lambda((h_1+h_2)g) \\ &=\Lambda(h_1g)+\Lambda(h_2g) \\ &\leq\mu(V_1)+\mu(V_2). \end{aligned} \] Notice that \(h_1g \prec V_1\) and \(h_2g \prec V_2\). By taking the supremum over \(g\), we have \[ \mu(V_1 \cup V_2)\leq \mu(V_1)+\mu(V_2). \]

Now we come back to arbitrary subsets of \(X\). If \(\mu(E_i)=\infty\) for some \(i\), then there is nothing to prove. Therefore we shall assume that \(\mu(E_i)<\infty\) for all \(i\). By the definition of \(\mu(E_i)\), there are open sets \(V_i \supset E_i\) such that \[ \mu(V_i)<\mu(E_i)+\frac{\varepsilon}{2^i}. \] Put \(V=\bigcup_{i=1}^{\infty}V_i\), and choose \(f \prec V\). Since \(f \in C_c(X)\), there is a finite collection of the \(V_i\) that covers the support of \(f\). Therefore without loss of generality, we may say that \[ f \prec V_1 \cup V_2 \cup \cdots \cup V_n \] for some \(n\). We therefore obtain \[ \begin{aligned} \Lambda{f} &\leq \mu(V_1 \cup V_2 \cup \cdots \cup V_n) \\ &\leq \mu(V_1)+\mu(V_2)+\cdots+\mu(V_n) \\ &\leq \sum_{i=1}^{n}\left(\mu(E_i)+\frac{\varepsilon}{2^i}\right) \\ &\leq \sum_{i=1}^{\infty}\mu(E_i)+\varepsilon, \end{aligned} \] for all \(f \prec V\). Since \(\bigcup E_i \subset V\), we have \(\mu(\bigcup E_i) \leq \mu(V)\). Therefore \[ \mu\left(\bigcup_{i=1}^{\infty}E_i\right)\leq\mu(V)=\sup\{\Lambda{f}:f \prec V\}\leq\sum_{i=1}^{\infty}\mu(E_i)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, the inequality is proved.

**Remarks of Step 3.** Again, we are using the \(\varepsilon\)-definition of \(\inf\). One may say this step showed the subadditivity of the outer measure. Also note the geometric series \(\sum_{k=1}^{\infty}\frac{\varepsilon}{2^k}=\varepsilon\).
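The budget-splitting trick relies on \(\sum_{k\geq 1}\varepsilon/2^k=\varepsilon\); a one-line numerical confirmation (truncation at \(k=59\) is my arbitrary cutoff):

```python
import math

eps = 0.1
total = sum(eps / 2**k for k in range(1, 60))  # eps/2 + eps/4 + ... (truncated)
print(math.isclose(total, eps))  # the tail beyond k = 59 is negligible
```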

Suppose \(E=\bigcup_{i=1}^{\infty}E_i\), where \(E_1,E_2,\cdots\) are pairwise disjoint members of \(\mathfrak{M}_F\), then \[ \mu(E)=\sum_{i=1}^{\infty}\mu(E_i). \] If \(\mu(E)<\infty\), we also have \(E \in \mathfrak{M}_F\).

As a dual to Step 3, we first show this holds for finitely many compact sets. As proved in Step 1, compact sets are in \(\mathfrak{M}_F\). Suppose now \(K_1\) and \(K_2\) are disjoint compact sets. We want to show that \[ \mu(K_1 \cup K_2)=\mu(K_1)+\mu(K_2). \] Note that compact sets in a Hausdorff space are closed. Therefore we are able to apply Urysohn's lemma to the pair \((K_1,K_2^c)\). That said, there exists an \(f \in C_c(X)\) such that \[ K_1 \prec f \prec K_2^c. \] In other words, \(f(x)=1\) for all \(x \in K_1\) and \(f(x)=0\) for all \(x \in K_2\), since \(\operatorname{supp}(f) \cap K_2 = \varnothing\). By Step 1, since \(K_1 \cup K_2\) is compact, there exists some \(g \in C_c(X)\) such that \[ K_1 \cup K_2 \prec g \quad \text{and} \quad \Lambda(g) < \mu(K_1 \cup K_2)+\varepsilon. \] Now things become tricky. We are able to write \(g\) as \[ g=fg+(1-f)g. \] But \(K_1 \prec fg\) and \(K_2 \prec (1-f)g\) by the properties of \(f\) and \(g\). Also, since \(\Lambda\) is linear, we have \[ \mu(K_1)+\mu(K_2) \leq \Lambda(fg)+\Lambda((1-f)g)=\Lambda(g) < \mu(K_1 \cup K_2)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, we have \[ \mu(K_1)+\mu(K_2) \leq \mu(K_1 \cup K_2). \] On the other hand, by Step 3, we have \[ \mu(K_1 \cup K_2) \leq \mu(K_1)+\mu(K_2). \] Therefore equality holds.

If \(\mu(E)=\infty\), there is nothing to prove (by Step 3 both sides are \(\infty\)). So now we assume that \(\mu(E)<\infty\). Since \(E_i \in \mathfrak{M}_F\), there are compact sets \(K_i \subset E_i\) with \[ \mu(K_i) > \mu(E_i)-\frac{\varepsilon}{2^i}. \] Putting \(H_n=K_1 \cup K_2 \cup \cdots \cup K_n\), we see \(E \supset H_n\) and, by the finite additivity just proved for disjoint compact sets, \[ \mu(E) \geq \mu(H_n)=\sum_{i=1}^{n}\mu(K_i)>\sum_{i=1}^{n}\mu(E_i)-\varepsilon. \] This inequality holds for all \(n\) and \(\varepsilon\), therefore \[ \mu(E) \geq \sum_{i=1}^{\infty}\mu(E_i). \] Combined with Step 3, the identity holds.

Finally we shall show that \(E \in
\mathfrak{M}_F\) if \(\mu(E)
<\infty\). To make it more understandable, we will use
elementary calculus notation. If we write \(\mu(E)=x\) and \(x_n=\sum_{i=1}^{n}\mu(E_i)\), we see \[
\lim_{n \to \infty}x_n=x.
\] Therefore, for any \(\varepsilon>0\), there exists some \(N \in \mathbb{N}\) such that \[
x-x_N<\varepsilon.
\] This is tantamount to \[
\mu(E)<\sum_{i=1}^{N}\mu(E_i)+\varepsilon.
\] But by definition of the *compact* set \(H_N\) above, we see \[
\mu(E)<{\color\red{\sum_{i=1}^{N}\mu(E_i)}}+\varepsilon<{\color\red
{\mu(H_N)+\varepsilon}}+\varepsilon=\mu(H_N)+2\varepsilon.
\] Hence \(E\) satisfies the
requirements of \(\mathfrak{M}_F\),
thus an element of it.

**Remarks of Step 4.** You should realize that we are
heavily using the \(\varepsilon\)-definition of \(\sup\) and \(\inf\). As you may guess, \(\mathfrak{M}_F\) should be a subset of
\(\mathfrak{M}\) though we don't know
whether it is a \(\sigma\)-algebra or
not. In other words, we hope that the countable additivity of \(\mu\) holds on a \(\sigma\)-algebra that is *properly
extended* from \(\mathfrak{M}_F\).
However it's still difficult to show that \(\mathfrak{M}\) is a \(\sigma\)-algebra. We need more properties
of \(\mathfrak{M}_F\) to go on.

If \(E \in \mathfrak{M}_F\) and \(\varepsilon>0\), there is a compact \(K\) and an open \(V\) such that \(K \subset E \subset V\) and \(\mu(V-K)<\varepsilon\).

There are two ways to write \(\mu(E)\), namely \[ \mu(E)=\sup\{\mu(K):K \subset E\} \quad \text{and} \quad \mu(E)=\inf\{\mu(V):V\supset E\} \] where \(K\) is compact and \(V\) is open. Therefore there exists some \(K\) and \(V\) such that \[ \mu(V)-\frac{\varepsilon}{2}<\mu(E)<\mu(K)+\frac{\varepsilon}{2}. \] Since \(V-K\) is open, and \(\mu(V-K)<\infty\), we have \(V-K \in \mathfrak{M}_F\). By Step 4, we have \[ \mu(K)+\mu(V-K)=\mu(V) <\mu(K)+\varepsilon. \] Therefore \(\mu(V-K)<\varepsilon\) as proved.
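Spelling out the two applications of the \(\varepsilon\)-definitions, the displayed inequality comes from the chain \[ \mu(K)+\mu(V-K)=\mu(V)<\mu(E)+\frac{\varepsilon}{2}<\left(\mu(K)+\frac{\varepsilon}{2}\right)+\frac{\varepsilon}{2}=\mu(K)+\varepsilon. \] Subtracting the finite number \(\mu(K)\) gives \(\mu(V-K)<\varepsilon\).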

**Remarks of Step 5.** You should be familiar with the
\(\varepsilon\)-definitions of \(\sup\) and \(\inf\) now. Since \(V-K =V\cap K^c \subset V\), we have \(\mu(V-K)\leq\mu(V)<\mu(E)+\frac{\varepsilon}{2}<\infty\).

If \(A,B \in \mathfrak{M}_F\), then \(A-B,A\cup B\) and \(A \cap B\) are elements of \(\mathfrak{M}_F\).

This shows that \(\mathfrak{M}_F\) is closed under union, intersection and relative complement. In fact, we merely need to prove \(A-B \in \mathfrak{M}_F\), since \(A \cup B=(A-B) \cup B\) and \(A\cap B = A-(A-B)\).

By Step 5, for \(\varepsilon>0\), there are sets \(K_A\), \(K_B\), \(V_A\), \(V_B\) such that \(K_A \subset A \subset V_A\), \(K_B \subset B \subset V_B\), and for \(A-B\) we have \[ A-B \subset V_A-K_B \subset (V_A-K_A) \cup (K_A-V_B) \cup (V_B-K_B). \] With an application of Steps 3 and 5, we have \[ \mu(A-B) \leq \mu(V_A-K_A)+\mu(K_A-V_B)+\mu(V_B-K_B)< \varepsilon+\mu(K_A-V_B)+\varepsilon. \] Since \(K_A-V_B\) is a closed subset of \(K_A\), we see \(K_A-V_B\) is compact as well (a closed subset of a compact set is compact). But \(K_A-V_B \subset A-B\) and \(\mu(A-B) <\mu(K_A-V_B)+2\varepsilon\), so \(A-B\) meets the requirement of \(\mathfrak{M}_F\) (the fact that \(\mu(A-B)<\infty\) is trivial since \(\mu(A-B)\leq\mu(A)\)).
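The chain of inclusions can be verified by an element chase: if \(x \in A-B\), then \(x \in V_A\) and \(x \notin K_B\), and \[ x \in A-B \implies \begin{cases} x \in V_A-K_A & \text{if } x \notin K_A, \\ x \in V_B-K_B & \text{if } x \in K_A \text{ and } x \in V_B, \\ x \in K_A-V_B & \text{if } x \in K_A \text{ and } x \notin V_B. \end{cases} \]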

Since \(A-B\) and \(B\) are pairwise disjoint members of \(\mathfrak{M}_F\), we see \[ \mu(A \cup B)=\mu(A-B)+\mu(B)<\infty. \] Thus \(A \cup B \in \mathfrak{M}_F\). Since \(A,A-B \in \mathfrak{M}_F\), we see \(A \cap B = A-(A-B) \in \mathfrak{M}_F\).

**Remarks of Step 6.** In this step, we demonstrated several ways to decompose a set, each of which leads to a considerable simplification. Now we are able to show that \(\mathfrak{M}_F\) is a subset of \(\mathfrak{M}\).

There is a precise relation between \(\mathfrak{M}\) and \(\mathfrak{M}_F\) given by \[ \mathfrak{M}_F=\{E \in \mathfrak{M}:\mu(E)<\infty\} \subset \mathfrak{M}. \]

If \(E \in \mathfrak{M}_F\), we shall show that \(E \in \mathfrak{M}\). For compact \(K\in\mathfrak{M}_F\) (Step 1), by Step 6, we see \(K \cap E \in \mathfrak{M}_F\), therefore \(E \in \mathfrak{M}\).

If \(E \in \mathfrak{M}\) with \(\mu(E)<\infty\) however, we need to show that \(E \in \mathfrak{M}_F\). By definition of \(\mu\), for \(\varepsilon>0\), there is an open \(V\) such that \[ \mu(V)<\mu(E)+\varepsilon<\infty. \] Therefore \(V \in \mathfrak{M}_F\). By Step 5, there is a compact set \(K\) such that \(\mu(V-K)<\varepsilon\) (taking the open set containing \(V\) to be \(V\) itself). Since \(E \cap K \in \mathfrak{M}_F\), there exists a compact set \(H \subset E \cap K\) with \[ \mu(E \cap K)<\mu(H)+\varepsilon. \] Since \(E \subset (E \cap K) \cup (V-K)\), it follows from Step 3 that \[ \mu(E) \leq {\color\red{\mu(E\cap K)}}+\mu(V-K)<{\color\red{\mu(H)+\varepsilon}}+\varepsilon=\mu(H)+2\varepsilon. \] Therefore \(E \in \mathfrak{M}_F\).

**Remarks of Step 7.** Several tricks in the preceding
steps are used here. Now we are pretty close to the fact that \((X,\mathfrak{M},\mu)\) is a measure space.
Note that for \(E \in
\mathfrak{M}-\mathfrak{M}_F\), we have \(\mu(E)=\infty\), but we have already proved
the countable additivity for \(\mathfrak{M}_F\). Is it 'almost trivial'
for \(\mathfrak{M}\)? Before that, we
need to show that \(\mathfrak{M}\) is a
\(\sigma\)-algebra. Note that assertion
3 of \(\mu\) has been proved.

We will validate the definition of \(\sigma\)-algebra one by one.

\(X \in \mathfrak{M}\).

For any compact \(K \subset X\), we have \(K \cap X=K\). But as proved in Step 1, \(K \in \mathfrak{M}_F\), therefore \(X \in \mathfrak{M}\).

If \(A \in \mathfrak{M}\), then \(A^c \in\mathfrak{M}\).

If \(A \in \mathfrak{M}\), then \(A \cap K \in \mathfrak{M}_F\). But \[ K-(A \cap K)=K \cap(A^c \cup K^c)=(K\cap A^c) \cup (K \cap K^c)=K \cap A^c. \] By Step 1 and Step 6, we see \(K \cap A^c \in \mathfrak{M}_F\), thus \(A^c \in \mathfrak{M}\).

If \(A_n \in \mathfrak{M}\) for all \(n \in \mathbb{N}\), then \(A=\bigcup_{n=1}^{\infty}A_n \in \mathfrak{M}\).

We define an auxiliary sequence of sets inductively. For \(n=1\), we write \(B_1=A_1 \cap K\) where \(K\) is compact. Then \(B_1 \in \mathfrak{M}_F\). For \(n \geq 2\), we write \[ B_n=(A_n \cap K)-(B_1 \cup \cdots\cup B_{n-1}). \] Since \(A_n \cap K \in \mathfrak{M}_F\) and \(B_1,B_2,\cdots,B_{n-1} \in \mathfrak{M}_F\), by Step 6, \(B_n \in \mathfrak{M}_F\). Also the \(B_n\) are pairwise disjoint.

Another set-theoretic manipulation shows that \[ \begin{aligned} A \cap K&=K \cap\left(\bigcup_{n=1}^{\infty}A_n\right) \\ &=\bigcup_{n=1}^{\infty}(K \cap A_n) \\ &=\bigcup_{n=1}^{\infty}\left(B_n \cup(B_1 \cup \cdots\cup B_{n-1})\right) \\ &=\bigcup_{n=1}^{\infty}B_n. \end{aligned} \] Now we are able to evaluate \(\mu(A \cap K)\) by Step 4. \[ \begin{aligned} \mu(A \cap K)&=\sum_{n=1}^{\infty}\mu(B_n) \\ &= \lim_{n \to \infty}\mu\left(\bigcup_{k=1}^{n}B_k\right) \leq \mu(K)<\infty. \end{aligned} \] Therefore \(A \cap K \in \mathfrak{M}_F\), which implies that \(A \in \mathfrak{M}\).

\(\mathfrak{M}\) contains all Borel sets.

Indeed, it suffices to prove that \(\mathfrak{M}\) contains all open sets and/or all closed sets. We'll show two different approaches. Let \(K\) be a compact set.

- If \(C\) is closed, then \(C \cap K\) is compact, therefore \(C\) is an element of \(\mathfrak{M}\). (By Step 2.)
- If \(D\) is open, then \(D \cap K \subset K\). Therefore \(\mu(D \cap K) \leq \mu(K)<\infty\), which shows that \(D\) is an element of \(\mathfrak{M}\) (Step 7).

Therefore, by either argument, \(\mathfrak{M}\) contains all Borel sets.

Again, we will verify all properties of \(\mu\) one by one.

\(\mu(E) \geq 0\) for all \(E \in \mathfrak{M}\).

This follows immediately from the definition of \(\mu\), since \(\Lambda\) is positive and \(0 \leq f \leq 1\).

\(\mu\) is countably additive.

If \(A_1,A_2,\cdots\) form a disjoint countable collection of members of \(\mathfrak{M}\), we need to show that \[ \mu\left(\bigcup_{n=1}^{\infty}A_n\right)=\sum_{n=1}^{\infty}\mu(A_n). \] If \(A_n \in \mathfrak{M}_F\) for all \(n\), then this is merely what we have just proved in Step 4. If \(A_j \in \mathfrak{M}-\mathfrak{M}_F\) however, we have \(\mu(A_j)=\infty\). So \(\sum_n\mu(A_n)=\infty\). For \(\mu(\cup_n A_n)\), notice that \(\cup_n A_n \supset A_j\), we have \(\mu(\cup_n A_n) \geq \mu(A_j)=\infty\). The identity is now proved.

So far assertions 1-3 have been proved, but the final assertion has not been proved explicitly. We do that now since this property will be used when discussing the Lebesgue measure \(m\). In fact, it shows that \((X,\mathfrak{M},\mu)\) is a complete measure space.

If \(E \in \mathfrak{M}\), \(A \subset E\), and \(\mu(E)=0\), then \(A \in \mathfrak{M}\).

It suffices to show that \(A \in \mathfrak{M}_F\). Since \(A \subset E\) and \(\mu\) is monotone, \(\mu(A)=0\) as well. If \(K \subset A\), where \(K\) is compact, then \(\mu(K)=\mu(A)=0\). Therefore \(0=\mu(A)\) is the supremum of \(\mu(K)\) over compact \(K \subset A\). It follows that \(A \in \mathfrak{M}_F \subset \mathfrak{M}\).

For every \(f \in C_c(X)\), \(\Lambda{f}=\int_X fd\mu\).

This is the main result of the theorem. It suffices to prove the inequality \[ \Lambda f \leq \int_X fd\mu \] for all \(f \in C_c(X)\). What about the other direction? By the linearity of \(\Lambda\) and of \(\int_X \cdot\, d\mu\), once the inequality above is proved, we have \[ \Lambda(-f)=-\Lambda{f}\leq\int_{X}(-f)d\mu=-\int_Xfd\mu. \] Therefore \[ \Lambda{f} \geq \int_X fd\mu \] holds as well, and this establishes the equality.

Notice that since \(K=\operatorname{supp}(f)\) is compact, the range of \(f\) has to be compact as well. Namely we may assume that \([a,b]\) contains the range of \(f\). For \(\varepsilon>0\), we are able to pick numbers \(y_i\) such that \(y_i - y_{i-1}<\varepsilon\) and \[ y_0 < a < y_1<\cdots<y_n=b. \] Put \[ E_i=\{x:y_{i-1}< f(x) \leq y_i\}\cap K. \] Since \(f\) is continuous, \(f\) is Borel measurable. The sets \(E_i\) are trivially pairwise disjoint Borel sets. Again, there are open sets \(V_i \supset E_i\) such that \[ \mu(V_i) < \mu(E_i)+\frac{\varepsilon}{n} \] for \(i=1,2,\cdots,n\), and such that \(f(x)<y_i + \varepsilon\) for all \(x \in V_i\). Notice that \((V_i)\) covers \(K\), therefore by the partition of unity, there is a collection of functions \((h_i)\) such that \(h_i \prec V_i\) for all \(i\) and \(\sum h_i=1\) on \(K\). By Step 1 and the fact that \(\sum_i h_i=1\) on \(K\), we see \[ \mu(K) \leq \Lambda(\sum_i h_i)=\sum_i \Lambda{h_i}. \] By the way we picked \(V_i\), we see \(h_if \leq (y_i+\varepsilon)h_i\). Since \(f=\sum_i h_if\) (note \(\sum_i h_i=1\) on \(\operatorname{supp}(f)\)), we have the following inequality: \[ \begin{aligned} \Lambda{f} &= \sum_{i=1}^{n}\Lambda(h_if) \leq\sum_{i=1}^{n}(y_i+\varepsilon)\Lambda{h_i} \\ &= \sum_{i=1}^{n}\left(|a|-|a|+y_i+\varepsilon\right)\Lambda{h_i} \\ &=\sum_{i=1}^{n}(|a|+y_i+\varepsilon)\Lambda{h_i}-|a|\sum_{i=1}^{n}\Lambda{h_i}. \end{aligned} \] Since \(h_i \prec V_i\), we have \(\mu(E_i)+\frac{\varepsilon}{n}>\mu(V_i) \geq \Lambda{h_i}\). And we already have \(\sum_i \Lambda{h_i} \geq \mu(K)\). Since the coefficients \(|a|+y_i+\varepsilon\) are nonnegative, putting these into the inequality above gives \[ \begin{aligned} \Lambda{f} &\leq \sum_{i=1}^{n}(|a|+y_i+\varepsilon)\Lambda{h_i}-|a|\sum_{i=1}^{n}\Lambda{h_i} \\ &\leq \sum_{i=1}^{n}(|a|+y_i+\varepsilon){\color\red{(\mu(E_i)+\frac{\varepsilon}{n})}}-|a|\color\red{\mu(K)}. \end{aligned} \] Observe that \(\cup_i E_i=K\); by Step 9 we have \(\sum_{i}\mu(E_i)=\mu(K)\).
A slight manipulation shows that \[ \begin{aligned} \sum_{i=1}^{n}(|a|+y_i+\varepsilon)\mu(E_i)-|a|\mu(K)&=|a|\sum_{i=1}^{n}\mu(E_i)-|a|\mu(K)+\sum_{i=1}^{n}(y_i+\varepsilon)\mu(E_i) \\ &=\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)+2\varepsilon\mu(K). \end{aligned} \] Therefore for \(\Lambda f\) we get \[ \begin{aligned} \Lambda{f} &\leq\sum_{i=1}^{n}(|a|+y_i+\varepsilon)(\mu(E_i)+\frac{\varepsilon}{n})-|a|\mu(K) \\ &=\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)+2\varepsilon\mu(K)+\frac{\varepsilon}{n}\sum_{i=1}^n(|a|+y_i+\varepsilon). \end{aligned} \] Now here comes the trickiest part of the whole blog post. By definition of \(E_i\), we see \(f(x) > y_{i-1}>y_{i}-\varepsilon\) for \(x \in E_i\). Therefore we obtain a simple function \(s_n\) by \[ s_n=\sum_{i=1}^{n}(y_i-\varepsilon)\chi_{E_i}. \] Since \(s_n \leq f\), evaluating the Lebesgue integral of \(s_n\) with respect to \(\mu\) gives \[ \int_X s_nd\mu={\color\red{\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)}} \leq {\color\red{\int_X fd\mu}}. \] For \(2\varepsilon\mu(K)\), things are simple since \(0\leq\mu(K)<\infty\). Therefore \(2\varepsilon\mu(K) \to 0\) as \(\varepsilon \to 0\). Now let's estimate the final part of the inequality. It's immediate that \(\frac{\varepsilon}{n}\sum_{i=1}^{n}(|a|+\varepsilon)=\varepsilon(|a|+\varepsilon)\). For \(y_i\), observe that \(y_i \leq b\) for all \(i\), therefore \(\frac{\varepsilon}{n}\sum_{i=1}^{n}y_i \leq \frac{\varepsilon}{n}nb=\varepsilon b\). Thus \[ {\color\green{\frac{\varepsilon}{n}\sum_{i=1}^{n}(|a|+y_i+\varepsilon)}} \color\black\leq {\color\green {\varepsilon(|a|+b+\varepsilon)}}\color\black{.} \] Notice that \(b+|a| \geq 0\) since \(b \geq a \geq -|a|\).
Our estimation of \(\Lambda{f}\) is finally done: \[ \begin{aligned} \Lambda{f} &\leq{\color\red{\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)}}+2\varepsilon\mu(K)+{\color\green{\frac{\varepsilon}{n}\sum_{i=1}^n(|a|+y_i+\varepsilon)}} \\ &\leq{\color\red {\int_Xfd\mu}}+2\varepsilon\mu(K)+{\color\green{\varepsilon(|a|+b+\varepsilon)}} \\ &= \int_X fd\mu+\varepsilon(2\mu(K)+|a|+b+\varepsilon). \end{aligned} \] Since \(\varepsilon\) is arbitrary, we see \(\Lambda{f} \leq \int_X fd\mu\). The identity is proved.

If there are two measures \(\mu_1\) and \(\mu_2\) that satisfy assertions 1 to 4 and correspond to \(\Lambda\), then \(\mu_1=\mu_2\).

In fact, according to assertions 2 and 3, \(\mu\) is determined by its values on compact subsets of \(X\). It suffices to show that

If \(K\) is a compact subset of \(X\), then \(\mu_1(K)=\mu_2(K)\).

Fix \(K\) compact and \(\varepsilon>0\). By Step 1, there exists an open \(V \supset K\) such that \(\mu_2(V)<\mu_2(K)+\varepsilon\). By Urysohn's lemma, there exists some \(f\) such that \(K \prec f \prec V\). Hence \[ \mu_1(K)=\int_X\chi_Kd\mu_1 \leq\int_X fd\mu_1=\Lambda{f}=\int_X fd\mu_2 \leq \int_X \chi_V d\mu_2=\mu_2(V)<\mu_2(K)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, \(\mu_1(K) \leq \mu_2(K)\). If \(\mu_1\) and \(\mu_2\) are exchanged, we see \(\mu_2(K) \leq \mu_1(K)\). The uniqueness is proved.

Can we simply put \(X=\mathbb{R}^k\)
right now? The answer is no. Note that the outer regularity is for all
sets but inner is only for open sets and members of \(\mathfrak{M}_F\). But we expect the outer
and inner regularity to be 'symmetric'. There is an example showing that
*locally compact* is far from being enough to offer the
'symmetry'.

Define \(X=\mathbb{R}_1 \times \mathbb{R}_2\), where \(\mathbb{R}_1\) is the real line equipped with the discrete metric \(d_1\), and \(\mathbb{R}_2\) is the real line equipped with the Euclidean metric \(d_2\). The metric on \(X\) is defined by \[ d_X((x_1,y_1),(x_2,y_2))=d_1(x_1,x_2)+d_2(y_1,y_2). \] The topology \(\tau_X\) induced by \(d_X\) is naturally Hausdorff and locally compact by considering the vertical segments. So what would happen to this weird locally compact Hausdorff space?

If \(f \in C_c(X)\), let \(x_1,x_2,\cdots,x_n\) be those values of \(x\) for which \(f(x,y) \neq 0\) for at least one \(y\). Since \(f\) has compact support, it is ensured that there are only finitely many \(x_i\)'s. We are able to define a positive linear functional by \[ \Lambda f=\sum_{i=1}^{n}\int_{-\infty}^{+\infty}f(x_i,y)dy=\int_X fd\mu, \] where \(\mu\) is the measure associated with \(\Lambda\) in the sense of R-M-K theorem. Let \[ E=\mathbb{R}_1 \times \{0\}. \] By squeezing the disjoint vertical segments around \((x_i,0)\), we see \(\mu(K)=0\) for all compact \(K \subset E\) but \(\mu(E)=\infty\).
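To make the last claim concrete (a sketch, with \(\delta>0\) an auxiliary parameter): a compact \(K \subset E\) projects to a compact, hence finite, subset \(\{x_1,\cdots,x_n\}\) of the discrete line \(\mathbb{R}_1\), so \[ K \subset V_\delta:=\bigcup_{i=1}^{n}\{x_i\}\times(-\delta,\delta), \qquad \Lambda f=\sum_{i=1}^{n}\int_{-\infty}^{+\infty}f(x_i,y)dy \leq 2n\delta \quad \text{for all } f \prec V_\delta, \] hence \(\mu(K) \leq \mu(V_\delta) \leq 2n\delta \to 0\) as \(\delta \to 0\). On the other hand, every open set containing \(E\) contains a vertical segment over each \(x \in \mathbb{R}_1\), which forces \(\mu(E)=\infty\) by outer regularity.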

This is in violent contrast to what we would expect. However, if \(X\) is required to be \(\sigma\)-compact (note that the space in this example is not), this kind of problem disappears neatly.

- Walter Rudin, *Real and Complex Analysis*
- Serge Lang, *Fundamentals of Differential Geometry*
- Joel W. Robbin, *Partition of Unity*
- Brian Conrad, *Paracompactness and local compactness*
- Raoul Bott & Loring W. Tu, *Differential Forms in Algebraic Topology*

We are finally going to prove the open mapping theorem for \(F\)-spaces. In this version, only a metric and completeness are required, so it contains the Banach space version naturally.

(Theorem 0) Suppose we have the following conditions:

- \(X\) is an \(F\)-space,
- \(Y\) is a topological vector space,
- \(\Lambda: X \to Y\) is continuous and linear, and
- \(\Lambda(X)\) is of the second category in \(Y\).

Then \(\Lambda\) is an open mapping.

*Proof.* Let \(B\) be a
neighborhood of \(0\) in \(X\). Let \(d\) be an invariant metric on \(X\) that is compatible with the \(F\)-topology of \(X\). Define a sequence of balls by \[
B_n=\{x:d(x,0) < \frac{r}{2^n}\}
\] where \(r\) is picked in such
a way that \(B_0 \subset B\). To show
that \(\Lambda\) is an open mapping, we
need to prove that there exists some neighborhood \(W\) of \(0\) in \(Y\) such that \[
W \subset \Lambda(B).
\] To do this however, we need an auxiliary set. In fact, we will
show that there exists some \(W\) such
that \[
W \subset \overline{\Lambda(B_1)} \subset \Lambda(B).
\] We need to prove the inclusions one by one.

The first inclusion requires BCT. Since \(B_2 -B_2 \subset B_1\), and addition is continuous in \(Y\), we get \[ \overline{\Lambda(B_2)}-\overline{\Lambda(B_2)} \subset \overline{\Lambda(B_2)-\Lambda(B_2)} \subset \overline{\Lambda(B_1)}. \] Since \[ \Lambda(X)=\bigcup_{k=1}^{\infty}k\Lambda(B_2) \] (here \(X=\bigcup_k kB_2\) because \(x/k \to 0\) as \(k \to \infty\) for every \(x \in X\)), according to BCT, at least one \(k\Lambda(B_2)\) is of the second category in \(Y\). But scalar multiplication \(y\mapsto ky\) is a homeomorphism of \(Y\) onto \(Y\), so \(k\Lambda(B_2)\) is of the second category for all \(k\), in particular for \(k=1\). Therefore \(\overline{\Lambda(B_2)}\) has nonempty interior, which, combined with the inclusion above, implies that there exists some open neighborhood \(W\) of \(0\) in \(Y\) such that \(W \subset \overline{\Lambda(B_1)}\). By replacing the index, it's easy to see this holds for all \(n\). That is, for \(n \geq 1\), there exists some neighborhood \(W_n\) of \(0\) in \(Y\) such that \(W_n \subset \overline{\Lambda(B_n)}\).

The second inclusion requires the completeness of \(X\). Fix \(y_1 \in \overline{\Lambda(B_1)}\); we will show that \(y_1 \in \Lambda(B)\). We pick \(y_n\) inductively. Assume \(y_n\) has been chosen in \(\overline{\Lambda(B_n)}\). As stated before, there exists some neighborhood \(W_{n+1}\) of \(0\) in \(Y\) such that \(W_{n+1} \subset \overline{\Lambda(B_{n+1})}\). Hence \[ (y_n-W_{n+1}) \cap \Lambda(B_n) \neq \varnothing. \] Therefore there exists some \(x_n \in B_n\) such that \[ \Lambda x_n \in y_n - W_{n+1}. \] Put \(y_{n+1}=y_n-\Lambda x_n\); then \(y_{n+1} \in W_{n+1} \subset \overline{\Lambda(B_{n+1})}\). Therefore we are able to pick \(y_n\) inductively for all \(n \geq 1\).

Since \(d(x_n,0)<\frac{r}{2^n}\) for all \(n \geq 1\), the partial sums \(z_n=\sum_{k=1}^{n}x_k\) form a Cauchy sequence and converge to some \(z \in X\) since \(X\) is an \(F\)-space. Notice we also have \[ \begin{aligned} d(z,0)& \leq d(x_1,0)+d(x_2,0)+\cdots \\ & < \frac{r}{2}+\frac{r}{4}+\cdots \\ & = r, \end{aligned} \] so \(z \in B_0 \subset B\).
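Here is the Cauchy estimate behind the convergence, using only the invariance of \(d\) and the triangle inequality: for \(m<n\), \[ d(z_n,z_m)=d\left(\sum_{k=m+1}^{n}x_k,0\right)\leq\sum_{k=m+1}^{n}d(x_k,0)<\sum_{k=m+1}^{\infty}\frac{r}{2^k}=\frac{r}{2^m}, \] where the first equality follows from \(d(z_n,z_m)=d(z_n-z_m,0)\) and the middle inequality from repeatedly applying \(d(a+b,0)\leq d(a+b,b)+d(b,0)=d(a,0)+d(b,0)\).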

By the continuity of \(\Lambda\), we see \(\Lambda z_n \to \Lambda z\). Notice also that \[ \Lambda z_n = \sum_{k=1}^{n} \Lambda x_k = \sum_{k=1}^{n}(y_k-y_{k+1})=y_1-y_{n+1}, \] and since \(y_{n+1} \to 0\), we conclude \(y_1 = \Lambda z \in \Lambda(B)\).

The whole theorem is now proved, that is, \(\Lambda\) is an open mapping. \(\square\)

You may think the following relation comes from nowhere: \[ (y_n - W_{n+1}) \cap \Lambda(B_{n}) \neq \varnothing. \] But it's not. We need to review some point-set topology definitions. Notice that \(y_n\) lies in the closure of \(\Lambda(B_n)\), and \(y_n-W_{n+1}\) is an open neighborhood of \(y_n\). If \((y_n - W_{n+1}) \cap \Lambda(B_{n})\) were empty, then \(y_n\) could not be in the closure.

The geometric series \[ \frac{\varepsilon}{2}+\frac{\varepsilon}{4}+\cdots+\frac{\varepsilon}{2^n}+\cdots=\varepsilon \] is widely used when sums need to be controlled. It is a good idea to keep this technique in mind.

The formal proofs will not be written down here, but they are quite easy to carry out.

(Corollary 0)\(\Lambda(X)=Y\).

This is an immediate consequence of the fact that \(\Lambda\) is open. Since \(X\) is open in itself, \(\Lambda(X)\) is an open subspace of \(Y\). But the only open subspace of \(Y\) is \(Y\) itself.

(Corollary 1)\(Y\) is a \(F\)-space as well.

If you have already seen the commutative diagram given by the quotient space (put \(N=\ker\Lambda\)), you know that the induced map \(f\) is open and continuous. Treating these spaces as groups, by Corollary 0 and the first isomorphism theorem, we have \[ X/\ker\Lambda \simeq \Lambda(X)=Y. \] Therefore \(f\) is an isomorphism, hence one-to-one, and therefore a homeomorphism as well. In this post we showed that \(X/\ker{\Lambda}\) is an \(F\)-space, therefore \(Y\) has to be an \(F\)-space as well. (We are using the fact that \(\ker{\Lambda}\) is a closed set: \(\ker\Lambda=\Lambda^{-1}(\{0\})\) is the preimage of the closed set \(\{0\}\) under the continuous map \(\Lambda\).)

(Corollary 2) If \(\Lambda\) is a continuous linear mapping of an \(F\)-space \(X\) onto an \(F\)-space \(Y\), then \(\Lambda\) is open.

This is a direct application of BCT and open mapping theorem. Notice that \(Y\) is now of the second category.

(Corollary 3)If the linear map \(\Lambda\) in Corollary 2 is injective, then \(\Lambda^{-1}:Y \to X\) is continuous.

This comes from corollary 2 directly since \(\Lambda\) is open.

(Corollary 4) If \(X\) and \(Y\) are Banach spaces, and if \(\Lambda: X \to Y\) is a continuous linear bijective map, then there exist positive real numbers \(a\) and \(b\) such that \[ a \lVert x \rVert \leq \lVert \Lambda{x} \rVert \leq b\lVert x \rVert \] for every \(x \in X\).

This comes from corollary 3 directly since both \(\Lambda\) and \(\Lambda^{-1}\) are bounded as they are continuous.
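Explicitly, one may take \(b=\lVert\Lambda\rVert\) and \(a=1/\lVert\Lambda^{-1}\rVert\) (assuming \(X \neq \{0\}\), so that both operator norms are positive): \[ \lVert \Lambda x \rVert \leq \lVert \Lambda \rVert\,\lVert x \rVert, \qquad \lVert x \rVert = \lVert \Lambda^{-1}\Lambda x \rVert \leq \lVert \Lambda^{-1} \rVert\,\lVert \Lambda x \rVert. \]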

(Corollary 5)If \(\tau_1 \subset \tau_2\) are vector topologies on a vector space \(X\) and if both \((X,\tau_1)\) and \((X,\tau_2)\) are \(F\)-spaces, then \(\tau_1 = \tau_2\).

This is obtained by applying corollary 3 to the identity mapping \(\iota:(X,\tau_2) \to (X,\tau_1)\).

(Corollary 6) If \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) are two norms on a vector space \(X\) such that

- \(\lVert\cdot\rVert_1 \leq K\lVert\cdot\rVert_2\) for some constant \(K\), and
- \((X,\lVert\cdot\rVert_1)\) and \((X,\lVert\cdot\rVert_2)\) are Banach spaces,

then \(\lVert\cdot\rVert_1\) and \(\lVert\cdot\rVert_2\) are equivalent.

This is merely a more restrictive version of corollary 5.

Since there is no strong reason to write more posts on this topic, i.e. the three fundamental theorems of linear functional analysis, I think it's time to make a list of the series. It's been around half a year.

- The Big Three Pt. 1 - Baire Category Theorem Explained
- The Big Three Pt. 2 - The Banach-Steinhaus Theorem
- The Big Three Pt. 3 - The Open Mapping Theorem (Banach Space)
- The Big Three Pt. 4 - The Open Mapping Theorem (F-Space)
- The Big Three Pt. 5 - The Hahn-Banach Theorem (Dominated Extension)
- The Big Three Pt. 6 - Closed Graph Theorem with Applications

We are going to show the completeness of \(X/N\), where \(X\) is a TVS and \(N\) a closed subspace. Along the way, a bunch of useful analysis tricks will be demonstrated (which is why you may find this blog post a little tedious). More importantly, the theorem proved here will be used in the future.

To make it clear, we should give a formal definition of \(F\)-space.

A topological vector space \(X\) is an \(F\)-space if its topology \(\tau\) is induced by a complete invariant metric \(d\).

A metric \(d\) on a vector space \(X\) will be called invariant if for all \(x,y,z \in X\), we have \[ d(x+z,y+z)=d(x,y). \] By complete we mean every Cauchy sequence of \((X,d)\) converges.
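For example, the metric induced by a norm is invariant, since \[ d(x+z,y+z)=\lVert (x+z)-(y+z) \rVert=\lVert x-y \rVert=d(x,y); \] hence every Banach space is an \(F\)-space.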

The metric can be passed to the quotient space naturally (we will use this fact later); that is:

If \(N\) is a closed subspace of an \(F\)-space \(X\), then \(X/N\) is still an \(F\)-space.

Suppose \(d\) is a complete invariant metric compatible with \(\tau_X\). The metric on \(X/N\) is defined by \[ \boxed{\rho(\pi(x),\pi(y))=\inf_{z \in N}d(x-y,z)} \]

### \(\rho\) is a metric

*Proof.* First, if \(\pi(x)=\pi(y)\), that is, \(x-y \in N\), we see \[ \rho(\pi(x),\pi(y))=\inf_{z \in N}d(x-y,z)=d(x-y,x-y)=0. \] If \(\pi(x) \neq \pi(y)\) however, we shall show that \(\rho(\pi(x),\pi(y))>0\). In this case, we have \(x-y \notin N\). Since \(N\) is closed, \(N^c\) is open, and \(x-y\) is an interior point of \(X-N\). Therefore there exists an open ball \(B_r(x-y)\) centered at \(x-y\) with radius \(r>0\) such that \(B_r(x-y) \cap N = \varnothing\). Notice that \(d(x-y,z) \geq r\) for every \(z \in N\), since otherwise \(z \in B_r(x-y)\). By putting \[ r_0=\sup\{r:B_r(x-y) \cap N = \varnothing\}, \] we see \(d(x-y,z) \geq r_0\) for all \(z \in N\) and indeed \(r_0=\inf_{z \in N}d(x-y,z)>0\) (the verification can be done by contradiction). In general, \(\inf_z d(x-y,z)=0\) if and only if \(x-y \in \overline{N}\).

Next, we shall show that \(\rho(\pi(x),\pi(y))=\rho(\pi(y),\pi(x))\), and it suffices to assume that \(\pi(x) \neq \pi(y)\). Since \(d\) is translation invariant, we get \[ \begin{aligned} d(x-y,z)&=d(x-y-z,0) \\ &=d(0,y-x+z) \\ &=d(-z,y-x) \\ &=d(y-x,-z). \end{aligned} \] Therefore the \(\inf\) of the left-hand side equals that of the right-hand side (as \(z\) ranges over \(N\), so does \(-z\)). The identity is proved.

Finally, we need to verify the triangle inequality. Let \(r,s,t \in X\). For any \(\varepsilon>0\), there exist some \(z_\varepsilon\) and \(z_\varepsilon'\) such that \[
d(r-s,z_\varepsilon)<\rho(\pi(r),\pi(s))+\frac{\varepsilon}{2},\quad
d(s-t,z'_\varepsilon)<\rho(\pi(s),\pi(t))+\frac{\varepsilon}{2}.
\] Since \(d\) is invariant, we
see \[
\begin{aligned}
d(r-t,z_\varepsilon+z'_\varepsilon)&=d((r-s)+(s-t)-(z_\varepsilon+z'_\varepsilon),0)
\\
&=d([(r-s)-z_\varepsilon]+[(s-t)-z'_\varepsilon],0)
\\
&=d(r-s-z_\varepsilon,t-s+z'_\varepsilon)
\\
&\leq
d(r-s-z_\varepsilon,0)+d(t-s+z'_\varepsilon,0) \\
&=d(r-s,z_\varepsilon)+d(s-t,z'_\varepsilon)
\end{aligned}
\] *(I owe @LeechLattice for
the inequality above.)*

Therefore \[
\begin{aligned}
d(r-t,z_\varepsilon+z'_\varepsilon)&\leq
d(r-s,z_\varepsilon)+d(s-t,z'_\varepsilon) \\
&<\rho(\pi(r),\pi(s))+\rho(\pi(s),\pi(t))+\varepsilon.
\end{aligned}
\] *(Warning: This does not imply that \(\rho(\pi(r),\pi(s))+\rho(\pi(s),\pi(t))=\inf_z
d(r-t,z)\) since we don't know whether it is the lower bound or
not.)*

If \(\rho(\pi(r),\pi(s))+\rho(\pi(s),\pi(t))<\rho(\pi(r),\pi(t))\) however, let \[ 0<\varepsilon<\rho(\pi(r),\pi(t))-(\rho(\pi(r),\pi(s))+\rho(\pi(s),\pi(t))) \] then there exists some \(z''_\varepsilon=z_\varepsilon+z'_\varepsilon\) such that \[ d(r-t,z''_\varepsilon)<\rho(\pi(r),\pi(t)) \] which is a contradiction since \(\rho(\pi(r),\pi(t)) \leq d(r-t,z)\) for all \(z \in N\).

*(We are using the \(\varepsilon\) definition of \(\inf\). See here.)*

Since \(\pi\) is surjective, we see if \(u \in X/N\), there exists some \(a \in X\) such that \(\pi(a)=u\). Therefore \[ \begin{aligned} \rho(\pi(x)+u,\pi(y)+u) &=\rho(\pi(x)+\pi(a),\pi(y)+\pi(a)) \\ &=\rho(\pi(x+a),\pi(y+a)) \\ &=\inf_{z \in N}d(x+a-y-a,z) \\ &=\rho(\pi(x),\pi(y)). \end{aligned} \]

If \(\pi(x)=\pi(x')\) and \(\pi(y)=\pi(y')\), we have to show that \(\rho(\pi(x),\pi(y))=\rho(\pi(x'),\pi(y'))\). In fact, \[ \begin{aligned} \rho(\pi(x),\pi(y)) &\leq \rho(\pi(x),\pi(x'))+\rho(\pi(x'),\pi(y'))+\rho(\pi(y'),\pi(y)) \\ &=\rho(\pi(x'),\pi(y')) \end{aligned} \] since \(\rho(\pi(x),\pi(x'))=0\) as \(\pi(x)=\pi(x')\). Meanwhile \[ \begin{aligned} \rho(\pi(x'),\pi(y')) &\leq \rho(\pi(x'),\pi(x)) + \rho(\pi(x),\pi(y)) + \rho(\pi(y),\pi(y')) \\ &= \rho(\pi(x),\pi(y)). \end{aligned} \] therefore \(\rho(\pi(x),\pi(y))=\rho(\pi(x'),\pi(y'))\).

To prove this, we need to show that a set \(E \subset X/N\) is open with respect to \(\tau_N\) if and only if \(E\) is a union of open balls. In fact, we will show a more general version:

If \(\mathscr{B}\) is a local base for \(\tau\), then the collection \(\mathscr{B}_N\), which contains all sets \(\pi(V)\) where \(V \in \mathscr{B}\), forms a local base for \(\tau_N\).

*Proof.* We already know that \(\pi\) is continuous, linear and open. Therefore \(\pi(V)\) is open for all \(V \in \mathscr{B}\). For any open set \(E \subset X/N\) containing \(\pi(0)\), we see \(\pi^{-1}(E)\) is an open set containing \(0\), hence there exists some \(V \in \mathscr{B}\) such that \[ V \subset \pi^{-1}(E), \] and therefore \[ \pi(V) \subset E. \] This shows that \(\mathscr{B}_N\) is a local base for \(\tau_N\).

Now consider the local base \(\mathscr{B}\) containing all open balls around \(0 \in X\). Since \[ \pi(\{x:d(x,0)<r\})=\{u:\rho(u,\pi(0))<r\} \] we see \(\rho\) determines \(\mathscr{B}_N\). But we have already proved that \(\rho\) is invariant; hence \(\mathscr{B}_N\) determines \(\tau_N\).
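The displayed identity can be checked directly from the definition of \(\rho\). For \(\subset\), note \(\rho(\pi(x),\pi(0))=\inf_{z \in N}d(x,z) \leq d(x,0)<r\). For \(\supset\), if \(\rho(u,\pi(0))<r\), write \(u=\pi(x)\) and pick \(z \in N\) with \(d(x,z)<r\); then by invariance \[ d(x-z,0)=d(x,z)<r \quad \text{and} \quad \pi(x-z)=\pi(x)=u, \] so \(u \in \pi(\{x:d(x,0)<r\})\).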

Once this is proved, we are able to claim that, if \(X\) is a \(F\)-space, then \(X/N\) is still a \(F\)-space, since its topology is induced by a complete invariant metric \(\rho\).

*Proof.* Suppose \((x_n)\) is a Cauchy sequence in \(X/N\), relative to \(\rho\). There is a subsequence \((x_{n_k})\) with \(\rho(x_{n_k},x_{n_{k+1}})<2^{-k}\). Since \(\pi\) is surjective, we are able to pick some \(z_k \in X\) such that \(\pi(z_k) = x_{n_k}\) and such that \[ d(z_{k},z_{k+1})<2^{-k}. \] (The existence can still be verified by contradiction.) By the inequality above, we see \((z_k)\) is Cauchy (can you see why?). Since \(X\) is complete, \(z_k \to z\) for some \(z \in X\). By the **continuity** of \(\pi\), we also see \(x_{n_k} \to \pi(z)\) as \(k \to \infty\). Therefore \((x_{n_k})\) converges; hence \((x_n)\) converges, since a Cauchy sequence with a convergent subsequence converges. Thus \(\rho\) is complete.
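The Cauchy property of \((z_k)\) follows from the same geometric-series trick as before: for \(j<k\), \[ d(z_j,z_k) \leq \sum_{i=j}^{k-1}d(z_i,z_{i+1})<\sum_{i=j}^{\infty}\frac{1}{2^{i}}=\frac{1}{2^{j-1}}, \] which tends to \(0\) as \(j \to \infty\).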

This fact will be used to prove some corollaries in the open mapping theorem. For instance, for any continuous linear map \(\Lambda:X \to Y\), we see \(\ker(\Lambda)\) is closed, therefore if \(X\) is a \(F\)-space, then \(X/\ker(\Lambda)\) is a \(F\)-space as well. We will show in the future that \(X/\ker(\Lambda)\) and \(\Lambda(X)\) are homeomorphic if \(\Lambda(X)\) is of the second category.

There are more properties that can be inherited by \(X/N\) from \(X\). For example, normability, metrizability, local convexity. In particular, if \(X\) is Banach, then \(X/N\) is Banach as well. To do this, it suffices to define the quotient norm by \[ \lVert \pi(x) \rVert = \inf\{\lVert x-z \rVert:z \in N\}. \]