The Fourier transform of sinx/x and (sinx/x)^2 and more

In this post

We are going to evaluate the Fourier transform of \(\frac{\sin{x}}{x}\) and \(\left(\frac{\sin{x}}{x}\right)^2\), which turns out to be a comprehensive application of many elementary theorems in real and complex analysis. By the end, it is worth making sure that you can carry out every computation in this post by yourself, and that you can recall what all the words in italics mean.

To be clear, by the Fourier transform of \(f\) we actually mean

\[ \hat{f}(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-itx}dx. \]

This is a matter of convenience. Indeed, the coefficient \(\frac{1}{\sqrt{2\pi}}\) is superfluous, but without this coefficient, when computing the Fourier inverse, one has to write \(\frac{1}{2\pi}\) on the other side. Instead of making the transform-inverse unbalanced, we write \(\frac{1}{\sqrt{2\pi}}\) all the time and pretend it is not here.

We say a function \(f\) is in \(L^1\) if \(\int_{-\infty}^{+\infty}|f(x)|dx<+\infty\). As a classic exercise in elementary calculus, one can show that \(\frac{\sin{x}}{x} \not\in L^1\) but \(\left(\frac{\sin{x}}{x}\right)^2 \in L^1\).
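These two membership claims can be illustrated numerically (a quick check of my own, not a proof). The sketch below sums hump-by-hump integrals with SciPy: the partial integrals of \(\left|\frac{\sin{x}}{x}\right|\) keep growing like \(\frac{2}{\pi}\log{A}\), while the integral of \(\left(\frac{\sin{x}}{x}\right)^2\) settles down.

```python
import numpy as np
from scipy.integrate import quad

def hump_sum(f, n_humps):
    """Sum the integrals of f over [k*pi, (k+1)*pi] for k = 0..n_humps-1."""
    return sum(quad(f, k * np.pi, (k + 1) * np.pi)[0] for k in range(n_humps))

# np.sinc(x/pi) = sin(x)/x, with the value 1 at x = 0.
g = lambda x: abs(np.sinc(x / np.pi))    # |sin x / x|
h = lambda x: np.sinc(x / np.pi) ** 2    # (sin x / x)^2

# Partial integrals of |sin x / x| grow like (2/pi) * log A: not in L^1.
v100, v1000 = hump_sum(g, 100), hump_sum(g, 1000)
print(v1000 - v100)          # ~ (2/pi) * ln 10

# (sin x / x)^2 is absolutely integrable; the full integral turns out to be pi.
print(2 * hump_sum(h, 200))  # ~ pi
```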

Problem 1

For real \(t\), find the following limit:

\[ \lim_{A \to \infty}\int_{-A}^{A}\frac{\sin{x}}{x}e^{itx}dx. \]

Since \(\frac{\sin{x}}{x}e^{itx}\not\in L^1\), we cannot evaluate the integral over \(\mathbb{R}\) in the ordinary sense (the reader may safely ignore this point without a background in Lebesgue integration, but do keep in mind that membership in \(L^1\) matters a great deal). However, for a given \(A>0\), the integral over \([-A,A]\) is well defined, and we evaluate its limit as \(A \to \infty\) to get what we want (by abuse of language). The reader is highly encouraged to write down the calculation, and to draw the contours, while following along.

We will do this using contour integration. Since the complex function \(f(z)=\frac{\sin{z}}{z}e^{itz}\) is entire, by Cauchy's theorem, its integral over \([-A,A]\) is equal to the one over the path \(\Gamma_A\) by going from \(-A\) to \(-1\) along the real axis, from \(-1\) to \(1\) along the lower half of the unit circle, and from \(1\) to \(A\) along the real axis (why?). Since the path \(\Gamma_A\) avoids the origin, we are safe to use the identity

\[ 2i\sin{z}=e^{iz}-e^{-iz}. \]

Replacing \(\sin{z}\) with \(\frac{1}{2i}(e^{iz}-e^{-iz})\), we get

\[ I_A(t)=\int_{\Gamma_A}f(z)dz=\int_{\Gamma_A}\frac{1}{2iz}(e^{i(t+1)z}-e^{i(t-1)z})dz. \]

If we put \(\varphi_A(t)=\int_{\Gamma_A}\frac{1}{2iz}e^{itz}dz\), we see \(I_A(t)=\varphi_A(t+1)-\varphi_A(t-1)\). It is convenient to divide \(\varphi_A\) by \(\pi\) since we therefore get

\[ \frac{1}{\pi}\varphi_A(t)=\frac{1}{2\pi i}\int_{\Gamma_A}\frac{e^{itz}}{z}dz \]

and we are cool with the divisor \(2\pi i\).

Now, close the path \(\Gamma_A\) in two ways: first, by the semicircle from \(A\) to \(-Ai\) to \(-A\); second, by the semicircle from \(A\) to \(Ai\) to \(-A\). Either way we finish a loop of radius \(A\). Denote the closed path through the lower semicircle by \(\Gamma_L\) and the one through the upper semicircle by \(\Gamma_U\). By Cauchy's theorem, the integral over \(\Gamma_L\) is \(0\) since the integrand is holomorphic inside it, and therefore

\[ \frac{1}{\pi}\varphi_A(t)=\frac{1}{2\pi i}\int_{-\pi}^{0}\frac{\exp{(itAe^{i\theta})}}{Ae^{i\theta}}d(Ae^{i\theta})=\frac{1}{2\pi}\int_{-\pi}^{0}\exp{(itAe^{i\theta})}d\theta. \]

Notice that

\[ \begin{aligned} |\exp(itAe^{i\theta})|&=|\exp(itA(\cos\theta+i\sin\theta))| \\ &=|\exp(itA\cos\theta)|\cdot|\exp(-At\sin\theta)| \\ &=\exp(-At\sin\theta) \end{aligned} \]

hence if \(t\sin\theta>0\), we have \(|\exp(itAe^{i\theta})| \to 0\) as \(A \to \infty\). When \(-\pi < \theta <0\), we have \(\sin\theta<0\), so \(t\sin\theta>0\) holds exactly when \(t<0\). Therefore we get

\[ \frac{1}{\pi}\varphi_{A}(t)=\frac{1}{2\pi}\int_{-\pi}^{0}\exp(itAe^{i\theta})d\theta \to 0\quad (A \to \infty,t<0). \]

(You should be able to prove the convergence above.) Also trivially

\[ \varphi_A(0)=\frac{1}{2}\int_{-\pi}^{0}1d\theta=\frac{\pi}{2}. \]

But what if \(t>0\)? Indeed, it would be difficult to obtain the limit using the integral over \([-\pi,0]\). But we have another path, namely the upper one.

Note that \(\frac{e^{itz}}{z}\) is a meromorphic function in \(\mathbb{C}\) with a pole at \(0\). For such a function we have

\[ \frac{e^{itz}}{z}=\frac{1}{z}\left(1+itz+\frac{(itz)^2}{2!}+\cdots\right)=\frac{1}{z}+it+\frac{(it)^2z}{2!}+\cdots, \]

which implies that the residue at \(0\) is \(1\). By the residue theorem,

\[ \begin{aligned} \frac{1}{2\pi{i}}\int_{\Gamma_U}\frac{e^{itz}}{z}dz&=\frac{1}{2\pi{i}}\int_{\Gamma_A}\frac{e^{itz}}{z}dz+\frac{1}{2\pi}\int_{0}^{\pi}\exp(itAe^{i\theta})d\theta \\ &=1\cdot\operatorname{Ind}_{\Gamma_U}(0)=1. \end{aligned} \]

Note that we have used the change-of-variable formula just as we did for the lower semicircle. \(\operatorname{Ind}_{\Gamma_U}(0)\) denotes the winding number of \(\Gamma_U\) around \(0\), which is \(1\) of course. The identity above implies

\[ \frac{1}{\pi}\varphi_A(t)=1-\frac{1}{2\pi}\int_{0}^{\pi}\exp{(itAe^{i\theta})}d\theta. \]

Therefore, when \(t>0\), since \(\sin\theta>0\) when \(0<\theta<\pi\), we get

\[ \frac{1}{\pi}\varphi_A(t)\to 1 \quad(A \to \infty,t>0). \]

But as is already shown, \(I_A(t)=\varphi_A(t+1)-\varphi_A(t-1)\). To conclude,

\[ \lim_{A\to\infty}I_A(t)= \begin{cases} \pi\quad &|t|<1, \\ 0 \quad &|t|>1, \\ \frac{\pi}{2} \quad &|t|=1. \end{cases} \]
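As a quick numerical sanity check (my own addition, not part of the argument), we can approximate \(I_A(t)\) for a large \(A\) by quadrature; the imaginary part of the integrand is odd in \(x\) and drops out.

```python
import numpy as np
from scipy.integrate import quad

def I(t, A=400 * np.pi, chunks=400):
    """I_A(t) = int_{-A}^{A} (sin x / x) e^{itx} dx.  The imaginary part is
    odd in x, so only 2 * int_0^A (sin x / x) cos(tx) dx survives."""
    f = lambda x: np.sinc(x / np.pi) * np.cos(t * x)
    step = A / chunks
    return 2 * sum(quad(f, k * step, (k + 1) * step, limit=200)[0]
                   for k in range(chunks))

print(I(0.5))  # ~ pi      (|t| < 1)
print(I(2.0))  # ~ 0       (|t| > 1)
print(I(1.0))  # ~ pi / 2  (|t| = 1)
```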

What we can learn from this integral

Since \(\psi(x)=\frac{\sin{x}}{x}\) is even, dividing \(I_A\) by \(\sqrt{2\pi}\) and letting \(A \to \infty\), we actually obtain the Fourier transform of \(\psi\) by abuse of language. We get

\[ \hat\psi(t)= \begin{cases} \sqrt{\frac{\pi}{2}}\quad & |t|<1, \\ 0 \quad & |t|>1, \\ \frac{1}{2}\sqrt{\frac{\pi}{2}} \quad & |t|=1. \end{cases} \]

Note that \(\hat\psi(t)\) is not continuous, let alone uniformly continuous. Therefore \(\psi \notin L^1\), for if \(f \in L^1\), then \(\hat{f}\) is uniformly continuous. Another interesting fact is that this also gives us the value of the Dirichlet integral, since we have

\[ \begin{aligned} \int_{-\infty}^{\infty}\left(\frac{\sin{x}}{x}\right)dx&=\int_{-\infty}^{\infty}\left(\frac{\sin{x}}{x}\right)e^{0\cdot ix}dx \\ &=\sqrt{2\pi}\hat\psi(0) \\ &=\pi. \end{aligned} \]

We end this section by computing the Fourier inverse of \(\hat\psi(t)\). The calculation is not that difficult, and now you can see why we put \(\sqrt\frac{1}{2\pi}\) in front:

\[ \begin{aligned} \sqrt{\frac{1}{2\pi}}\int_{-\infty}^{\infty}\hat\psi(t)e^{itx}dt &= \sqrt{\frac{1}{2\pi}}\int_{-1}^{1}\sqrt{\frac{\pi}{2}}e^{itx}dt \\ &=\frac{1}{2}\cdot\frac{1}{ix}(e^{ix}-e^{-ix}) \\ &=\frac{\sin{x}}{x}. \end{aligned} \]
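This inversion can also be confirmed numerically; the sketch below (an illustration of my own) integrates over \([-1,1]\) directly, the imaginary part vanishing by symmetry.

```python
import numpy as np
from scipy.integrate import quad

def inverse(x):
    """(1/sqrt(2*pi)) * int_{-1}^{1} sqrt(pi/2) * e^{itx} dt; only the
    cosine part contributes because sin(tx) is odd in t."""
    re = quad(lambda t: np.cos(t * x), -1, 1)[0]
    return np.sqrt(1 / (2 * np.pi)) * np.sqrt(np.pi / 2) * re

for x in (0.5, 2.0, 7.3):
    print(inverse(x), np.sin(x) / x)  # the two columns agree
```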

Problem 2

For real \(t\), compute

\[ J(t)=\int_{-\infty}^{\infty}\left(\frac{\sin{x}}{x}\right)^2e^{itx}dx. \]

Now since \(h(x)=\frac{\sin^2{x}}{x^2} \in L^1\), we are able to say with ease that the integral above is the Fourier transform of \(h(x)\) (multiplied by \(\sqrt{2\pi}\)). But still we will be using the limit form

\[ J(t)=\lim_{A \to \infty}J_A(t), \quad\text{where}\quad J_A(t)=\int_{-A}^{A}\left(\frac{\sin{x}}{x}\right)^2e^{itx}dx. \]

And we are still using the contour integration as above (keep \(\Gamma_A\), \(\Gamma_U\) and \(\Gamma_L\) in mind!). For this we get

\[ \left(\frac{\sin z}{z}\right)^2e^{itz}=\frac{e^{i(t+2)z}+e^{i(t-2)z}-2e^{itz}}{-4z^2}. \]
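This algebraic identity is easy to check numerically at an arbitrary complex point before trusting it (the point and frequency below are my own arbitrary choices):

```python
import cmath

# Verify (sin z / z)^2 e^{itz} = (e^{i(t+2)z} + e^{i(t-2)z} - 2e^{itz}) / (-4z^2)
# at an arbitrarily chosen point z and frequency t.
z, t = 0.7 + 0.3j, 1.3
lhs = (cmath.sin(z) / z) ** 2 * cmath.exp(1j * t * z)
rhs = (cmath.exp(1j * (t + 2) * z) + cmath.exp(1j * (t - 2) * z)
       - 2 * cmath.exp(1j * t * z)) / (-4 * z ** 2)
print(abs(lhs - rhs))  # ~ 0 (machine precision)
```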

Therefore it suffices to discuss the function

\[ \mu_A(t)=\int_{\Gamma_A}\frac{e^{itz}}{2z^2}dz \]

since we have

\[ J_A(t)=\mu_A(t)-\frac{1}{2}(\mu_A(t+2)+\mu_A(t-2)). \]

Dividing \(\mu_A(t)\) by \(\pi i\), we see

\[ \frac{1}{\pi i}\mu_A(t)=\frac{1}{2\pi i}\int_{\Gamma_A}\frac{e^{itz}}{z^2}dz. \]

An integration of \(\frac{e^{itz}}{z^2}\) over \(\Gamma_L\) gives

\[ \begin{aligned} \frac{1}{\pi i}\mu_A(t)&=\frac{1}{2\pi i}\int_{-\pi}^{0}\frac{\exp(itAe^{i\theta})}{A^2e^{2i\theta}}d(Ae^{i\theta}) \\ &=\frac{1}{2\pi}\int_{-\pi}^{0}\frac{\exp(itAe^{i\theta})}{Ae^{i\theta}}d\theta. \end{aligned} \]

Since we still have

\[ \left|\frac{\exp(itAe^{i\theta})}{Ae^{i\theta}}\right|=\frac{1}{A}\exp(-At\sin\theta), \]

if \(t<0\) in this case, \(\frac{1}{\pi i}\mu_A(t) \to 0\) as \(A \to \infty\). For \(t>0\), integrating along \(\Gamma_U\) (note that the residue of \(\frac{e^{itz}}{z^2}\) at \(0\) is \(it\)), we have

\[ \frac{1}{\pi i}\mu_A(t)=it-\frac{1}{2\pi}\int_{0}^{\pi}\frac{\exp(itAe^{i\theta})}{Ae^{i\theta}}d\theta \to it \quad (A \to \infty). \]

We can also evaluate \(\mu_A(0)\) by computing the integral but we are not doing that. To conclude,

\[ \lim_{A \to\infty}\mu_A(t)=\begin{cases} -\pi t \quad &t>0, \\ 0 \quad &t<0. \end{cases} \]

Therefore for \(J_A\) we have

\[ J(t)=\lim_{A \to\infty}J_A(t)=\begin{cases} 0 \quad &|t| \geq 2, \\ \pi(1+\frac{t}{2}) \quad &-2<t \leq 0, \\ \pi(1-\frac{t}{2}) \quad & 0<t <2. \end{cases} \]

Now you may ask: how do we find the value of \(J(t)\) at \(0\), \(2\) or \(-2\)? We did not even evaluate \(\mu_A(0)\). But since \(h \in L^1\), \(\hat{h}(t)=\sqrt{\frac{1}{2\pi}}J(t)\) is uniformly continuous (!), thus continuous, and the values at these points follow from continuity.
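The triangle shape of \(J(t)\), including the values just supplied by continuity, can be sanity-checked numerically; here the integrand is absolutely integrable, so truncating at a large bound is harmless.

```python
import numpy as np
from scipy.integrate import quad

def J(t, B=2000.0, chunks=800):
    """J(t) = int_R (sin x / x)^2 e^{itx} dx, truncated at |x| = B; the
    integrand is even in x, and the tail is O(1/B) since it is <= 1/x^2."""
    f = lambda x: np.sinc(x / np.pi) ** 2 * np.cos(t * x)
    step = B / chunks
    return 2 * sum(quad(f, k * step, (k + 1) * step, limit=200)[0]
                   for k in range(chunks))

print(J(0.0))  # ~ pi
print(J(1.0))  # ~ pi * (1 - 1/2) = pi/2
print(J(3.0))  # ~ 0
```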

What we can learn from this integral

Again, we get the value of a classic improper integral by

\[ \int_{-\infty}^{\infty}\left(\frac{\sin{x}}{x}\right)^2dx = J(0)=\pi. \]

And this time it's not hard to find the Fourier inverse:

\[ \begin{aligned} \sqrt{\frac{1}{2\pi}}\int_{-\infty}^{\infty}\hat{h}(t)e^{itx}dt&=\frac{1}{2\pi}\int_{-\infty}^{\infty}J(t)e^{itx}dt \\ &=\frac{1}{2\pi}\int_{-2}^{2}\pi(1-\frac{1}{2}|t|)e^{itx}dt \\ &=\frac{e^{2ix}+e^{-2ix}-2}{-4x^2} \\ &=\frac{(e^{ix}-e^{-ix})^2}{-4x^2} \\ &=\left(\frac{\sin{x}}{x}\right)^2. \end{aligned} \]
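This inversion, too, can be confirmed numerically (an illustration of my own); the `points=[0]` hint tells the quadrature about the kink of \(1-\frac{|t|}{2}\) at \(t=0\).

```python
import numpy as np
from scipy.integrate import quad

def inv_triangle(x):
    """(1/(2*pi)) * int_{-2}^{2} pi * (1 - |t|/2) e^{itx} dt; by symmetry
    only the cosine part survives."""
    re = quad(lambda t: np.pi * (1 - abs(t) / 2) * np.cos(t * x),
              -2, 2, points=[0])[0]
    return re / (2 * np.pi)

for x in (0.3, 1.7, 5.0):
    print(inv_triangle(x), np.sinc(x / np.pi) ** 2)  # the two columns agree
```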

More properties of zeros of an entire function

What's going on again

Previously we discussed the topological properties of the zeros of a nonzero entire function, or roughly, what those points look like: the set of zeros contains no limit point and is at most countable (countable or finite). So if it is finite, we can find the zeros one by one. For example, the function \(f(z)=z\) has exactly one zero. But what if the set is countably infinite? How fast can the number of zeros grow?

Another question. Suppose we have an entire function \(f\) whose zeros \(z_1,z_2,\cdots,z_n,\cdots\) are ordered increasingly by moduli: \[ |z_1| \leq |z_2| \leq \cdots \leq |z_n| \leq \cdots \] Is it possible to get a fine enough estimate of \(|z_n|\)? Interestingly enough, we can get there with the help of Jensen's formula.

Jensen's formula

Suppose \(\Omega=D(0;R)\), \(f \in H(\Omega)\), \(f(0) \neq 0\), \(0<r<R\), and \(z_1,z_2,\cdots,z_{n(r)}\) are the zeros of \(f\) in \(\overline{D}(0;r)\), listed with multiplicity; then \[ |f(0)|\prod_{n=1}^{n(r)}\frac{r}{|z_n|}=\exp\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\log|f(re^{i\theta})|d\theta\right]. \]

There is no need to worry about the assumption \(f(0) \neq 0\). Recall that every zero \(a\) has a unique positive integer \(m\) such that \(f(z)=(z-a)^mg(z)\) with \(g \in H(\Omega)\) and \(g(a) \neq 0\); the number \(m\) is called the order of the zero at \(a\). Therefore if \(f(0)=0\) we can simply consider another function, namely \(\frac{f(z)}{z^m}\) where \(m\) is the order of the zero at \(0\).

We are not proving this identity at this point. But it can be done by considering the function \[ g(z)=f(z)\prod_{n=1}^{m}\frac{r^2-\overline{z}_nz}{r(z_n-z)}\prod_{n=m+1}^{n(r)}\frac{z_n}{z_n-z} \] where \(m\) is found by ordering the \(z_j\) in such a way that \(z_1,\cdots,z_m \in D(0;r)\) and \(|z_{m+1}|=\cdots=|z_{n(r)}|=r\). One proves the identity by computing \(|g(0)|\) as well as \(\log|g(re^{i\theta})|\).
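Before moving on, here is a quick numerical sanity check of Jensen's formula for a polynomial of my own choosing, \(f(z)=(z-0.3)(z+0.5-0.2i)(z-3)\), with \(r=1\): both sides come out equal to the product of the moduli of the zeros left outside the disc.

```python
import numpy as np

# f(z) = (z - 0.3)(z - (-0.5 + 0.2i))(z - 3); note f(0) != 0.
zeros = np.array([0.3, -0.5 + 0.2j, 3.0])
f = lambda z: np.prod(np.subtract.outer(z, zeros), axis=-1)
r = 1.0  # no zero lies on the circle |z| = r

inside = zeros[np.abs(zeros) <= r]
lhs = np.abs(f(0j)) * np.prod(r / np.abs(inside))

# (1/(2*pi)) * int log|f(r e^{i theta})| d theta = mean over a uniform grid;
# the trapezoidal rule is spectrally accurate for smooth periodic functions.
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
rhs = np.exp(np.mean(np.log(np.abs(f(r * np.exp(1j * theta))))))
print(lhs, rhs)  # both ~ 3.0, the modulus of the one zero outside the disc
```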

Several applications

The number of zeros of \(f\) in \(\overline{D}(0;r)\)

For simplicity we shall assume \(f(0)=1\), which loses no generality. Let \[ M(r)=\sup_{\theta}|f(re^{i\theta})|\quad 0<r<\infty \] and let \(n(r)\) be the number of zeros of \(f\) in \(\overline{D}(0;r)\). By the maximum modulus theorem, we have \[ \log|f(2re^{i\theta})| \leq |f(2re^{i\theta})| \leq M(2r). \] If we insert Jensen's formula into this inequality and order the \(|z_n|\) by increasing moduli, we get \[ \log M(2r) \geq \frac{1}{2\pi}\int_{-\pi}^{\pi}\log|f(2re^{i\theta})|d\theta=\sum_{n=1}^{n(2r)}\log\frac{2r}{|z_n|}\geq\sum_{n=1}^{n(r)}\log\frac{2r}{|z_n|}\geq n(r)\log2, \] which implies \[ n(r)\leq\log_2M(2r). \] So \(n(r)\) is controlled by \(M(2r)\). The second and third inequalities look tricky and require more explanation.

First we should notice that \(z_n \in \overline{D}(0;2r)\) for every \(n \leq n(2r)\). Hence \(\log\frac{2r}{|z_n|} \geq \log1=0\) for each such \(z_n\), and the second inequality follows. For the third one, we simply have \[ \sum_{n=1}^{n(r)}\log\frac{2r}{|z_n|}=\sum_{n=1}^{n(r)}\left(\log2+\log\frac{r}{|z_n|}\right) \geq n(r)\log2. \] So this is it: the rapidity with which \(n(r)\) can grow is dominated by \(M(2r)\). Namely, the number of zeros of \(f\) in the closed disc of radius \(r\) is controlled by the maximum modulus of \(f\) on a circle of bigger radius.
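To see the bound in action, take \(f(z)=\cos{z}\) (so \(f(0)=1\)): its zeros are \(\pm\left(\frac{\pi}{2}+k\pi\right)\), and \(M(R)=\cosh{R}\) since \(|\cos(x+iy)|^2=\cos^2{x}+\sinh^2{y}\) is maximized on \(|z|=R\) at \(\pm iR\). A small script (my own illustration) compares \(n(r)\) with \(\log_2M(2r)\):

```python
import numpy as np

def n(r):
    """Zeros of cos in the closed disc of radius r: pi/2 + k*pi and their
    negatives, i.e. 2 * floor(r/pi + 1/2) in total."""
    return 2 * int(np.floor(r / np.pi + 0.5))

for r in (1.0, 5.0, 20.0, 100.0):
    bound = np.log(np.cosh(2 * r)) / np.log(2)  # log_2 M(2r) with M(R)=cosh R
    print(r, n(r), bound)  # n(r) ~ 2r/pi stays below bound ~ 2r/ln 2
```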

Examples based on different \(M(r)\)

Let's begin with a simple example. If \(f(z)=1\), we have \(M(r)=1\) for all \(r\), but also \(n(r)=0\), so the estimate tells us nothing. Indeed, as long as \(M(r)\) is bounded by a constant, \(f\) is bounded, and then by Liouville's theorem \(f\) is constant, so the estimate is of no use.

But if \(M(r)\) grows properly, things become interesting. For example, if we have \[ M(r) \leq \exp(Ar^k) \] where \(A\) and \(k\) are given positive numbers, we get the estimate \[ n(r) \leq \frac{A(2r)^k}{\log2}. \] This becomes interesting if we compare the logarithms of \(n(r)\) and \(r\), that is, \[ \limsup_{r\to\infty}\frac{\log{n(r)}}{\log{r}} \leq \lim_{r\to\infty} \frac{\log{A}+k\log(2r)-\log\log2}{\log{r}} =k. \] If we have \(f(z)=1-\exp(z^k)\) where \(k\) is a positive integer, we have \(n(r) \sim \frac{kr^k}{\pi}\), and also \[ \lim_{r\to\infty}\frac{\log{n(r)}}{\log r}=k. \]
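The zero count for \(f(z)=1-\exp(z^k)\) can be made explicit: \(z^k=2\pi in\) for \(n \in \mathbb{Z}\), each \(n \neq 0\) contributing \(k\) roots of modulus \((2\pi|n|)^{1/k}\), and \(n=0\) giving a zero of order \(k\) at the origin. A short computation (an illustration, not a proof) exhibits both asymptotics:

```python
import numpy as np

def n_zeros(r, k):
    """Zeros of 1 - exp(z^k) in |z| <= r, with multiplicity: the order-k zero
    at 0 plus k roots for each integer n with 0 < 2*pi*|n| <= r^k."""
    return k + 2 * k * int(np.floor(r ** k / (2 * np.pi)))

k = 3
for r in (10.0, 100.0, 1000.0):
    ratio = n_zeros(r, k) / (k * r ** k / np.pi)  # -> 1: n(r) ~ k r^k / pi
    slope = np.log(n_zeros(r, k)) / np.log(r)     # -> k
    print(r, ratio, slope)
```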

Lower bound of \(|z_{n(r)}|\)

We'll see here how to obtain a lower bound on \(|z_{n(r)}|\) using Jensen's formula, provided that \(M(r)\), or simply an upper bound of \(|f(z)|\), is properly described. Without loss of generality we shall assume that \(f(0)=1\). Also, we assume that the zeros of \(f\) are ordered by increasing moduli.

First we still consider \[ M(r) \leq \exp(Ar^k) \] and see what will happen.

By Jensen's formula, we have \[ \prod_{n=1}^{n(r)}\frac{r}{|z_n|}=\exp\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\log|f(re^{i\theta})|d\theta\right] \leq \exp(Ar^k). \] This gives \[ \prod_{n=1}^{n(r)}|z_n| \geq r^{n(r)}\exp(-Ar^k). \] By the arrangement of \(\{z_n\}\), we have \[ |z_{n(r)}| \geq \sqrt[n(r)]{\prod_{n=1}^{n(r)}|z_n|}\geq r\exp\left(-\frac{Ar^{k}}{n(r)}\right). \]
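Taking \(n(r)\)-th roots of the product bound above gives \(|z_{n(r)}| \geq r\exp\left(-\frac{Ar^k}{n(r)}\right)\), and we can watch this at work for \(f(z)=\cos{z}\), where \(f(0)=1\) and \(|\cos{z}| \leq e^{|z|}\) gives \(A=k=1\) (again just an illustration of my own):

```python
import numpy as np

def n(r):
    """Zeros of cos z in the closed disc of radius r."""
    return 2 * int(np.floor(r / np.pi + 0.5))

def largest_zero(r):
    """|z_{n(r)}|: the largest zero modulus not exceeding r."""
    m = int(np.floor(r / np.pi + 0.5))
    return (2 * m - 1) * np.pi / 2

for r in (5.0, 20.0, 100.0):
    bound = r * np.exp(-r / n(r))     # the lower bound with A = k = 1
    print(r, largest_zero(r), bound)  # largest_zero(r) >= bound each time
```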

Another example is when we have \[ |f(z)| \leq \exp(A|\Im{z}|) \] where \(\Im{z}\) denotes the imaginary part of \(z\).

We shall notice that in this case \[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\log|f(re^{i\theta})|d\theta \leq \frac{1}{2\pi}\int_{-\pi}^{\pi}A|r\sin\theta|d\theta=\frac{2Ar}{\pi}. \] Following Jensen's formula as above, we therefore have \[ |z_{n(r)}| \geq r\exp\left(-\frac{2Ar}{\pi n(r)}\right). \]

Topological properties of the zeros of a holomorphic function

What's going on

If for every \(z_0 \in \Omega\), where \(\Omega\) is a plane open set, the limit \[ \lim_{z \to z_0}\frac{f(z)-f(z_0)}{z-z_0} \] exists, we say that \(f\) is holomorphic (a.k.a. analytic) in \(\Omega\). If \(f\) is holomorphic in the whole plane, it is called entire. The class of all holomorphic functions on \(\Omega\) (denoted by \(H(\Omega)\)) has many interesting properties; for example, it forms a ring.

But what happens if we talk about the points where \(f\) is equal to \(0\)? Is it possible to find an entire function \(g\) such that \(g(z)=0\) if and only if \(z\) is on the unit circle? The topological property we will discuss in this post answers this question negatively.


Suppose \(\Omega\) is a region. As long as \(f\) is not identically equal to \(0\) on \(\Omega\), the set \[ Z(f)=\{z_0\in\Omega:f(z_0)=0\} \] is an at most countable set without limit point in \(\Omega\).

Trivially, if \(f(\Omega)=\{0\}\), we have \(Z(f)=\Omega\). The unit circle is uncountable, and every point of it is a limit point. Hence if an entire function equals \(0\) on the unit circle, then the function equals \(0\) on the whole plane.

Note: the connectivity of \(\Omega\) is important. For example, for two disjoint open sets \(\Omega_0\) and \(\Omega_1\), define \(f(z)=0\) on \(\Omega_0\) and \(f(z)=1\) on \(\Omega_1\); then on \(\Omega=\Omega_0\cup\Omega_1\) everything fails.

A simple application (Feat. Baire Category Theorem)

Before establishing the proof, let's see what we can do using this result.

Suppose that \(f\) is an entire function such that every power series expansion \[ f(z)=\sum c_n(z-a)^n \] has at least one coefficient equal to \(0\); then \(f\) is a polynomial.

Clearly we have \(n!c_n=f^{(n)}(a)\); thus for every \(a \in \mathbb{C}\) we can find a nonnegative integer \(n_0\) such that \(f^{(n_0)}(a)=0\). Thus we establish the identity \[ \bigcup_{n=0}^{\infty} Z(f^{(n)})=\mathbb{C}. \] Notice that each \(f^{(n)}\) is entire, so \(Z(f^{(n)})\) is either an at most countable set without limit point, or simply equal to \(\mathbb{C}\). If there exists a number \(N\) such that \(Z(f^{(N)})=\mathbb{C}\), then naturally \(Z(f^{(n)})=\mathbb{C}\) holds for all \(n \geq N\), and we see that the power series of \(f\) has finitely many nonzero coefficients; thus \(f\) is a polynomial.

So the question is: does this \(N\) always exist? Being an at most countable set without limit points, \(Z(f^{(n)})\) has empty interior (it is nowhere dense). But by the Baire category theorem, \(\mathbb{C}\) cannot be a countable union of nowhere dense sets (a set of the first category, if you like). This forces the existence of such an \(N\).


The proof will be finished using some basic topology techniques.

Let \(A\) be the set of all limit points of \(Z(f)\) in \(\Omega\). The continuity of \(f\) shows that \(A \subset Z(f)\). We'll show that if \(A \neq \varnothing\), then \(Z(f)=\Omega\).

First we claim that if \(a \in A\), then \(a \in \bigcap_{n \geq 0}Z(f^{(n)})\). That is, \(f^{(k)}(a) = 0\) for all \(k \geq 0\). Suppose this fails; then there is a smallest positive integer \(m\) such that \(c_m \neq 0\) in the power series of \(f\) on a disc \(D(a;r)\subset\Omega\): \[ f(z)=\sum_{n=1}^{\infty}c_n(z-a)^{n} \] (the series starts at \(n=1\) since \(c_0=f(a)=0\)).


Define \[ g(z)=\begin{cases} (z-a)^{-m}f(z)\quad&(z\in\Omega-\{a\}) \\ c_m\quad&(z=a). \end{cases} \]

It's clear that \(g \in H(D(a;r))\) since we have \[ g(z)=\sum_{n=0}^{\infty}c_{m+n}(z-a)^{n}\quad(z\in D(a;r)). \]

But \(a\) is a limit point of \(Z(f)\) and \(g\) vanishes on \(Z(f)\cap D(a;r)\setminus\{a\}\), so continuity shows that \(g(a)=0\), while \(g(a)=c_m \neq 0\). A contradiction.

Next fix a point \(b \in \Omega\). Since a region is path-connected, we may choose a curve (continuous mapping) \(\gamma\) defined on \([0,1]\) such that \(\gamma(0)=a\) and \(\gamma(1)=b\). Let

\[ \Gamma=\{t\in[0,1]:\gamma(t)\in\bigcap_{n \geq 0}Z(f^{(n)})\}. \] By hypothesis, \(0 \in \Gamma\). We shall prove that \(1 \in \Gamma\). Let \[ s = \sup\Gamma. \] There exists a sequence \(\{t_n\}\subset\Gamma\) such that \(t_n \to s\). The continuity of \(f^{(k)}\) and \(\gamma\) shows that \[ f^{(k)}(\gamma(s))=0 \quad\text{for all } k \geq 0. \]

Hence \(s \in \Gamma\). Choose a disc \(D(\gamma(s);\delta)\subset\Omega\). On this disc, \(f\) is represented by its power series centered at \(\gamma(s)\), and all the coefficients are \(0\). It follows that \(f(z)=0\) for all \(z \in D(\gamma(s);\delta)\), and further \(f^{(k)}(z)=0\) for all \(z \in D(\gamma(s);\delta)\) and all \(k \geq 0\). By the continuity of \(\gamma\), there exists \(\varepsilon>0\) such that \(\gamma((s-\varepsilon,s+\varepsilon)\cap[0,1])\subset D(\gamma(s);\delta)\), which implies that \((s-\varepsilon, s+\varepsilon)\cap[0,1]\subset\Gamma\). If \(s<1\), this would put points of \(\Gamma\) above \(\sup\Gamma\); hence \(s=1\), and therefore \(1 \in \Gamma\).

So far we have shown that if \(A \neq \varnothing\), then \(\Omega = \bigcap_{n \geq 0}Z(f^{(n)})\), which forces \(Z(f)=\Omega\). In other words, if \(Z(f)\) has a limit point in \(\Omega\), then \(f\) is identically \(0\); this is the contrapositive of what we wanted to prove.

When \(Z(f)\) contains no limit point, all points of \(Z(f)\) are isolated; hence each compact subset of \(\Omega\) contains at most finitely many points of \(Z(f)\). Since \(\Omega\) is \(\sigma\)-compact, \(Z(f)\) is at most countable. In this situation \(Z(f)\) is also called a discrete set.