A Step-by-Step Account of the Analytic Continuation of the Riemann Zeta Function


The Riemann zeta function is widely known:

\[ \zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}. \]

It is widely known mainly because of the celebrated hypothesis by Riemann, which remains unsolved after more than a century of attempts by mathematicians and the numerical verification of more than 150 million zeros by computers:

Riemann Hypothesis: The non-trivial zeros of \(\zeta(s)\) lie on the line \(\Re(s)=\frac{1}{2}\).

Popular science tells people how important and mysterious this hypothesis is, or how dramatic the consequences would be if it were settled one day. We can put that aside. A better question is: why would Riemann ever think about the zero set of such a function in the first place? According to Riemann, the distribution function of primes

\[ \pi(x)=\sum_{\substack{p \le x \\ p\text{ prime}}}1 \]

may be written as the series

\[ \pi(x)=R(x)-\sum_{\rho}R(x^\rho), \]

where

\[ R(x)=1+\sum_{n=1}^{\infty}\frac{1}{n\zeta(n+1)}\frac{(\log{x})^n}{n!} \]

and \(\rho\) varies over the zeros of \(\zeta(s)\). With this said, once the hypothesis is proven true, we will have a much more concrete description of the distribution of prime numbers.

But this is actually not the topic of this post. The author is not trying to prove the Riemann Hypothesis in a few pages; nobody could. In this post, we investigate the analytic continuation of \(\zeta(s)\) step by step, so that it even makes sense to think about evaluating the function at \(\Re(s)=\frac{1}{2}\). For the theory of analytic continuation, I recommend Real and Complex Analysis by Walter Rudin, although in that book he develops it toward modular functions and Picard's little theorem instead of the \(\zeta\) function.

A sketch of our procedure follows. The function \(\zeta(s)\) is not presented as a power series, although a straightforward observation shows that \(\sum_{n=1}^{\infty}\frac{1}{n^{s}}\) represents an analytic function in the half-plane \(\Re(s)>1\). We need to develop tools that can be readily applied to the study of the zeta function. Our two main tools are the Gamma function and the Mellin transform.

With these two tools developed, we will study the so-called completed zeta function, which will bring us to the continuation we are looking for.

We will carry out the details of the non-trivial steps, rather than of basic complex analysis. The reader may skip the preparation if they are familiar with this content.

Gamma Function

The Gamma function should be studied in an analysis course:

\[ \Gamma(s)=\int_0^\infty e^{-t}t^{s-1}\,dt, \quad s>0. \]

In an analysis course we have studied some of this function's important properties:

  • \(\Gamma(1)=1\).

  • \(\Gamma(s+1)=s\Gamma(s)\) (as a result, \(n!=\Gamma(n+1)\)).

  • \(\log\Gamma(s)\) is a convex function.

In this section however, we will study it in the context of complex analysis.

Theorem 1. The Gamma function

\[ \Gamma(s)=\int_0^\infty e^{-t}t^{s-1}dt \]

is well-defined as an analytic function in the half plane \(\Re(s)>0\).

Proof. Write \(s=u+iv\) with \(u>0\). Since \(t^{iv}=e^{iv\log t}\),

\[ \begin{aligned} |e^{-t}t^{s-1}|&=|e^{-t}t^{u-1}||t^{iv}| \\ &=|e^{-t}t^{u-1}||e^{iv\log t}| \\ &=e^{-t}t^{u-1}. \end{aligned} \]

Hence


\[ \begin{aligned} \int_{0}^{\infty}|e^{-t}t^{s-1}|dt &= \int_0^\infty e^{-t} t^{u-1}dt \\ &=\Gamma(u) \\ &<\infty. \end{aligned} \]

Hence the integral converges absolutely whenever \(\Re(s)>0\), and the other properties, analyticity in particular, follow, for example via Morera's theorem combined with Fubini's theorem. \(\square\)
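Readers who like to see numbers can test the convergence claim. The following is my own quick sanity check (not part of the original argument), using only the Python standard library: a midpoint Riemann sum for the truncated integral, compared against `math.gamma` on the real axis.

```python
import math

def gamma_integral(u, n=50_000, t_max=50.0):
    """Midpoint Riemann sum for Gamma(u) = int_0^inf e^{-t} t^{u-1} dt, real u >= 1."""
    h = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.exp(-t) * t ** (u - 1.0) * h
    return total

# The truncated sum agrees closely with math.gamma on the real axis.
for u in (1.0, 2.5, 4.0):
    print(u, gamma_integral(u), math.gamma(u))
```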

Theorem 2. If \(\Re(s)>0\), then

\[ \Gamma(s+1)=s\Gamma(s), \]

and as a consequence \(\Gamma(n+1)=n!\) for \(n=0,1,\dots\).

Proof. The second statement follows by induction, since \(\Gamma(1)=1\). For the first equation, we integrate by parts:

\[ \int_{\varepsilon}^{1/\varepsilon}\frac{d}{dt}(e^{-t} t^s)dt =-\int_{\varepsilon}^{1/\varepsilon}e^{-t}t^sdt +s\int_{\varepsilon}^{1/\varepsilon}e^{-t}t^{s-1}dt. \]

Taking \(\varepsilon \to 0\), the left-hand side, which equals \(e^{-t}t^s\big|_{t=\varepsilon}^{t=1/\varepsilon}\), vanishes (here \(\Re(s)>0\) is used), and we get what we want. \(\square\)

Now we are ready for the analytic continuation for the Gamma function, which builds a bridge to the analytic continuation of \(\zeta\).

Theorem 3. The function \(\Gamma(s)\) defined in theorem 1 admits an analytic continuation to a meromorphic function on the complex plane whose only singularities are simple poles at \(s=0,-1,-2,\dots\), with residue \(\frac{(-1)^n}{n!}\) at \(s=-n\).

Proof. It suffices to show that we can extend \(\Gamma\) to \(\Re(s)>-m\) for every integer \(m>0\) (hence to the whole complex plane). For this reason, we put \(\Gamma_0(s)=\Gamma(s)\), the function defined in theorem 1. Then

\[ \Gamma_1(s)=\frac{\Gamma_0(s+1)}{s} \]

is an analytic continuation of \(\Gamma_0(s)\) to \(\Re(s)>-1\), whose only singularity is a simple pole at \(s=0\). Then

\[ \operatorname{Res}_{s=0}\Gamma_1(s)=\lim_{s \to 0}s\Gamma_1(s) =\Gamma_0(1)=1. \]

Likewise, we can define

\[ \Gamma_2(s)=\frac{\Gamma_1(s+1)}{s}=\frac{\Gamma_0(s+2)}{s(s+1)}. \]

Overall, whenever \(m \ge 1\) is an integer, we can define

\[ \Gamma_m(s)=\frac{\Gamma_0(s+m)}{s(s+1)\cdots(s+m-1)}=\frac{\Gamma_0(s+m)}{\prod_{j=0}^{m-1}(s+j)}. \]

This function is meromorphic in \(\Re(s)>-m\) and has simple poles at \(s=0,-1,\dots,-m+1\) with residues

\[ \operatorname{Res}_{s=-n}\Gamma_m(s)= \frac{\Gamma(-n+m)}{ (m-1-n)!\,(-1)(-2)\cdots(-n) }=\frac{(-1)^n}{n!}. \]

Successive applications of theorem 2 show that \(\Gamma_m(s)=\Gamma(s)\) for \(\Re(s)>0\), so by the uniqueness of analytic continuation the functions \(\Gamma_m\) are compatible with one another. Therefore we have obtained the desired analytic continuation. \(\square\)

Throughout, unless specified otherwise, we will denote the function obtained in the proof of theorem 3 simply by \(\Gamma\).

For all \(s \in \mathbb{C}\) away from the poles, this function satisfies \(\Gamma(s+1)=s\Gamma(s)\), as it should.
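To make the construction concrete, here is a small numerical illustration of the proof of theorem 3 (my own sketch, not the author's code): \(\Gamma_m\), computed literally from the displayed formula, agrees with the standard library's extended Gamma function at a negative non-integer argument, and \((s+n)\Gamma_m(s)\) approaches the claimed residue.

```python
import math

def gamma_continued(s, m):
    """Gamma_m(s) = Gamma(s + m) / (s (s+1) ... (s+m-1)) from the proof of theorem 3,
    valid for real s > -m away from the poles 0, -1, ..., -m+1."""
    den = 1.0
    for j in range(m):
        den *= s + j
    return math.gamma(s + m) / den

# Agreement with the extended Gamma function at a negative non-integer point:
print(gamma_continued(-2.5, 3), math.gamma(-2.5))

# Residue at s = -n: (s + n) * Gamma_m(s) -> (-1)^n / n!  as s -> -n
n, eps = 2, 1e-7
print(eps * gamma_continued(-n + eps, 5), (-1) ** n / math.factorial(n))
```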

Before we proceed, we develop two relationships between the \(\Gamma\) function and the \(\zeta\) function, in an attempt to convince the reader that we are not doing all this for nothing.

If we perform a change of variables \(t=nu\) in the definition of \(\Gamma(s)\), we see

\[ \int_{0}^\infty e^{-nu}n^{s}u^{s-1}du=\Gamma(s). \]

This is to say,

\[ \begin{aligned} \frac{1}{n^s}\Gamma(s)&=\int_0^\infty e^{-nu}u^{s-1}du \\ \end{aligned} \]

Summing over all \(n \ge 1\) (assuming \(\Re(s)>1\)), we see

\[ \begin{aligned} \Gamma(s)\sum_{n=1}^{\infty}\frac{1}{n^s}&=\Gamma(s)\zeta(s) \\ &= \sum_{n=1}^{\infty}\int_0^\infty e^{-nu}u^{s-1}du \\ &=\int_0^\infty \sum_{n=1}^{\infty}e^{-nu}u^{s-1}du \\ &=\int_0^{\infty}\frac{e^{-u}u^{s-1}}{1-e^{-u}}du \\ &=\int_0^{\infty}\frac{u^{s-1}}{e^u-1}du. \end{aligned} \]

This relationship is beautiful, but it may make our computation a little more complicated. However, if we get our hands dirty earlier, our later study will be easier. Thus we will do an "uglier" change of variables \(t = \pi n^2y\) to obtain

\[ \pi^{-s}\Gamma(s)\frac{1}{n^{2s}}=\int_0^\infty e^{-\pi n^{2}y }y^{s-1}dy \]

which implies

\[ \pi^{-s}\Gamma(s)\zeta(2s)=\int_0^\infty \sum_{n=1}^{\infty} e^{-\pi n^2y}y^{s-1}dy. \]

In either case, it is legitimate to exchange the order of summation and integration, thanks to the monotone convergence theorem.
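For the sceptical reader, the first identity \(\Gamma(s)\zeta(s)=\int_0^\infty \frac{u^{s-1}}{e^u-1}du\) can be checked numerically. This sketch is mine, valid for real \(s>1\):

```python
import math

def bose_integral(s, n=100_000, u_max=60.0):
    """Midpoint Riemann sum for int_0^inf u^{s-1} / (e^u - 1) du, real s > 1."""
    h = u_max / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        total += u ** (s - 1.0) / math.expm1(u) * h
    return total

# Gamma(2) zeta(2) = 1 * pi^2/6:
print(bose_integral(2.0), math.pi ** 2 / 6)
# Gamma(4) zeta(4) = 6 * pi^4/90:
print(bose_integral(4.0), 6 * math.pi ** 4 / 90)
```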

Before we proceed, we need some more properties of the Gamma function.

Theorem 4 (Euler's reflection formula). For all \(s \in \mathbb{C}\),

\[ \Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin\pi s}. \]

Observe that this identity makes sense even at the poles: \(\Gamma(s)\) has simple poles at \(0,-1,\dots\), while \(\Gamma(1-s)\) has simple poles at \(1,2,\dots\). As a result, \(\Gamma(s)\Gamma(1-s)\) has simple poles at all integers, a property shared by \(\pi/\sin\pi{s}\).

By the uniqueness of analytic continuation, it suffices to prove the identity for \(0<s<1\); it then extends to all of \(\mathbb{C}\).

Proof (real version). First of all, observe that

\[ \csc{x}=\frac{1}{x}+\sum_{n=1}^{\infty}(-1)^n\frac{2x}{x^2-n^2\pi^2}. \]

On the other hand, we have

\[ \begin{aligned} \Gamma(x)\Gamma(1-x)&=B(x,1-x) \\ &=\int_0^1 t^{x-1}(1-t)^{-x}dt \\ &=\int_0^\infty \frac{1}{y^x(1+y)}dy \end{aligned} \]

by taking \(t=\frac{1}{1+y}\). Next we compute this integral over \((0,1]\) and \([1,\infty)\) separately.

\[ \begin{aligned} \int_0^1\frac{1}{y^x(1+y)}dy &= \int_0^1\frac{1}{y^x} \sum_{n=0}^{\infty}(-y)^ndy \\ &= \sum_{n=0}^{\infty}(-1)^n\int_0^1 y^{n-x}dy \\ &= \sum_{n=0}^{\infty}\frac{(-1)^{n}}{n+1-x} \\ &= \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n-x}. \end{aligned} \]

(One may be disturbed by our exchange of infinite sum and integral, recalling one's study of analysis, but will be relaxed after being informed of Arzelà's dominated convergence theorem for Riemann integrals.)

On the other hand, taking \(y=\frac{1}{u}\), we see

\[ \begin{aligned} \int_1^\infty\frac{1}{y^x(1+y)}dy &= \int_0^1\frac{u^{x-1}}{1+u}du \\ &=\frac{1}{x}+\sum_{n=1}^{\infty}\frac{(-1)^n}{n+x}. \end{aligned} \]

Summing up, one has

\[ \Gamma(x)\Gamma(1-x)=\frac{1}{x}+\sum_{n=1}^{\infty} (-1)^n\frac{2x}{x^2-n^2}. \]

It remains to show that \(\pi\csc{\pi{x}}\) admits the same expansion, which is not straightforward because neither Fourier series nor Taylor series of \(\csc\) can drive us there directly. One can start with the infinite product expansion of \(\sin{x}\), but here we follow an alternative approach, via the Fourier series of \(t \mapsto \cos\alpha{t}\) on \([-\pi,\pi]\). Notice that for \(\alpha \in \mathbb{R} \setminus \mathbb{Z}\),

\[ \cos\alpha{t}=\frac{\sin\pi \alpha}{\pi \alpha} +\sum_{n=1}^{\infty}(-1)^n\frac{2\alpha}{\pi(\alpha^2-n^2)} \sin\alpha\pi\cos{nt}. \]

Taking \(t=0\) and multiplying both sides by \(\pi\csc\pi\alpha\), we obtain what we want. \(\square\)
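A quick numerical check of the reflection formula on \((0,1)\) (my addition, plain Python):

```python
import math

# Gamma(x) Gamma(1-x) = pi / sin(pi x) for 0 < x < 1:
for x in (0.1, 0.25, 0.5, 0.9):
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    print(x, lhs, rhs)
```

At \(x=1/2\) this reduces to \(\Gamma(1/2)^2=\pi\), which we will meet again below.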

Proof (complex version). By definition,

\[ \begin{aligned} \Gamma(1-s)\Gamma(s) &= \int_0^\infty e^{-t}t^{s-1}\Gamma(1-s)dt \\ &= \int_0^\infty e^{-t}t^{s-1}\left(\int_0^\infty e^{-v}v^{-s}dv\right)dt \\ &= \int_0^\infty e^{-t}t^{s-1}t \left( \int_0^\infty e^{-ut}(ut)^{-s}du \right)dt \\ &= \int_0^\infty du \int_0^\infty e^{-t(u+1)}u^{-s}dt \\ &= \int_0^\infty \frac{u^{-s}}{1+u}du \end{aligned} \]

Here we performed the change of variables \(v=tu\) (for fixed \(t\)) and then exchanged the order of integration. To compute the last integral, we put \(u=e^x\), and it follows that

\[ \int_0^\infty \frac{u^{-s}}{1+u}du = \int_{-\infty}^{+\infty} \frac{e^{(1-s)x}}{1+e^x}dx. \]

The integral on the right-hand side can be computed to be \(\frac{\pi}{\sin(1-s)\pi}=\frac{\pi}{\sin\pi s}\). This is an easy consequence of the residue formula, applied to a rectangle of height \(2\pi\) centred at \(z=\pi i\) with one side on the real axis, letting the width tend to infinity. \(\square\)

  • In particular, by putting \(s=1/2\), we obtain

\[ \Gamma(1/2)=\sqrt{\pi}. \]

As a bonus, by putting \(t=u^2\) in the definition, we also see

\[ \begin{aligned} \Gamma(s)&=\int_0^\infty e^{-u^2}u^{2s-2}\cdot 2u\,du \\ &=2\int_0^\infty e^{-u^2}u^{2s-1}du, \end{aligned} \]

and hence



\[ \Gamma(1/2)=2\int_0^\infty e^{-u^2}du=\int_{-\infty}^{\infty}e^{-u^2}du=\sqrt{\pi}. \]

To conclude this section, we mention the following.

Theorem 5 (Legendre duplication formula).

\[ \Gamma(s)\Gamma(s+1/2)=\frac{2\sqrt\pi}{2^{2s}}\Gamma(2s). \]

We omit the proof; it can be found in standard references.
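As a quick sanity check (my addition), the identity can be verified numerically on the real axis:

```python
import math

# Gamma(s) Gamma(s + 1/2) = 2 sqrt(pi) 2^{-2s} Gamma(2s):
for s in (0.5, 1.0, 3.2):
    lhs = math.gamma(s) * math.gamma(s + 0.5)
    rhs = 2 * math.sqrt(math.pi) * 2.0 ** (-2 * s) * math.gamma(2 * s)
    print(s, lhs, rhs)
```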

Mellin and Fourier Transform of the Jacobi Theta Function

Behaviour of the Jacobi Theta Function

Put \(Z(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)\). It looks like we are pretty close to a great property of \(\zeta(s)\) if we can figure out \(Z\) a little bit more, because \(\pi^{-s/2}\) and \(\Gamma(s/2)\) behave nicely. Therefore we introduce the Jacobi theta function

\[ \theta(s)=\sum_{n \in \mathbb{Z}}e^{-\pi n^2 s}, \quad \Re(s)>0 \]

and try to deduce its relation with \(Z(s)\).

To begin with, we show the following.

Proposition 1. The theta function is holomorphic on the right half plane.

Proof. Let \(C\) be a compact subset of the right half plane, and put \(y_0=\inf_{s \in C}\Re(s)>0\). Pick any integer \(n_0\ge \frac{1}{y_0}\). For \(s=u+iv \in C\), we have \(u \ge y_0\), and therefore

\[ \begin{aligned} \sum_{|n|\ge n_0}|e^{-\pi n^2 s}| &= \sum_{|n| \ge n_0}e^{-\pi n^2 u} \\ &\le \sum_{|n| \ge n_0}e^{-\pi n^2 y_0} \\ &\le \sum_{|n| \ge n_0}e^{-\pi |n|} \end{aligned} \]

Therefore \(\theta(s)\) converges absolutely and uniformly on every compact subset of the right half plane. (Note that we used the fact that \(n^2y_0 \ge |n|n_0y_0 \ge |n|\) for \(|n| \ge n_0\).) Since each term is holomorphic, \(\theta(s)\) itself is holomorphic. \(\square\)

Therefore it is safe to work with the theta function. Now we are ready to deduce a functional equation.

Theorem 6. The theta function satisfies the following functional equation on \(\{\Re(s)>0\}\):

\[ \theta(s)=\frac{1}{\sqrt{s}}\theta\left(\frac{1}{s}\right) \]

The square root is chosen to be in the branch with positive real part.

Proof. Consider the function \(f(x)=e^{-\pi x^2}\). We know that it is a fixed point of the Fourier transform (in this normalisation):

\[ \hat{g}(t)=\int_{-\infty}^{\infty} g(x)e^{-2\pi ixt}dx. \]

Now we put \(g(x)=e^{-\pi u x^2}=f(\sqrt{u}x)\) for real \(u>0\). The Fourier transform of \(g\) is easy to deduce:

\[ \hat{g}(t)=\frac{1}{\sqrt{u}}\hat{f}\left(\frac{t}{\sqrt{u}}\right) = \frac{1}{\sqrt{u}}e^{-\pi t^2 / u}. \]

Since \(g(x)\) is a Schwartz function, by the Poisson summation formula, we have

\[ \sum_{n \in \mathbb{Z}}g(n)=\theta(u)=\sum_{n \in \mathbb Z} \hat{g}(n)=\frac{1}{\sqrt{u}}\theta\left(\frac{1}{u}\right). \]

This proves the identity for real \(u>0\); since both sides are holomorphic on the right half plane, the functional equation follows by analytic continuation. \(\square\)
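Since everything here is explicit, the functional equation is easy to test numerically for real arguments (my own check; truncating the series after 60 terms is far more than enough):

```python
import math

def theta(t, n_max=60):
    """Truncated Jacobi theta(t) = 1 + 2 sum_{n>=1} e^{-pi n^2 t}, real t > 0."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, n_max + 1))

# theta(t) = theta(1/t) / sqrt(t) on the positive reals:
for t in (0.25, 0.5, 2.0):
    print(t, theta(t), theta(1.0 / t) / math.sqrt(t))
```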

For Schwartz functions, also known as rapidly decreasing functions, we refer the reader to chapter 7 of W. Rudin's Functional Analysis.

Next we will study the behaviour of \(\theta(s)\) on the half real line, especially at the origin and infinity. By the functional equation above, once we have a better view around the origin, we can quickly know what will happen at the infinity.

Proposition 2. As the real number \(t \to 0^+\), the theta function is asymptotically equal to \(\frac{1}{\sqrt{t}}\). More precisely, when \(t\) is small enough, the following inequality holds:

\[ \left|\theta(t)-\frac{1}{\sqrt{t}}\right|<e^{-(\pi-1)/t}. \]

Proof. Rewrite \(\theta(t)\) in the form

\[ \theta(t)=1+2\sum_{n=1}^{\infty}e^{-\pi n^2 t}. \]


By the functional equation,

\[ \begin{aligned} \left|\theta(t)-\frac{1}{\sqrt{t}}\right| &= \left| \frac{1}{\sqrt{t}}\left(\theta\left(\frac{1}{t}\right)-1\right) \right| \\ &= \frac{2}{\sqrt{t}}\sum_{n=1}^{\infty}e^{-\pi n^2/t}. \end{aligned} \]

Pick \(t>0\) small enough so that

\[ e^{-1/t}<\frac{\sqrt{t}}{4}, \quad e^{-2\pi/t}<\frac{1}{2}. \]

It follows that

\[ \begin{aligned} \left|\theta(t)-\frac{1}{\sqrt{t}}\right| &= \frac{2}{\sqrt{t}}\sum_{n=1}^{\infty}e^{-\pi n^2/t} \\ &< \frac{1}{2}e^{1/t}\sum_{n=1}^{\infty}e^{-\pi/t} e^{-\pi(n^2-1)/t} \\ &=\frac{1}{2}e^{-(\pi-1)/t}\sum_{n=1}^{\infty}e^{-\pi(n+1)(n-1)/t}\\ &\le\frac{1}{2}e^{-(\pi-1)/t}\sum_{n=1}^{\infty} e^{-2\pi(n-1)/t} \\ &<\frac{1}{2}e^{-(\pi-1)/t}\sum_{n=1}^{\infty} 2^{-(n-1)} \\ &=e^{-(\pi-1)/t}. \end{aligned} \]

\(\square\)


As a result, we also know how \(\theta(t)\) behaves at the infinity. To be precise, we have the following corollary.

Corollary 1. The limit of \(\theta(t)\) at infinity is \(1\) in the following sense: when \(t\) is big enough,

\[ |\theta(t)-1| < e^{-(\pi-1)t}/\sqrt{t}. \]

Proof. Let \(t\) be big enough that \(\frac{1}{t}\) is small enough. Then

\[ \left|\theta\left(\frac{1}{t}\right)-\sqrt{t}\right| = \left|\sqrt{t}\theta(t)-\sqrt{t}\right|<e^{-(\pi-1)t} \]

according to proposition 2. The result follows. \(\square\)
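Both estimates are easy to witness numerically. The check below is my own; the chosen values of \(t\) satisfy the smallness/largeness conditions used in the proofs:

```python
import math

def theta(t, n_max=200):
    """Truncated Jacobi theta function for real t > 0."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, n_max + 1))

# Proposition 2: |theta(t) - 1/sqrt(t)| < e^{-(pi-1)/t} for small t:
t_small = 0.2
err0 = abs(theta(t_small) - 1.0 / math.sqrt(t_small))
print(err0, math.exp(-(math.pi - 1) / t_small))

# Corollary 1: |theta(t) - 1| < e^{-(pi-1) t} / sqrt(t) for large t:
t_big = 5.0
err_inf = abs(theta(t_big) - 1.0)
print(err_inf, math.exp(-(math.pi - 1) * t_big) / math.sqrt(t_big))
```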

The Mellin Transform of the Theta Function and the Zeta Function

To begin with, we introduce the Mellin transform. In a manner of speaking, this transform can actually be understood as the multiplicative version of the two-sided Laplace transform.

Definition. Given a function \(f:\mathbb{R}_+ \to \mathbb{C}\), the Mellin transform of \(f\) is defined to be

\[ \mathcal{M}_f(s)=\int_0^\infty f(x)x^{s-1}dx, \]

provided that the integral converges.

For example, \(\Gamma(s)\) is the Mellin transform of \(e^{-x}\). Moreover, for the two-sided Laplace transform

\[ \mathcal{B}_f(s)=\int_{-\infty}^{+\infty}e^{-sx}f(x)dx, \]

we actually have

\[ \mathcal{M}_f(s)=\mathcal{B}_{\tilde{f}}(s), \]

where \(\tilde{f}(x)=f(e^{-x})\).
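A minimal numerical illustration of the definition (mine, not the author's): the Mellin transform of \(e^{-x}\), approximated by a midpoint Riemann sum, reproduces \(\Gamma(s)\).

```python
import math

def mellin(f, s, n=100_000, x_max=60.0):
    """Midpoint Riemann sum for the Mellin transform int_0^inf f(x) x^{s-1} dx."""
    h = x_max / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += f(x) * x ** (s - 1.0) * h
    return total

# The Mellin transform of e^{-x} at s = 4 is Gamma(4) = 3! = 6.
print(mellin(lambda x: math.exp(-x), 4.0), math.gamma(4.0))
```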

Our goal is to recover \(Z(s)\) through the Mellin transform of \(\theta(x)\). As we have proved earlier,

\[ \pi^{-s}\Gamma(s)\zeta(2s)=\int_0^\infty \sum_{n=1}^{\infty} e^{-\pi n^2x}x^{s-1}dx. \]

It seems we can get our result quickly by studying \(\frac{1}{2}(\theta(x)-1)\). However, \(\theta(x)\) approaches \(\frac{1}{\sqrt{x}}\) rapidly as \(x \to 0\) and approaches \(1\) rapidly as \(x \to \infty\), so convergence at both ends has to be taken care of. Therefore we add correction terms and study the function

\[ \phi(s)=\int_0^1\left(\theta(x)-\frac{1}{\sqrt{x}}\right) x^{s/2-1}dx + \int_1^\infty (\theta(x)-1)x^{s/2-1}dx. \]

We use \(s/2\) in place of \(s\) because we do not want \(\zeta\) to be evaluated at \(2s\) all the time.

The partition \((0,1) \cup (1,\infty)\) immediately suggests the change of variables \(y=\frac{1}{x}\). As a result,

\[ \begin{aligned} \phi(s)&=\int_0^1\left(\theta(x)-\frac{1}{\sqrt{x}}\right) x^{s/2-1}dx + \int_1^\infty (\theta(x)-1)x^{s/2-1}dx \\ &= -\int_1^\infty\left( \theta\left(\frac{1}{y}\right)-\sqrt{y} \right)y^{1-s/2}(-y^{-2})dy -\int_0^1\left( \theta\left(\frac{1}{y}\right)-1 \right)y^{1-s/2}(-y^{-2})dy \\ &=\int_1^\infty (\theta(y)-1)y^{(1-s)/2-1}dy + \int_0^1 \left( \theta(y)-\frac{1}{\sqrt{y}} \right)y^{(1-s)/2-1}dy \\ &= \phi(1-s). \end{aligned} \]
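The symmetry \(\phi(s)=\phi(1-s)\) can also be tested numerically. The sketch below is my own; it evaluates both integrals with midpoint Riemann sums, and reuses the functional equation of \(\theta\) so that the truncated series stays accurate near the origin.

```python
import math

def theta(x, n_max=50):
    """Jacobi theta for real x > 0; for x < 1, apply the functional equation,
    which keeps the truncated sum accurate near the origin."""
    if x < 1.0:
        return theta(1.0 / x, n_max) / math.sqrt(x)
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * x) for n in range(1, n_max + 1))

def phi(s, n=4_000, x_max=40.0):
    """Midpoint Riemann sums for the two integrals defining phi(s), real s."""
    h1 = 1.0 / n
    part1 = 0.0
    for k in range(n):
        x = (k + 0.5) * h1
        part1 += (theta(x) - 1.0 / math.sqrt(x)) * x ** (s / 2 - 1.0) * h1
    h2 = (x_max - 1.0) / n
    part2 = 0.0
    for k in range(n):
        x = 1.0 + (k + 0.5) * h2
        part2 += (theta(x) - 1.0) * x ** (s / 2 - 1.0) * h2
    return part1 + part2

# The symmetry phi(s) = phi(1 - s):
print(phi(0.3), phi(0.7))
```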

Now we are ready to compute \(\phi(s)\); assume for the moment that \(\Re(s)>1\), so that all the integrals below converge. For the first part,

\[ \begin{aligned} \int_0^1\left(\theta(x)-\frac{1}{\sqrt{x}}\right)x^{s/2-1}dx &= \int_0^1\theta(x)x^{s/2-1}dx -\frac{2}{s-1} \\ &=\int_0^1 \sum_{n=-\infty}^{+\infty}e^{-\pi n^2 x}x^{s/2-1}dx-\frac{2}{s-1} \\ &= \int_0^1 x^{s/2-1}dx + 2\sum_{n=1}^{\infty}\int_0^1e^{-\pi n^2 x}x^{s/2-1}dx - \frac{2}{s-1} \\ &= 2\sum_{n=1}^{\infty}\int_0^1e^{-\pi n^2 x}x^{s/2-1}dx +\frac{2}{s}-\frac{2}{s-1}. \end{aligned} \]

On the other hand,

\[ \int_1^\infty (\theta(x)-1)x^{s/2-1}dx = 2\sum_{n=1}^{\infty}\int_1^\infty e^{-\pi n^2 x}x^{s/2-1}dx. \]


It follows that

\[ \begin{aligned} \phi(s)&=2\sum_{n=1}^{\infty}\int_0^\infty e^{-\pi n^2 x} x^{s/2-1}dx + \frac{2}{s}-\frac{2}{s-1} \\ &=2\left(Z(s)+\frac{1}{s}-\frac{1}{s-1}\right). \end{aligned} \]


Hence

\[ Z(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)=\frac{1}{2}\phi(s)-\frac{1}{s}+\frac{1}{s-1}. \]

Note that both integrals defining \(\phi(s)\) converge for every \(s \in \mathbb{C}\), thanks to the decay estimates of the previous subsection, so \(\phi\) is entire; the right-hand side therefore continues \(Z(s)\) to a meromorphic function on the whole plane.

In particular,

\[ \begin{aligned} Z(1-s)&=\frac{1}{2}\phi(1-s)-\frac{1}{1-s}+\frac{1}{1-s-1} \\ &=\frac{1}{2}\phi(s)-\frac{1}{s}+\frac{1}{s-1} \\ &=Z(s) \end{aligned} \]

Expanding this equation above, we see

\[ \pi^{-(1-s)/2}\Gamma((1-s)/2)\zeta(1-s) = \pi^{-s/2}\Gamma(s/2)\zeta(s). \]

This gives

\[ \zeta(1-s)=\pi^{\frac{1}{2}-s}\frac{\Gamma\left( \frac{s}{2}\right)}{\Gamma\left(\frac{1-s}{2}\right)}\zeta(s). \]

Finally we try to simplify the quotient above. By Legendre's duplication formula,

\[ \Gamma(s)=\frac{2^s}{2\sqrt{\pi}}\Gamma\left(\frac{s}{2}\right) \Gamma\left(\frac{s+1}{2}\right). \]

By Euler's reflection formula,

\[ \Gamma\left(\frac{1-s}{2}\right)\Gamma\left(\frac{s+1}{2}\right) =\frac{\pi}{\sin\pi\left(\frac{1-s}{2}\right)} =\frac{\pi}{\cos\frac{\pi s}{2}}. \]

Combining these two equations, we obtain

Proposition 3. The Riemann zeta function \(\zeta(s)\) admits an analytic continuation satisfying the functional equation

\[ \zeta(1-s)=2(2\pi)^{-s}\Gamma(s)\cos\frac{\pi s}{2}\zeta(s). \]

In particular, since we also have

\[ \zeta(s)=\frac{\pi^{s/2}}{\Gamma(s/2)}\left( \frac{1}{2}\phi(s)-\frac{1}{s}+\frac{1}{s-1} \right), \]

it is immediate that \(\zeta(s)\) has a simple pole at \(s=1\) with residue \(\frac{\sqrt{\pi}}{\Gamma(1/2)}=1\). Another concern is \(s=0\). Nevertheless, since we have

\[ \begin{aligned} \zeta(s)&= \frac{\pi^{s/2}}{2\Gamma(s/2)}\phi(s)- \frac{\pi^{s/2}}{s\Gamma(s/2)}+\frac{\pi^{s/2}}{(s-1)\Gamma(s/2)} \\ &=\frac{\pi^{s/2}}{2\Gamma(s/2)}\phi(s)- \frac{\pi^{s/2}}{2\Gamma(s/2+1)}+\frac{\pi^{s/2}}{(s-1)\Gamma(s/2)}, \end{aligned} \]

there is no pole at \(s=0\), since \(\frac{1}{\Gamma(s/2)}\) vanishes there while \(\phi(s)\) is entire; in fact, letting \(s \to 0\) shows that \(\zeta(0)=-\frac{1}{2}\). We now know a little bit more about the analyticity of \(\zeta(s)\).

Corollary 2. The Riemann zeta function \(\zeta(s)\) has its analytic continuation defined on \(\mathbb{C} \setminus \{1\}\), with a simple pole at \(s=1\) with residue \(1\).
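As a sanity check (my addition), the functional equation lets us evaluate \(\zeta\) to the left of the critical strip from its convergent series on the right. Note that \(\cos\frac{\pi s}{2}\) vanishes at \(s=3,5,\dots\), which produces the so-called trivial zeros \(-2,-4,\dots\):

```python
import math

def zeta_real(s, n_max=10_000):
    """zeta(s) for real s > 1: partial sum plus the integral tail estimate."""
    return sum(k ** (-s) for k in range(1, n_max + 1)) + n_max ** (1 - s) / (s - 1)

def zeta_left(s):
    """zeta(1 - s) computed from the functional equation, for real s > 1."""
    return 2 * (2 * math.pi) ** (-s) * math.gamma(s) * math.cos(math.pi * s / 2) * zeta_real(s)

# cos(pi s / 2) vanishes at s = 3, 5, ..., giving the trivial zeros:
print(zeta_left(3.0))            # zeta(-2) ≈ 0
print(zeta_left(4.0), 1 / 120)   # zeta(-3) = 1/120
```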

What is 1+2+...?

Now we are safe to compute \(\zeta(-1)\).

\[ \zeta(-1)=2(2\pi)^{-2}\Gamma(2)\cos(\pi)\zeta(2) = \frac{2}{4\pi^2}\cdot 1 \cdot(-1)\cdot \frac{\pi^2}{6} = - \frac{1}{12}. \]
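In code, this evaluation is a single line (my check; `math.cos(math.pi)` is exactly \(-1.0\) in floating point):

```python
import math

# zeta(-1) via the functional equation at s = 2, with zeta(2) = pi^2/6:
zeta_minus_one = 2 * (2 * math.pi) ** (-2) * math.gamma(2.0) * math.cos(math.pi) * (math.pi ** 2 / 6)
print(zeta_minus_one)  # -1/12 ≈ -0.0833...
```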

But I believe that, after this long computation of the analytic continuation, we can be confident enough to say that, when \(\Re(s) \le 1\), the value of \(\zeta(s)\) cannot be explained by the ordinary series \(\sum_{n=1}^{\infty}n^{-s}\), which does not even converge there. Claiming \(1+2+\dots=-\frac{1}{12}\) is a ridiculous abuse of language.

This post ends with Greg Gbur's criticism of the infamous Numberphile video.

So why is this important?  Part of what I’ve tried to show on this blog is that mathematics and physics can be extremely non-intuitive, even bizarre, but that they have their own rules and logic that make perfect sense once you get familiar with them.  The original video, in my opinion, acts more like a magic trick than an explanation: it shows a peculiar, non-intuitive result and tries to pass it off as absolute truth without qualification.  Making science and math look like incomprehensible magic does not do any favors for the scientists who study it nor for the public who would like to understand it.
