# A Step-by-Step Account of the Analytic Continuation of the Riemann Zeta Function

# Introduction

The **Riemann zeta function** is widely known to be the analytic continuation of Euler's zeta function:

$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}},\qquad \Re(s)>1.$$

It is widely known mainly because of the celebrated hypothesis by Riemann, which remains unsolved after more than a century of attempts by mathematicians and the numerical verification of over 150 million zeros by computers:

**Riemann Hypothesis.** The non-trivial zeros of $\zeta(s)$ lie on the line $\Re(s)=\frac{1}{2}$.

Popular science tells the audience how important and mysterious this hypothesis is, or how disastrous it would be if it were solved one day. We can put that aside. A fair question is: why would Riemann ever think about the zero set of *such* a function? Why not something else? According to Riemann, the distribution function of primes

$$\pi(x)=\#\{p\le x:p\text{ prime}\}$$

may be written as the series

$$\pi(x)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\,J\!\left(x^{1/n}\right),$$

where $\mu$ is the Möbius function,

$$J(x)=\operatorname{Li}(x)-\sum_{\rho}\operatorname{Li}(x^{\rho})-\log 2+\int_x^{\infty}\frac{dt}{t(t^2-1)\log t},$$

and $\rho$ varies over all non-trivial zeros of $\zeta(s)$. With this said, once the *hypothesis* is proven true, we will have a much more concrete description of the distribution of prime numbers.

But this is actually not the topic of this post. The author is not trying to prove the Riemann Hypothesis in a few pages; nobody could. In this post, we investigate the analytic continuation of $\zeta(s)$ step by step, so that it even makes sense to think about evaluating the function on the line $\Re(s)=\frac{1}{2}$. For the general theory of analytic continuation, I recommend *Real and Complex Analysis* by Walter Rudin, although his book develops it toward modular functions and Picard's little theorem rather than the $\zeta$ function and its relatives.

We will transfer the problem of the $\zeta$ function into the $\Gamma$ function and the $\theta$ function, and uncover what we want through the Mellin transform: we will observe the so-called completed zeta function, which will bring us to the continuation we are looking for.

We will carry out the details of the non-trivial steps, rather than the basics of complex analysis. The reader may skip the preparatory material if they are familiar with it.

# Gamma Function

The Gamma function is usually first encountered in an analysis course:

$$\Gamma(s)=\int_0^\infty e^{-t}t^{s-1}\,dt.$$

There, one establishes some of this function's important properties:

- $\Gamma(1)=1$.
- $\Gamma(s+1)=s\Gamma(s)$ (as a result, $n!=\Gamma(n+1)$).
- $\log\Gamma(s)$ is a convex function.
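As a quick numerical sanity check (not part of any proof), these properties can be confirmed with Python's standard `math` module:

```python
import math

# Gamma(1) = 1 and the recurrence Gamma(s+1) = s * Gamma(s)
assert abs(math.gamma(1.0) - 1.0) < 1e-12
for s in (0.5, 1.7, 3.2):
    assert abs(math.gamma(s + 1) - s * math.gamma(s)) < 1e-9

# n! = Gamma(n + 1)
assert all(math.factorial(n) == round(math.gamma(n + 1)) for n in range(10))

# Midpoint convexity of log Gamma on a couple of sample pairs
for a, b in ((0.5, 2.0), (1.0, 5.0)):
    assert math.lgamma((a + b) / 2) <= (math.lgamma(a) + math.lgamma(b)) / 2
```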

In this section, however, we will study it in the context of complex analysis.

**Theorem 1.** The Gamma function $\Gamma(s)=\int_0^\infty e^{-t}t^{s-1}\,dt$ is well-defined as an analytic function in the half plane $\Re(s)>0$.

*Proof.* If we write $s=u+iv$ with $u>0$ and $t=e^c$, then

$$|t^s|=\left|e^{c(u+iv)}\right|=e^{cu}=t^u.$$

Therefore

$$\int_0^\infty\left|e^{-t}t^{s-1}\right|dt=\int_0^\infty e^{-t}t^{u-1}\,dt<\infty.$$

The desired properties then follow: the integral converges locally uniformly on the half plane, so it defines a holomorphic function there. $\square$

**Theorem 2.** If $\Re(s)>0$, then $\Gamma(s+1)=s\Gamma(s)$, and as a consequence $\Gamma(n+1)=n!$ for $n=0,1,\dots$.

*Proof.* The second statement follows immediately because $\Gamma(1)=1$. For the first equation, we do an integration by parts:

$$\int_\varepsilon^{1/\varepsilon}e^{-t}t^{s}\,dt=\left[-e^{-t}t^{s}\right]_{\varepsilon}^{1/\varepsilon}+s\int_\varepsilon^{1/\varepsilon}e^{-t}t^{s-1}\,dt.$$

Taking $\varepsilon \to 0$, we get what we want. $\square$

Now we are ready for the analytic continuation for the Gamma function, which builds a bridge to the analytic continuation of $\zeta$.

**Theorem 3.** The function $\Gamma(s)$ defined in Theorem 1 admits an analytic continuation to a meromorphic function on the complex plane whose only singularities are simple poles at $s=0,-1,-2,\dots$, with residue $\frac{(-1)^n}{n!}$ at $s=-n$.

*Proof.* It suffices to show that we can extend $\Gamma$ to $\Re(s)>-m$ for all $m>0$, which implies that we can extend it to the whole complex plane. To this end, we put $\Gamma_0(s)=\Gamma(s)$, as defined in Theorem 1. Then

$$\Gamma_1(s)=\frac{\Gamma_0(s+1)}{s}$$

is an analytic continuation of $\Gamma_0(s)$ to $\Re(s)>-1$, with the only singularity a simple pole at $s=0$. Then

$$\Gamma_2(s)=\frac{\Gamma_1(s+1)}{s}=\frac{\Gamma_0(s+2)}{s(s+1)}.$$

Likewise, we can define

$$\Gamma_3(s)=\frac{\Gamma_0(s+3)}{s(s+1)(s+2)}.$$

Overall, whenever $m \ge 1$ is an integer, we can define

$$\Gamma_m(s)=\frac{\Gamma_0(s+m)}{s(s+1)\cdots(s+m-1)}.$$

This function is meromorphic in $\Re(s)>-m$ and has simple poles at $s=0,-1,\dots,-m+1$ with residues

$$\operatorname*{res}_{s=-n}\Gamma_m(s)=\frac{\Gamma_0(m-n)}{(-n)(-n+1)\cdots(-1)\cdot 1\cdot 2\cdots(m-n-1)}=\frac{(-1)^n}{n!}.$$

Successive applications of the functional equation $\Gamma(s+1)=s\Gamma(s)$ show that $\Gamma_m(s)=\Gamma(s)$ for $\Re(s)>0$. Therefore we have obtained the analytic continuation through this process. $\square$

Throughout, unless otherwise specified, $\Gamma$ will denote the function obtained in the proof of Theorem 3.

For all $s \in \mathbb{C}$ away from the poles, this function satisfies $\Gamma(s+1)=s\Gamma(s)$, as it should.
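The construction in the proof is easy to test numerically. The sketch below (an illustration for real arguments, not the post's construction verbatim) implements $\Gamma_m$ and checks it against the half-plane values and the residue formula:

```python
import math

def gamma_continued(s, m=20):
    # Gamma_m(s) = Gamma(s + m) / (s (s+1) ... (s+m-1)),
    # valid for real s > -m away from the poles 0, -1, -2, ...
    prod = 1.0
    for k in range(m):
        prod *= s + k
    return math.gamma(s + m) / prod

# Agrees with the usual Gamma on the original half plane ...
assert abs(gamma_continued(2.5) - math.gamma(2.5)) < 1e-9
# ... and extends it: Gamma(-1/2) = -2 sqrt(pi)
assert abs(gamma_continued(-0.5) + 2 * math.sqrt(math.pi)) < 1e-9
# The residue at s = -n is (-1)^n / n!
n, eps = 3, 1e-7
assert abs(eps * gamma_continued(-n + eps) - (-1) ** n / math.factorial(n)) < 1e-4
```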

Before we proceed, we develop two relationships between the $\Gamma$ function and the $\zeta$ function, to convince the reader that all this work is not for nothing.

If we perform a change of variable $t=nu$ in the definition of $\Gamma(s)$, we see

$$\Gamma(s)=\int_0^\infty e^{-nu}(nu)^{s-1}n\,du=n^s\int_0^\infty e^{-nu}u^{s-1}\,du.$$

This is to say,

$$\frac{\Gamma(s)}{n^s}=\int_0^\infty e^{-nu}u^{s-1}\,du.$$

Taking the sum over all $n$, we see, for $\Re(s)>1$,

$$\Gamma(s)\zeta(s)=\sum_{n=1}^\infty\int_0^\infty e^{-nu}u^{s-1}\,du=\int_0^\infty\frac{u^{s-1}}{e^u-1}\,du.$$

This relationship is beautiful, but it would make our computation a little more complicated. However, if we get our hands dirty earlier, our study will be easier. Thus we perform the "uglier" change of variable $t=\pi n^2y$ to obtain

$$\Gamma\left(\frac{s}{2}\right)=\pi^{s/2}n^{s}\int_0^\infty e^{-\pi n^2 y}y^{s/2-1}\,dy,$$

which implies

$$\pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)=\int_0^\infty y^{s/2-1}\sum_{n=1}^\infty e^{-\pi n^2 y}\,dy.$$

In either case, it is legitimate to exchange the order of summation and integration, by the monotone convergence theorem.
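A crude numerical check of the first identity at $s=2$, where $\Gamma(2)\zeta(2)=\pi^2/6$ (the trapezoidal rule and the cutoffs below are ad-hoc choices for illustration):

```python
import math

# Check Gamma(s) * zeta(s) = ∫_0^∞ u^{s-1}/(e^u - 1) du at s = 2.
def integrand(u, s=2.0):
    return u ** (s - 1) / math.expm1(u)

a, b, N = 1e-8, 40.0, 100_000          # truncated domain; the tail is negligible
h = (b - a) / N
total = 0.5 * (integrand(a) + integrand(b))
total += sum(integrand(a + i * h) for i in range(1, N))
total *= h

assert abs(total - math.pi ** 2 / 6) < 1e-3
```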

Before we proceed, we need some more properties of the Gamma function.

**Theorem 4 (Euler's reflection formula).** For all $s \in \mathbb{C}\setminus\mathbb{Z}$,

$$\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin\pi s}.$$

Observe that this identity is consistent at the poles: $\Gamma(s)$ has simple poles at $0,-1,-2,\dots$, while $\Gamma(1-s)$ has simple poles at $1,2,\dots$. As a result, $\Gamma(s)\Gamma(1-s)$ has simple poles at all integers, a property also shared by $\pi/\sin\pi s$.

By the principle of analytic continuation, it suffices to prove the identity for $0<s<1$.

*Proof (real version).* We expand the left hand side first:

$$\Gamma(s)\Gamma(1-s)=B(s,1-s)=\int_0^1 t^{s-1}(1-t)^{-s}\,dt=\int_0^\infty\frac{y^{-s}}{1+y}\,dy$$

by taking $t=\frac{1}{1+y}$. Next we compute this integral over both $(0,1]$ and $[1,\infty)$. For the first,

$$\int_0^1\frac{y^{-s}}{1+y}\,dy=\int_0^1\sum_{n=0}^\infty(-1)^n y^{n-s}\,dy=\sum_{n=0}^\infty\frac{(-1)^n}{n+1-s}.$$

(The exchange of integration and infinite sum is justified by Arzelà's dominated convergence theorem for Riemann integrals.)

On the other hand, taking $y=\frac{1}{u}$, we see

$$\int_1^\infty\frac{y^{-s}}{1+y}\,dy=\int_0^1\frac{u^{s-1}}{1+u}\,du=\sum_{n=0}^\infty\frac{(-1)^n}{n+s}.$$

Summing up, one has

$$\Gamma(s)\Gamma(1-s)=\frac{1}{s}+\sum_{n=1}^\infty(-1)^n\left(\frac{1}{s+n}+\frac{1}{s-n}\right)=\frac{1}{s}+\sum_{n=1}^\infty(-1)^n\frac{2s}{s^2-n^2}.$$

It remains to show that $\pi\csc\pi s$ satisfies the same expansion, which is not straightforward because neither Fourier series nor Taylor series gets us there directly. One can start with the infinite product expansion of $\sin x$, but here we follow an alternative approach. Notice that for $\alpha \in \mathbb{R} \setminus \mathbb{Z}$, the Fourier series of $\cos\alpha t$ on $[-\pi,\pi]$ reads

$$\cos\alpha t=\frac{\sin\pi\alpha}{\pi}\left(\frac{1}{\alpha}+\sum_{n=1}^\infty(-1)^n\frac{2\alpha}{\alpha^2-n^2}\cos nt\right).$$

Taking $t=0$ and multiplying both sides by $\pi\csc\pi\alpha$, we obtain what we want. $\square$

*Proof (complex version).* By definition,

$$\Gamma(s)\Gamma(1-s)=\int_0^\infty e^{-t}t^{s-1}\,\Gamma(1-s)\,dt=\int_0^\infty\int_0^\infty e^{-t(1+u)}u^{-s}\,du\,dt=\int_0^\infty\frac{u^{-s}}{1+u}\,du.$$

Here we performed the change of variable $v=tu$ in $\Gamma(1-s)=\int_0^\infty e^{-v}v^{-s}\,dv$. To compute the last integral, we put $u=e^x$, and it follows that

$$\int_0^\infty\frac{u^{-s}}{1+u}\,du=\int_{-\infty}^\infty\frac{e^{(1-s)x}}{1+e^x}\,dx.$$

The integral on the right hand side can be computed to be $\frac{\pi}{\sin(1-s)\pi}=\frac{\pi}{\sin\pi s}$. This is an easy consequence of the residue formula (by considering a rectangle with centre $z=\pi i$, height $2\pi$, and one side on the real axis). $\square$
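The reflection formula is easy to spot-check numerically:

```python
import math

# Spot-check Euler's reflection formula Gamma(s) Gamma(1-s) = pi / sin(pi s)
for s in (0.1, 0.25, 0.5, 0.9):
    lhs = math.gamma(s) * math.gamma(1 - s)
    rhs = math.pi / math.sin(math.pi * s)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```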

Here is a bonus from Euler's reflection formula. Taking $s=\frac{1}{2}$ gives $\Gamma\left(\frac{1}{2}\right)^2=\pi$, hence $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$. Putting $t=u^2$ in the definition of the $\Gamma$ function, we also see

$$\Gamma(s)=2\int_0^\infty e^{-u^2}u^{2s-1}\,du.$$

Therefore

$$\int_0^\infty e^{-u^2}\,du=\frac{1}{2}\Gamma\left(\frac{1}{2}\right)=\frac{\sqrt{\pi}}{2},$$

which is the famous Gaussian integral.

To conclude this section, we also mention

**Theorem 5 (Legendre duplication formula).** For all $s$ away from the poles,

$$\Gamma(s)\Gamma\left(s+\frac{1}{2}\right)=2^{1-2s}\sqrt{\pi}\,\Gamma(2s).$$

We omit the proof here.
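Again, a quick numerical spot-check:

```python
import math

# Spot-check Legendre's duplication formula
#   Gamma(s) Gamma(s + 1/2) = 2^(1 - 2s) sqrt(pi) Gamma(2s)
for s in (0.3, 1.0, 2.5, 4.0):
    lhs = math.gamma(s) * math.gamma(s + 0.5)
    rhs = 2 ** (1 - 2 * s) * math.sqrt(math.pi) * math.gamma(2 * s)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```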

# Mellin and Fourier Transform of the Jacobi Theta Function

## Behaviour of the Jacobi Theta Function

Put $Z(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$. It looks like we are pretty close to a great property of $\zeta(s)$ if we can figure out $Z$ a little bit more, because $\pi^{-s/2}$ and $\Gamma(s/2)$ behave nicely. Therefore we introduce the Jacobi theta function

$$\theta(s)=\sum_{n=-\infty}^{\infty}e^{-\pi n^2 s},\qquad\Re(s)>0,$$

and try to deduce its relation with $Z(s)$.

To begin with, we first show that the series is well behaved.

**Proposition 1.** The theta function is holomorphic on the right half plane.

*Proof.* Let $C$ be a compact subset of the right half plane, and put $y_0=\inf_{s \in C}\Re(s)>0$. Pick any integer $n_0\ge \frac{1}{y_0}$. For $s=u+iv \in C$, we have $u \ge y_0$ and therefore, for $|n| \ge n_0$,

$$\left|e^{-\pi n^2 s}\right|=e^{-\pi n^2 u}\le e^{-\pi n^2 y_0}\le e^{-\pi|n|}.$$

Therefore $\theta(s)$ converges absolutely and uniformly on any compact subset of the right half plane. (Note that we used the fact that $n^2y_0 \ge |n|n_0y_0 \ge |n|$ when studying the convergence.) Since each term is holomorphic, we have shown that $\theta(s)$ itself is holomorphic. $\square$

Therefore it is safe to work with the theta function. Now we are ready to deduce a functional equation.

**Theorem 6.** The theta function satisfies the functional equation on $\{\Re(s)>0\}$:

$$\theta(s)=\frac{1}{\sqrt{s}}\,\theta\left(\frac{1}{s}\right).$$

The square root is chosen in the branch with positive real part.

*Proof.* Consider the function $f(x)=e^{-\pi x^2}$. We know that this is a fixed point of the Fourier transform (in this convenient normalisation):

$$\hat{f}(\xi)=\int_{-\infty}^\infty f(x)e^{-2\pi ix\xi}\,dx=f(\xi).$$

Now, for real $u>0$, we put $g(x)=e^{-\pi ux^2}=f(\sqrt{u}x)$. The Fourier transform of $g$ is easy to deduce:

$$\hat{g}(\xi)=\frac{1}{\sqrt{u}}\hat{f}\left(\frac{\xi}{\sqrt{u}}\right)=\frac{1}{\sqrt{u}}e^{-\pi\xi^2/u}.$$

Since $g(x)$ is a Schwartz function, by the Poisson summation formula, we have

$$\theta(u)=\sum_{n\in\mathbb{Z}}g(n)=\sum_{n\in\mathbb{Z}}\hat{g}(n)=\frac{1}{\sqrt{u}}\,\theta\left(\frac{1}{u}\right).$$

The result for complex $s$ follows from an analytic continuation. $\square$

For Schwartz functions, also known as rapidly decreasing functions, we refer the reader to chapter 7 of W. Rudin’s *Functional Analysis*.
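The functional equation can be verified numerically for real arguments by truncating the series (the cutoff `N=60` below is an arbitrary illustrative choice):

```python
import math

def theta(t, N=60):
    # Truncated Jacobi theta: sum of exp(-pi n^2 t) over |n| <= N, for t > 0
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

# The functional equation theta(t) = theta(1/t) / sqrt(t) on the positive reals
for t in (0.2, 0.5, 1.0, 3.0):
    assert abs(theta(t) - theta(1 / t) / math.sqrt(t)) < 1e-10
```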

Next we will study the behaviour of $\theta(t)$ on the positive real line, especially at the origin and at infinity. By the functional equation above, once we have a better view around the origin, we will quickly know what happens at infinity.

**Proposition 2.** As the real number $t \to 0^+$, the theta function is asymptotically equivalent to $\frac{1}{\sqrt{t}}$. More precisely, when $t$ is small enough, the following inequality holds:

$$\left|\theta(t)-\frac{1}{\sqrt{t}}\right|\le e^{-(\pi-1)/t}.$$

*Proof.* Rewrite $\theta(t)$ in the form

$$\theta(t)=\frac{1}{\sqrt{t}}\,\theta\left(\frac{1}{t}\right)=\frac{1}{\sqrt{t}}\left(1+2\sum_{n=1}^\infty e^{-\pi n^2/t}\right).$$

Therefore

$$\left|\theta(t)-\frac{1}{\sqrt{t}}\right|=\frac{2}{\sqrt{t}}\sum_{n=1}^\infty e^{-\pi n^2/t}\le\frac{2}{\sqrt{t}}\cdot\frac{e^{-\pi/t}}{1-e^{-\pi/t}}.$$

Pick $t>0$ small enough so that

$$\frac{2}{\sqrt{t}\left(1-e^{-\pi/t}\right)}\le e^{1/t}.$$

It follows that

$$\left|\theta(t)-\frac{1}{\sqrt{t}}\right|\le e^{1/t}\,e^{-\pi/t}=e^{-(\pi-1)/t}.$$

$\square$

As a result, we also know how $\theta(t)$ behaves at infinity. To be precise, we have the following corollary.

**Corollary 1.** The limit of $\theta(t)$ at infinity is $1$ in the following sense: when $t$ is big enough,

$$|\theta(t)-1|\le\frac{1}{\sqrt{t}}\,e^{-(\pi-1)t}.$$

*Proof.* Put $u=\frac{1}{t}$. When $u$ is small enough, we have $\left|\theta(u)-\frac{1}{\sqrt{u}}\right|\le e^{-(\pi-1)/u}$. As a result, using $\theta(u)=\sqrt{t}\,\theta(t)$ and $\frac{1}{\sqrt{u}}=\sqrt{t}$,

$$|\theta(t)-1|=\frac{1}{\sqrt{t}}\left|\theta(u)-\frac{1}{\sqrt{u}}\right|\le\frac{1}{\sqrt{t}}\,e^{-(\pi-1)t},$$

as expected. $\square$
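Both estimates can be confirmed numerically (the sample points and the truncation below are arbitrary illustrative choices):

```python
import math

def theta(t, N=200):
    # Truncated Jacobi theta function for real t > 0
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

# Near 0: theta(t) ~ 1/sqrt(t), with error at most exp(-(pi - 1)/t)
for t in (0.1, 0.2, 0.3):
    assert abs(theta(t) - 1 / math.sqrt(t)) < math.exp(-(math.pi - 1) / t)

# Near infinity: theta(t) -> 1, with error at most exp(-(pi - 1) t) / sqrt(t)
for t in (3.0, 5.0, 10.0):
    assert abs(theta(t) - 1) < math.exp(-(math.pi - 1) * t) / math.sqrt(t)
```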

## The Mellin Transform of the Theta Function and the Zeta Function

To begin with, we introduce the Mellin transform. In a manner of speaking, this transform can actually be understood as the multiplicative version of a two-sided Laplace transform.

**Definition.** Given a function $f:\mathbb{R}_+ \to \mathbb{C}$, the Mellin transform of $f$ is defined to be

$$M(f)(s)=\int_0^\infty f(x)x^{s-1}\,dx,$$

provided that the integral converges.

For example, $\Gamma(s)$ is the Mellin transform of $e^{-x}$. Moreover, for the two-sided Laplace transform

$$\mathcal{L}(f)(s)=\int_{-\infty}^\infty f(x)e^{-sx}\,dx,$$

we actually have

$$M(f)(s)=\mathcal{L}(\tilde{f})(s),$$

where $\tilde{f}(x)=f(e^{-x})$.
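A quick numerical illustration: the Mellin transform of $e^{-x}$ evaluated at $s=3$ should return $\Gamma(3)=2$ (the quadrature below is a crude sketch with ad-hoc cutoffs):

```python
import math

# Mellin transform of f(x) = e^{-x} at s = 3, by the trapezoidal rule
s = 3.0

def f(x):
    return math.exp(-x) * x ** (s - 1)

a, b, N = 1e-9, 50.0, 100_000          # truncated domain; the tail is negligible
h = (b - a) / N
mellin = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N)))

assert abs(mellin - math.gamma(s)) < 1e-3   # Gamma(3) = 2
```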

Our goal is to recover $Z(s)$ through the Mellin transform of $\theta(x)$. As we have proved earlier, for $\Re(s)>1$,

$$Z(s)=\pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)=\int_0^\infty x^{s/2-1}\,\frac{\theta(x)-1}{2}\,dx.$$

It seems we can get our result quite quickly by studying $\frac{1}{2}(\theta(x)-1)$. However, $\theta(x)$ goes to $\frac{1}{\sqrt{x}}$ rapidly as $x \to 0$, and goes to $1$ rapidly as $x \to \infty$, so convergence has to be taken care of. Therefore we add error-correction terms and study the function

$$\phi(s)=\int_0^1 x^{s/2-1}\,\frac{\theta(x)-\frac{1}{\sqrt{x}}}{2}\,dx+\int_1^\infty x^{s/2-1}\,\frac{\theta(x)-1}{2}\,dx,$$

which is entire by Proposition 2 and Corollary 1. We use $s/2$ in place of $s$ because we do not want $\zeta$ to be evaluated at $2s$ all the time.

The partition $(0,1) \cup (1,\infty)$ immediately inspires the change of variable $y=\frac{1}{x}$, combined with the functional equation $\theta(1/y)=\sqrt{y}\,\theta(y)$. As a result,

$$\int_0^1 x^{s/2-1}\,\frac{\theta(x)-\frac{1}{\sqrt{x}}}{2}\,dx=\int_1^\infty y^{\frac{1-s}{2}-1}\,\frac{\theta(y)-1}{2}\,dy.$$

Now we are ready to compute $\phi(s)$. For the first part, the change of variable above gives

$$\phi(s)=\int_1^\infty\left(x^{\frac{s}{2}-1}+x^{\frac{1-s}{2}-1}\right)\frac{\theta(x)-1}{2}\,dx,$$

which is manifestly invariant under $s \mapsto 1-s$. On the other hand, the correction terms can be integrated explicitly:

$$Z(s)-\phi(s)=\int_0^1 x^{s/2-1}\,\frac{\frac{1}{\sqrt{x}}-1}{2}\,dx=\frac{1}{s-1}-\frac{1}{s}.$$

Therefore

$$Z(s)=\phi(s)-\frac{1}{s}-\frac{1}{1-s}.$$

Since $\phi(s)$ is entire, the right hand side is meromorphic on all of $\mathbb{C}$, and this equation defines the analytic continuation of $Z(s)$. Therefore, as both $\phi(s)$ and the correction terms are invariant under $s \mapsto 1-s$,

$$Z(s)=Z(1-s).$$

In particular,

$$\pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)=\pi^{-\frac{1-s}{2}}\Gamma\left(\frac{1-s}{2}\right)\zeta(1-s).$$

Expanding this equation above, we see

$$\Gamma\left(\frac{1-s}{2}\right)\zeta(1-s)=\pi^{\frac{1}{2}-s}\,\Gamma\left(\frac{s}{2}\right)\zeta(s).$$

This gives

$$\zeta(1-s)=\pi^{\frac{1}{2}-s}\,\frac{\Gamma\left(\frac{s}{2}\right)}{\Gamma\left(\frac{1-s}{2}\right)}\,\zeta(s).$$

Finally we simplify the quotient above. By Legendre's duplication formula (with $\frac{s}{2}$ in place of $s$),

$$\Gamma\left(\frac{s}{2}\right)=2^{1-s}\sqrt{\pi}\,\frac{\Gamma(s)}{\Gamma\left(\frac{s+1}{2}\right)}.$$

By Euler's reflection formula (applied at $\frac{1-s}{2}$),

$$\frac{1}{\Gamma\left(\frac{1-s}{2}\right)}=\frac{1}{\pi}\,\Gamma\left(\frac{1+s}{2}\right)\cos\frac{\pi s}{2}.$$

Inserting these two equations into the right hand side of the formula for $\zeta(1-s)$, we obtain

**Proposition 3.** The Riemann zeta function $\zeta(s)$ admits an analytic continuation satisfying the functional equation

$$\zeta(1-s)=2^{1-s}\pi^{-s}\cos\left(\frac{\pi s}{2}\right)\Gamma(s)\,\zeta(s).$$

In particular, since we also have

$$\zeta(s)=\pi^{s/2}\,\frac{\phi(s)-\frac{1}{s}-\frac{1}{1-s}}{\Gamma\left(\frac{s}{2}\right)},$$

it is immediate that $\zeta(s)$ admits a simple pole at $s=1$ with residue $1$ (note that $\pi^{1/2}/\Gamma(1/2)=1$). Another concern is $s=0$. Nevertheless, since we have

$$\frac{1}{\Gamma\left(\frac{s}{2}\right)}=\frac{s}{2}+O(s^2)\qquad(s\to 0),$$

the zero of $\frac{1}{\Gamma(s/2)}$ cancels the $-\frac{1}{s}$ term, and there is no pole at $s=0$ (notice that $\phi(s)$ is entire). We now know a little bit more about the analyticity of $\zeta(s)$.

**Corollary 2.** The Riemann zeta function $\zeta(s)$ has an analytic continuation to $\mathbb{C} \setminus \{1\}$, with a simple pole at $s=1$ of residue $1$.
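The functional equation can be tested numerically. The sketch below computes $\zeta$ for real $s>0$ via the Dirichlet eta series (a standard alternative route to the continuation on $\Re(s)>0$, not the construction of this post) and checks Proposition 3 at a sample point:

```python
import math

def zeta(s, N=200_000):
    # Riemann zeta for real s > 0, s != 1, via the alternating (Dirichlet eta)
    # series, accelerated by averaging two consecutive partial sums.
    S = sum((-1) ** (n - 1) * n ** (-s) for n in range(1, N + 1))
    S_next = S + (-1) ** N * (N + 1) ** (-s)
    eta = (S + S_next) / 2
    return eta / (1 - 2 ** (1 - s))

# Sanity check: zeta(2) = pi^2 / 6
assert abs(zeta(2.0) - math.pi ** 2 / 6) < 1e-6

# zeta(1-s) = 2^(1-s) pi^(-s) cos(pi s / 2) Gamma(s) zeta(s), at s = 0.7
s = 0.7
lhs = zeta(1 - s)
rhs = 2 ** (1 - s) * math.pi ** (-s) * math.cos(math.pi * s / 2) * math.gamma(s) * zeta(s)
assert abs(lhs - rhs) < 1e-5
```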

# What is 1+2+…?

Now we are safe to compute $\zeta(-1)$. Taking $s=2$ in the functional equation of Proposition 3,

$$\zeta(-1)=2^{-1}\pi^{-2}\cos(\pi)\,\Gamma(2)\,\zeta(2)=\frac{1}{2}\cdot\frac{1}{\pi^2}\cdot(-1)\cdot 1\cdot\frac{\pi^2}{6}=-\frac{1}{12}.$$
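The same evaluation in code, using the functional equation at $s=2$ together with $\zeta(2)=\pi^2/6$:

```python
import math

# zeta(-1) = 2^(-1) pi^(-2) cos(pi) Gamma(2) zeta(2)
zeta_minus_1 = 0.5 * math.pi ** -2 * math.cos(math.pi) * math.gamma(2.0) * (math.pi ** 2 / 6)
assert abs(zeta_minus_1 - (-1 / 12)) < 1e-12
```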

But I believe that, after this long computation of the analytic continuation, we can be confident enough to say that, when $\Re(s) \le 1$, the Riemann zeta function $\zeta(s)$ absolutely cannot be explained by its ordinary definition $\sum_{n=1}^{\infty}n^{-s}$, which diverges there. Claiming $1+2+\dots=-\frac{1}{12}$ is a ridiculous and unacceptable abuse of language.

We conclude this post with a criticism by Greg Gbur on the infamous Numberphile video.

> So why is this important? Part of what I’ve tried to show on this blog is that mathematics and physics can be extremely non-intuitive, even bizarre, but that they have their own rules and logic that make perfect sense once you get familiar with them. The original video, in my opinion, acts more like a magic trick than an explanation: it shows a peculiar, non-intuitive result and tries to pass it off as absolute truth without qualification. Making science and math look like incomprehensible magic does not do any favors for the scientists who study it nor for the public who would like to understand it.

# References / Further Reading

- Serge Lang, *Complex Analysis.*
- Elias M. Stein & Rami Shakarchi, *Complex Analysis.*
- Jürgen Neukirch, *Algebraic Number Theory.*
- Jakob Glas & Kevin Yeh, *The Classical Theta Function and the Riemann Zeta Function.*
