
Notes on complex analysis

Section 3.2 Infinite products and the zeta function

Subsection 3.2.1 The zeta function

Definition 3.2.1. Riemann zeta function.

Let \(s\) be a real number. Define the Riemann zeta function by
\begin{equation*} \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. \end{equation*}
This is essentially taking the “\(p\)-series” from calculus and turning it into a function using the power as a parameter. It isn't hard to see that this function will converge for real \(s > 1\) via the integral test (exercise).
We claim that the series in fact converges and defines an analytic function of the complex variable \(s\) on the half-plane \(\RE s > 1\text{.}\) First, note that for a positive real number \(n\) and a complex power \(s = a + ib\text{,}\)
\begin{align*} \abs{n^{s}} \amp= \abs{n^{a+ib}}\\ \amp= n^a \abs{n^{ib}}\\ \amp= n^a \abs{e^{ib \ln n}}\\ \amp= n^a = n^{\RE s}. \end{align*}
For \(\eps \gt 0\text{,}\) let \(s\) be a complex number with \(\RE s > 1+\eps\text{.}\) Consider the sequence of functions
\begin{equation*} f_n(s) = \frac{1}{n^s}. \end{equation*}
We have
  1. \begin{equation*} \abs{f_n(s)} = \abs{\frac{1}{n^s}} = \frac{1}{n^{\RE s}} \leq \frac{1}{n^{1 + \eps}} =: M_n. \end{equation*}
  2. \begin{equation*} \sum_{n=1}^\infty M_n = \sum_{n=1}^\infty \frac{1}{n^{1+\eps}} \lt \infty. \end{equation*}
By the Weierstrass \(M\)-test (Theorem 3.1.4), the series converges uniformly and absolutely to \(\zeta\) where \(\RE s > 1 + \eps\text{.}\)
Now note that each \(f_n = \frac{1}{n^s}\) is analytic, and so each partial sum
\begin{equation*} F_k = \sum_{n=1}^k \frac{1}{n^s} \end{equation*}
is analytic. So \(F_k\) is a sequence of analytic functions converging uniformly to \(\zeta\) on \(\RE s > 1 + \eps\text{.}\) Then by Theorem 3.1.3, we conclude that \(\zeta\) is analytic on \(\RE s > 1+\eps\text{.}\) Because \(\eps\) was arbitrary, the result extends to all points in the half-plane \(\RE s > 1\text{.}\)
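As a quick numerical sanity check (not part of the proof, and the cutoff below is arbitrary), here is a short Python sketch that computes partial sums of the series at a sample complex point with \(\RE s > 1\) and watches them stabilize.

# Partial sums of the zeta series at a sample point with Re s > 1.
# This is only a numerical illustration; the cutoff N is arbitrary.
N = 100_000
s = 2 + 3j

p1 = sum(1 / n**s for n in range(1, N + 1))
p2 = sum(1 / n**s for n in range(1, 2 * N + 1))
print(p1)
print(abs(p2 - p1))  # small, consistent with convergence of the series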
In fact, we can say slightly more almost immediately. First, let's codify the result that we used in the proof of the theorem above.
The first thing we might like to do is ask if we know the value of \(\zeta\) for any input points at all. The famous problem identified with the case \(s=2\) is called the Basel problem. Its solution will showcase some of the issues we face when trying to extend the zeta function beyond its natural domain.
First, we'll show Euler's (ingenious but rather iffy) argument to highlight the points where we need to dig deeper. To begin, Euler knew that
\begin{equation*} \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots, \end{equation*}
and so
\begin{equation*} \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \ldots. \end{equation*}
He wants to factor this “polynomial” using the fundamental theorem of algebra as
\begin{equation*} \frac{\sin x}{x} = (x - \pi)(x+\pi)(x - 2\pi)(x + 2\pi)\ldots, \end{equation*}
but he can't, as the infinite product on the right has no hope of converging. Instead, he uses factors of the form \((1 \pm \frac{x}{k\pi})\) - after all, these factors vanish at the same set of points. Making the rather large leap that the function is determined by its zeros and a single value (as in the case of a polynomial), Euler writes
\begin{align*} \frac{\sin x}{x} \amp= (1 - \frac{x}{\pi})(1 + \frac{x}{\pi})(1 - \frac{x}{2\pi})(1 + \frac{x}{2\pi})\ldots\\ \amp= (1 - \frac{x^2}{\pi^2})(1 - \frac{x^2}{4\pi^2})(1 - \frac{x^2}{9\pi^2})\ldots \end{align*}
which at least has a shot at converging. He then notices that a pattern emerges when you start formally multiplying out this product.
\begin{equation*} (1 - \frac{x^2}{\pi^2})(1 - \frac{x^2}{4\pi^2}) = 1 - (\frac{1}{\pi^2} + \frac{1}{2^2 \pi^2}) x^2 + O(x^4), \end{equation*}
and
\begin{equation*} (1 - \frac{x^2}{\pi^2})(1 - \frac{x^2}{4\pi^2})(1 - \frac{x^2}{9\pi^2}) =1 - (\frac{1}{\pi^2} + \frac{1}{2^2 \pi^2} + \frac{1}{3^2 \pi^2}) x^2 + O(x^4). \end{equation*}
He makes the leap that
\begin{equation*} \prod_{k=1}^\infty (1 - \frac{x^2}{k^2 \pi^2}) = 1 - (\sum_{k=1}^\infty \frac{1}{k^2 \pi^2})x^2 + O(x^4). \end{equation*}
Now, still with the underlying assumption that all of this converges and makes sense, he compares the coefficients of \(x^2\) in his two power series to get
\begin{equation*} - (\sum_{k=1}^\infty \frac{1}{k^2 \pi^2}) = \frac{-1}{3!}, \end{equation*}
which is just
\begin{equation*} \zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}. \end{equation*}
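Euler's value is easy to test numerically. Here is a quick Python check (not part of Euler's argument; the cutoff is arbitrary) comparing a partial sum of \(\sum 1/k^2\) against \(\pi^2/6\text{.}\)

import math

# Compare a partial sum of sum 1/k^2 with Euler's value pi^2/6.
N = 100_000
partial = sum(1 / k**2 for k in range(1, N + 1))
print(partial, math.pi**2 / 6)  # agree to roughly 1/N, i.e. about five digits here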
This argument raises serious questions. Among them:
  • Is an analytic function determined uniquely by its zeros?
  • Does the fundamental theorem of algebra actually apply to infinite “polynomials”?
  • What does it mean to converge as an infinite product?
These questions will be answered in the subsequent discussion.

Subsection 3.2.2 Infinite products

Euler's solution to the Basel problem points out that we need to know something about convergence of infinite products. We'll start with real numbers and work up to the complex case. Given a sequence \(a_n\) of real numbers, we can consider the sequence of partial products \(p_k = a_1 a_2 \ldots a_k\text{.}\) Letting \(k\) tend to infinity gives an infinite product (in the same way that an infinite sum is just a representation of a sequence of partial sums). The standard symbol used to denote the infinite product of the sequence \(a_n\) is
\begin{equation*} a_1 a_2 \ldots = \prod_{n=1}^\infty a_n\text{.} \end{equation*}
We can likewise denote the \(k\)th partial product by
\begin{equation*} p_k = a_1 a_2 \ldots a_k = \prod_{n=1}^k a_n. \end{equation*}

Definition 3.2.6.

An infinite product with no zero factors is convergent if and only if there exists some real number \(p\neq 0\) so that \(p_k \to p\) as \(k \to \infty\text{.}\) In this case, \(p\) is the value of the product. If instead \(p_k \to 0\text{,}\) we say the product diverges to \(0\text{.}\) If there are infinitely many zero factors \(a_n\text{,}\) the product diverges to \(0\text{.}\) If there are only finitely many zero factors, the product converges to \(0\) provided that the product formed by deleting the zero factors converges.
We can construct a theory of convergence entirely in terms of products, but for our needs it is more elegant to convert questions about products into equivalent questions about series via the logarithm (assuming the factors are positive, or working with absolute values).
To be concrete, write the factors as \(1 + a_n\) with \(a_n \geq 0\text{.}\) We claim that \(\prod (1 + a_n)\) converges if and only if \(\sum a_n\) converges. To see this, apply the logarithm to the product:
\begin{equation*} \ln \prod (1 + a_n) = \sum \ln (1 + a_n). \end{equation*}
The series converges if and only if the product does, since the partial sums of the series are the logarithms of the partial products and both \(\ln\) and \(e^x\) are continuous. Now note that
\begin{equation*} \frac{\ln(1 + x)}{x} \to 1 \text{ as } x \to 0, \end{equation*}
(by L'Hôpital's rule or by inspecting the power series for \(\ln (1 + x)\)). If \(a_n\) does not tend to zero, the sum, and thus the product, will diverge by the test for divergence. So assume \(a_n \to 0\text{.}\) Then, as
\begin{equation*} \lim_{n \to \infty} \frac{\ln(1+a_n)}{a_n} = 1, \end{equation*}
the limit comparison test gives that \(\sum \ln(1 + a_n)\) converges if and only if \(\sum a_n\) converges, which establishes the claim.
This result establishes, for example, the convergence of the \(p\)-series-like products \(\prod (1 + \frac{1}{n^p})\) where \(p > 1\text{.}\) That is, infinite products converge when their factors tend quickly to 1.
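Here is a quick numerical illustration of this dichotomy (the cutoffs are arbitrary): partial products of \(\prod (1 + \frac{1}{n^p})\) stabilize for \(p = 2\) but grow without bound for \(p = 1\text{.}\)

def partial_product(p, N):
    # partial product prod_{n=1}^N (1 + 1/n^p)
    prod = 1.0
    for n in range(1, N + 1):
        prod *= 1 + 1 / n**p
    return prod

for N in (10, 100, 1000, 10_000):
    print(N, partial_product(2, N), partial_product(1, N))
# The p = 2 column settles down; the p = 1 column grows like N,
# since that product telescopes to N + 1.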
How do we take this and lift it to infinite products of complex numbers? We'll follow Tao here.
Suppose that \(\sum \abs{a_n - 1}\) converges; we claim that the product \(\prod a_n\) converges. Because \(\sum \abs{a_n - 1}\) converges, the test for divergence gives \(a_n \to 1\text{.}\) Then there exists an \(N\) so that the tail of the sequence is contained in the disk \(D(1, 1/2)\) for \(n > N\text{.}\) Factor the product into
\begin{equation*} \prod_{n=1}^\infty a_n = \prod_{n=1}^N a_n \times \prod_{n = N+1}^\infty a_n. \end{equation*}
Convergence of the whole product will then be equivalent to convergence of \(\prod_{n = N+1}^\infty a_n\text{.}\) Because the tail lies in \(D(1, 1/2)\text{,}\) which stays away from the origin, we can apply the standard branch of the complex logarithm, \(\Log z = \ln\abs{z} + i \Arg z\text{,}\) on \(D(1, 1/2)\) and write
\begin{equation*} a_n = e^{\Log a_n}. \end{equation*}
Now, for any \(M > N\text{,}\)
\begin{equation*} \prod_{n=N+1}^M a_n = \prod_{n=N+1}^M e^{\Log a_n} = \exp ( \sum_{n=N+1}^M \Log a_n ). \end{equation*}
Similarly to the argument in the previous proof, the power series for \(\Log\) centered at \(1\) shows that \(\Log a_n = O(\abs{a_n-1})\) on \(D(1, 1/2)\text{,}\) and so by comparison with \(\sum \abs{a_n - 1}\text{,}\) the series \(\sum_{n=N+1}^\infty \Log a_n\) converges absolutely. Letting \(M \to \infty\) and applying the continuity of the complex exponential, we conclude that \(\prod_{n=N+1}^\infty a_n\) converges to \(e^{\sum_{n=N+1}^\infty \Log a_n}\text{,}\) which is nonzero.
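Here is a short numerical sketch of this argument (under the illustrative assumption \(a_n = 1 + \frac{i}{3n^2}\text{,}\) so that \(\sum \abs{a_n - 1}\) converges and every factor lies in \(D(1, 1/2)\)): the partial products agree with the exponential of the sum of principal logarithms.

import cmath

# Factors a_n = 1 + i/(3n^2) satisfy sum |a_n - 1| < infinity.
N = 10_000
factors = [1 + 1j / (3 * n**2) for n in range(1, N + 1)]

prod = 1 + 0j
for z in factors:
    prod *= z

log_sum = sum(cmath.log(z) for z in factors)  # principal branch; each factor is near 1
print(prod)
print(cmath.exp(log_sum))  # matches the partial product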
We also have the following product form of the Weierstrass \(M\)-test to measure uniform convergence of products of functions.

Subsection 3.2.3 Product formula for the zeta function

We're now in position to connect the zeta function to the primes by way of the Euler product formula. First, let's recall the fundamental theorem of arithmetic.
For each \(n \in \mathbb N\text{,}\) we have a unique representation
\begin{equation*} n = \prod_{p} p^{a_p}, \end{equation*}
where the exponents \(a_p\) are nonnegative integers and only finitely many of them are nonzero. We can formally rearrange this as
\begin{equation*} \frac{1}{n^s} = \prod_p \frac{1}{p^{a_p s}}. \end{equation*}
Now, we're going to build from the bottom. Given \(x \geq 1, m \geq 0\text{,}\) let \(S_{x,m}\) be the set of natural numbers with prime factorization containing primes no larger than \(x\) and exponents no greater than \(m\text{.}\) That is, \(n \in S_{x,m}\) has a representation
\begin{equation*} n = \prod_{p\leq x} p^{a_p} \end{equation*}
and \(a_p \leq m\text{.}\) The first observation to make is that as \(x, m \to \infty\text{,}\) \(S_{x, m} \to \mathbb N\text{.}\)
The next step is a combinatorial observation. To illustrate, suppose that \(x = 2\) and \(m = 2\text{.}\) Then
\begin{equation*} \sum_{n \in S_{2,2}} \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{2^{2s}}. \end{equation*}
Now let \(x = 3\text{,}\) so that we're picking up numbers with \(3\) in the prime factorization. Here's where we see the pattern we're interested in show up.
\begin{align*} \sum_{n \in S_{3, 2}} \frac{1}{n^s} \amp= 1 + \frac{1}{2^s} + \frac{1}{2^{2s}} + \frac{1}{3^s} + \frac{1}{3^{2s}} + \frac{1}{2^s 3^s} + \frac{1}{2^{2s}3^s} + \frac{1}{2^s 3^{2s}} + \frac{1}{2^{2s}3^{2s}}\\ \amp= (1 + \frac{1}{2^s} + \frac{1}{2^{2s}})(1 + \frac{1}{3^s} + \frac{1}{3^{2s}})\\ \amp= \prod_{p\leq 3} \sum_{j = 0}^2 \frac{1}{p^{js}}. \end{align*}
If you see what's happening here, it shouldn't be hard to believe that
\begin{equation*} \sum_{n \in S_{x, m}} \frac{1}{n^s} = \prod_{p \leq x} \sum_{j=0}^m \frac{1}{p^{js}} \end{equation*}
for all \(x \geq 1, m \geq 0\text{.}\)
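For small \(x\) and \(m\text{,}\) this identity can be checked by brute force. The sketch below (a throwaway computation; the particular primes, exponent bound, and value of \(s\) are arbitrary choices) enumerates \(S_{x,m}\) via exponent tuples and compares the two sides.

from itertools import product as cartesian

# Check sum_{n in S_{x,m}} 1/n^s = prod_{p <= x} sum_{j=0}^m 1/p^{js}
# for x = 5, m = 3, at a sample real exponent s > 1.
primes = [2, 3, 5]
m, s = 3, 2.5

# Left side: each element of S_{x,m} is a product of primes <= x with exponents <= m.
lhs = 0.0
for exps in cartesian(range(m + 1), repeat=len(primes)):
    n = 1
    for p, j in zip(primes, exps):
        n *= p**j
    lhs += 1 / n**s

# Right side: the finite product of finite geometric sums.
rhs = 1.0
for p in primes:
    rhs *= sum(1 / p**(j * s) for j in range(m + 1))

print(lhs, rhs)  # equal up to floating point error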
Now, we'll take a look at what happens to this sum as \(x, m \to \infty\text{.}\) First, as \(m\to \infty\text{,}\) each inner sum converges by the geometric series formula, since \(\abs{1/p^s} = p^{-\RE s} \lt 1\text{:}\)
\begin{equation*} \sum_{j=0}^m \frac{1}{p^{js}} \to \sum_{j=0}^\infty \frac{1}{p^{js}} = \frac{1}{1 - \frac{1}{p^s}} = (1 - \frac{1}{p^s})\inv. \end{equation*}
An application of the dominated convergence theorem (when \(\RE s > 1\)) lets us push the limit into the product (remember that products are essentially sums under the log, and that sums are integrals!), and we get
\begin{equation*} \sum_{n \in \mathbb{N}} \frac{1}{n^s} = \lim_{x, m \to \infty} \sum_{n \in S_{x, m}} \frac{1}{n^s} = \prod_{p } (1 - \frac{1}{p^s})\inv. \end{equation*}
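Here is a quick numerical illustration (with arbitrary cutoffs) comparing a truncated Euler product over the primes up to \(100\) with a partial sum of the series at \(s = 2\text{.}\)

def primes_up_to(x):
    # simple trial division; fine for small x
    ps = []
    for n in range(2, x + 1):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

s = 2.0
euler = 1.0
for p in primes_up_to(100):
    euler *= 1 / (1 - p**(-s))

series = sum(1 / n**s for n in range(1, 100_001))
print(euler, series)  # both approximate pi^2/6 = 1.6449...; the truncated
                      # product runs a bit low since it only uses primes up to 100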

Subsection 3.2.4 Properties of the zeta function

We'll now begin looking at some properties of the zeta function. We'll collect some of our results from the earlier discussion here as well.
Facts 1 and 2 were proved in the earlier discussion. Fact 3 follows from the product formula for \(\zeta\text{,}\) as the product is absolutely convergent and has no zero factors. Facts 4 and 5 are more difficult; we'll first look at their implications, and then try to figure out where the functional equation came from.
Let's calculate \(\zeta(-1)\) using the functional equation.
\begin{equation*} \zeta(-1) = 2^{-1} \pi^{-2} \sin \frac{-\pi }{2} \Gamma(2) \zeta(2) =\frac{-1}{2\pi^2} \frac{\pi^2}{6} = \frac{-1}{12}. \end{equation*}
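If you have mpmath available, you can check this value numerically; mpmath's zeta implements the analytic continuation, and the form of the functional equation used above can be verified directly at \(s = -1\text{.}\)

from mpmath import mp, zeta, gamma, sin, pi, power

mp.dps = 30  # working precision in decimal digits

s = -1
lhs = zeta(s)
rhs = power(2, s) * power(pi, s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
print(lhs)        # -0.08333... = -1/12
print(lhs - rhs)  # essentially zero, consistent with the functional equation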
If you want to dive into the chaos that this calculation unleashed a couple of years ago, you can look at this post [1] by Terry Tao or this Mathologer video [2] in response to this Numberphile video [3] on the subject.
Our first discussion is about where the zeros of \(\zeta\) are. (Note that we're abusing notation and using \(\zeta\) to represent the continuation of \(\zeta\) to \(\C\text{.}\)) We can probe this using the functional equation. Remember that \(\Gamma\) is a non-zero function with simple poles at \(0\) and the negative integers. \(\sin \frac{\pi s}{2}\) has simple zeros at all even integers, and we know that \(\zeta\) has only one pole, at \(s=1\text{.}\) So consider the product
\begin{equation*} \sin \frac{\pi s}{2} \Gamma(1-s) \zeta(1-s). \end{equation*}
The poles of \(\Gamma(1-s)\text{,}\) which happen when \(1 - s = 0, -1, -2, \ldots\) or \(s = 1, 2, 3, 4, \ldots\) are being canceled into removable singularities by corresponding zeros in the product, with the exception of \(s = 1\text{.}\) The poles at the even values of \(s\) are being eaten by the simple zeros of \(\sin \frac{\pi s}{2}\text{,}\) which leaves poles at \(s = 3, 5, 7, \ldots\) that must be canceled by \(\zeta(1-s)\text{.}\) But this forces \(\zeta(1-s)\) to have simple zeros at \(s = 3, 5, 7, \ldots\text{,}\) which corresponds to \(\zeta(s)\) possessing simple zeros at \(s = -2, -4, -6, \ldots\text{.}\) These are called the trivial zeros of \(\zeta\text{.}\)
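We can observe these trivial zeros numerically as well (again assuming mpmath is available, since its zeta implements the continuation).

from mpmath import mp, zeta

mp.dps = 25

# The trivial zeros sit at the negative even integers.
for s in (-2, -4, -6, -8):
    print(s, zeta(s))  # all (numerically) zero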
The final subset of \(\C\) to consider is the set \(\{s: 0 \leq \RE s \leq 1\}\text{,}\) which is called the critical strip. Here too we analyze the implications of the functional equation. It isn't hard to see that if \(\zeta(s) = 0\) in the critical strip, then so too must \(\zeta(1-s) = 0\) - that is, the zeros in the critical strip are reflected across the line \(\RE s=1/2\text{.}\)
It might not be obvious, but the non-trivial zeros of \(\zeta\) have deep connections to other areas of mathematics. In fact, the main object of inquiry in our seminar, the prime number theorem, is essentially equivalent to a statement about the zeros of \(\zeta\text{.}\) The prime number theorem is a consequence of the fact that \(\zeta\) has no zeros on the line \(\RE s =1\text{.}\) We will prove this on our way to the prime number theorem itself.
This leads us to what is broadly considered the most important unsolved problem in mathematics, the Riemann hypothesis: the conjecture that every non-trivial zero of \(\zeta\) lies on the line \(\RE s = 1/2\text{.}\) Every known zero of \(\zeta\) in the critical strip is indeed on this line.
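For a numerical glimpse (assuming mpmath again; the imaginary part \(14.134725\ldots\) is the well-known approximate height of the first non-trivial zero), the sketch below evaluates \(\zeta\) near the first zero on the critical line and at its reflection.

from mpmath import mp, zeta, mpc

mp.dps = 25

# The first non-trivial zero is approximately 1/2 + 14.134725...i.
rho = mpc(0.5, 14.134725)
print(abs(zeta(rho)))      # tiny, since rho is very close to a zero
print(abs(zeta(1 - rho)))  # also tiny, illustrating the s <-> 1 - s reflection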
Given the connection between the zeros of \(\zeta\) and the prime number theorem, one might expect quite a number of results that would follow from the resolution of the Riemann hypothesis. An outstanding discussion of some consequences can be found in this mathoverflow post [4].
[1] terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/
[2] www.youtube.com/watch?v=YuIIjLr6vUA
[3] www.youtube.com/watch?v=w-I6XTVZXww
[4] mathoverflow.net/questions/17209/consequences-of-the-riemann-hypothesis