Prime Numbers and the Riemann Zeta Function

Lots of people know that the Riemann Hypothesis has something to do with prime numbers, but most introductions fail to say what or why. I’ll try to give one angle of explanation.

Layman’s Terms

Suppose you have a bunch of friends, each with an instrument that plays at a frequency equal to the imaginary part of a zero of the Riemann zeta function. If the Riemann Hypothesis holds, you can create a song that sounds exactly at the prime-powered beats, by simply telling all your friends to play at the same volume.

Mathematical Terms

Let $ \pi(x) $ denote the number of primes less than or equal to x. Recall Gauss’s approximation: $ \pi(x) \approx \int_2^x \frac{1}{\log t} \,dt $ (i.e., the “probability that a number n is prime” is approximately $ \frac{1}{\log n} $).
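To see how good this approximation is, here’s a quick numerical sketch (the function names are my own): a sieve-based $ \pi(x) $ against a midpoint-rule evaluation of Gauss’s integral.

```python
import math

def prime_count(x):
    """pi(x): count primes <= x with a simple sieve of Eratosthenes."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x**0.5) + 1):
        if sieve[n]:
            sieve[n*n::n] = [False] * len(sieve[n*n::n])
    return sum(sieve)

def gauss_approx(x, steps=100_000):
    """Approximate the integral of 1/log(t) from 2 to x by the midpoint rule."""
    h = (x - 2) / steps
    return sum(h / math.log(2 + (i + 0.5) * h) for i in range(steps))

# pi(1000) = 168; the integral comes out near 176.6 -- close, and always
# slightly high, which is the systematic error Riemann's formula corrects.
print(prime_count(1000), round(gauss_approx(1000), 1))
```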

Riemann improved on Gauss’s approximation by discovering an exact formula $ P(x) = A(x) - E(x) $ for counting the primes, where

  • $ P(x) = \sum_{p^k \le x} \frac{1}{k} $ performs a weighted count of the prime powers less than or equal to x. [Think of this as a generalization of the prime counting function.]
  • $ A(x) = \int_0^x \frac{1}{\log t} \,dt + \int_x^{\infty} \frac{1}{t(t^2 - 1) \log t} \,dt - \log 2 $ is a kind of generalization of Gauss’s approximation.
  • $ E(x) = \sum_{z : \zeta(z) = 0} \int_0^{x^z} \frac{1}{\log t} \,dt $ is an error-correcting factor that depends on the non-trivial zeroes of the Riemann zeta function.

In other words, if we use a simple Gauss-like approximation to the distribution of the primes, the zeroes of the Riemann zeta function sweep up after our errors.
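The weighted count $P(x)$ is easy to compute directly for small x (a sketch; helper names are mine):

```python
def weighted_prime_power_count(x):
    """P(x) = sum of 1/k over prime powers p^k <= x."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
    total = 0.0
    for p in range(2, x + 1):
        if is_prime(p):
            k = 1
            while p**k <= x:
                total += 1 / k
                k += 1
    return total

# Primes 2, 3, 5, 7 each add 1; the squares 4, 9 add 1/2 each;
# the cube 8 adds 1/3: total = 4 + 1/2 + 1/2 + 1/3 = 16/3.
print(weighted_prime_power_count(10))
```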

Let’s dig a little deeper. Instead of using Riemann’s formula, I’m going to use an equivalent version

$$ \psi(x) = \left( x + \sum_{n = 1}^{\infty} \frac{x^{-2n}}{2n} - \log 2\pi \right) - \sum_{z : \zeta(z) = 0} \frac{x^z}{z} $$

where $ \psi(x) = \sum_{p^k \le x} \log p $. Viewing this formula in the same $P(x) = A(x) - E(x)$ form as above, this time where

  • $ P(x) = \psi(x) = \sum_{p^k \le x} \log p $ is another kind of count of the primes.
  • $ A(x) = x + \sum_{n = 1}^{\infty} \frac{x^{-2n}}{2n} - \log 2\pi $ is another kind of approximation to $P(x)$. [The sum over n absorbs the trivial zeroes of the zeta function at $-2, -4, -6, \ldots$]
  • $ E(x) = \sum_{z : \zeta(z) = 0} \frac{x^z}{z} $ is another error-correction factor that depends on the non-trivial zeroes of the Riemann zeta function.

we can again interpret it as an error-correcting formula for counting the primes.
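Here’s a small sketch of $ \psi(x) $ (the Chebyshev function), showing the rough $ \psi(x) \approx x $ behavior that the leading term of $A(x)$ predicts; the function names are my own.

```python
import math

def chebyshev_psi(x):
    """psi(x) = sum of log p over prime powers p^k <= x."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
    total = 0.0
    for p in range(2, x + 1):
        if is_prime(p):
            k = 1
            while p**k <= x:
                total += math.log(p)
                k += 1
    return total

# psi(x) hugs the line y = x; the wobble around it is what the
# zeta-zero sum E(x) accounts for.
for x in (10, 100, 1000):
    print(x, round(chebyshev_psi(x), 1))
```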

Now since $ \psi(x) $ is a step function that jumps at the prime powers, its derivative $ \psi'(x) $ has spikes at the prime powers and is zero everywhere else. So consider

$$ \psi'(x) = 1 - \frac{1}{x(x^2 - 1)} - \sum_z x^{z-1} $$

It’s well-known that the zeroes of the Riemann zeta function are symmetric about the real axis, so the (non-trivial) zeroes come in conjugate pairs $ z, \bar{z} $. But $ x^{z-1} + x^{\bar{z} - 1} $ is just a wave whose amplitude depends on the real part of z and whose frequency depends on the imaginary part (i.e., if $ z = a + bi $, then $ x^{z-1} + x^{\bar{z}-1} = 2x^{a-1} \cos (b \log x) $), which means $ \psi'(x) $ can be decomposed into a sum of zeta-zero waves. Note that because of the $2x^{a-1}$ term in front, the amplitude of these waves depends only on the real part $a$ of the conjugate zeroes.
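You can check the wave identity numerically; the value 14.134725 below is the imaginary part of the first non-trivial zero (to six decimal places), with $a = 1/2$ as the Riemann Hypothesis predicts.

```python
import math

# First non-trivial zero: z = 1/2 + 14.134725i (imaginary part to six decimals).
a, b = 0.5, 14.134725
z = complex(a, b)

for x in (2.0, 10.0, 50.0):
    pair_sum = x**(z - 1) + x**(z.conjugate() - 1)     # x^(z-1) + x^(zbar-1)
    wave = 2 * x**(a - 1) * math.cos(b * math.log(x))  # 2 x^(a-1) cos(b log x)
    # The conjugate pair sums to a real-valued wave:
    assert abs(pair_sum - wave) < 1e-9
    print(x, round(wave, 4))
```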

For example, here are plots of $ \psi'(x) $ using 10, 50, and 200 pairs of zeroes:

[Plot: $ \psi'(x) $ with 10 pairs of zeroes]

[Plot: $ \psi'(x) $ with 50 pairs of zeroes]

[Plot: $ \psi'(x) $ with 200 pairs of zeroes]

So when the Riemann Hypothesis says that all the non-trivial zeroes have real part 1/2, it’s hypothesizing that the non-trivial zeta-zero waves have equal amplitude, i.e., they make equal contributions to counting the primes.

In Fourier-poetic terms, when the Flying Spaghetti Monster composed the music of the primes, he built the notes out of the zeroes of the Riemann zeta function. If the Riemann Hypothesis holds, he made all the non-trivial notes equally loud.

Edwin Chen


