This is a follow-up to my post What if primes hated to start with nine? and you might want to read that post for context. This post describes my failure to create a nice illustration of the effect of the first zero of the Riemann zeta function on the distribution of the primes. Of course, the answer might be really dumb — I might just have a coding error, or the range of primes I was using (around 10^9) may not be large enough. But I suspect there is something interesting here, and I’d like to know what. The basic question of this post is “Why does the left-hand graph look as random as the right-hand one?”

**UPDATE** I have a prettier picture!

Let π(a, b) be the number of primes between a and b. Let Err(a, b) = π(a, b) − ∫_a^b dt/log t, the error in the PNT estimate. Let ErrNorm(a, b) = Err(a, b)/√π(a, b). The idea of ErrNorm is to give a reasonably normalized version of the error, which will be comparable for intervals in very different domains. The red data series below is the values of ErrNorm(j · 10^4, (j+1) · 10^4) for j = 1, 2, …, 9 — in other words, checking whether five-digit primes prefer certain starting digits over others. The green and blue series are the same thing for seven-digit and nine-digit primes. Notice that, even though we are comparing intervals of very different sizes, ErrNorm is staying nicely contained in a band of size roughly 1. So I think I found a reasonable normalization.
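For readers who would rather poke at this in Python than Mathematica, here is a self-contained sketch of these quantities (my own translation, using a naive sieve and Simpson's rule in place of PrimePi and NIntegrate; all function names are mine):

```python
import math

def prime_count(a, b):
    """Number of primes p with a < p <= b, via a naive sieve up to b."""
    n = int(b)
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve[int(a) + 1:])

def li_ab(a, b, n=2000):
    """Simpson's rule for the PNT estimate, the integral of dt/log(t) over [a, b]."""
    h = (b - a) / n
    total = 1 / math.log(a) + 1 / math.log(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) / math.log(a + i * h)
    return total * h / 3

def err_norm(a, b):
    """Normalized PNT error: (count - estimate) / sqrt(count)."""
    p = prime_count(a, b)
    return (p - li_ab(a, b)) / math.sqrt(p)
```

For example, there are 21 primes in (10, 100], the integral over that interval is about 23.96, and so err_norm(10, 100) comes out around −0.65, consistent with the normalized error being of order 1.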

Note also that there is no sign that primes hate to start with nine.

Let ρ = 14.134725…, the imaginary part of the first zero of the Riemann zeta function. As I understand it, the fact that ζ(1/2 + iρ) = 0 should mean that the fractional part of ρ log(p)/2π, as p runs over the primes, should NOT be evenly distributed. In other words, the number of primes in an interval of the form [e^{2πk/ρ}, e^{2π(k+t)/ρ}] should behave differently, as k varies, from what a random model for the primes would predict.
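Concretely, the claim is about binning primes by this fractional part. A quick sketch (the function names and the cutoff 10^5 are my own choices, not from the computation below):

```python
import math

RHO = 14.134725  # imaginary part of the first nontrivial zeta zero

def primes_up_to(n):
    """All primes <= n via a simple sieve."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def digit_bins(limit, freq, nbins=10):
    """Bin primes p <= limit by the fractional part of freq*log(p)/(2*pi)."""
    bins = [0] * nbins
    for p in primes_up_to(limit):
        frac = (freq * math.log(p) / (2 * math.pi)) % 1.0
        bins[min(nbins - 1, int(frac * nbins))] += 1
    return bins

bins = digit_bins(10 ** 5, RHO)
```

If the zero had a visible effect at this scale, the ten counts in `bins` would be systematically uneven; the question of the post is why they look just as even as for a frequency far from any zero.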

I decided to test this by computing ErrNorm(10^9 · e^{2π(k + j/10)/ρ}, 10^9 · e^{2π(k + (j+1)/10)/ρ}) as j ranged from 0 to 9, for several smallish values of k. We have 2π/ρ ≈ 0.4445, so the endpoints of each interval differ by a factor of about e^{0.0445} ≈ 1.045; I figured that should be wide enough bins that the zeta zeroes would dominate any short-term effects. Here are the results for k = 0, 1, …, 5; the colors are assigned in rainbow order starting with red.

In case you are trying to convince yourself that there really is a valid pattern in this data, here is the analogous computation with ρ replaced by a value which is nowhere near a zero. There are two interesting things to note: First, the data is different, so I am working in a range large enough to see the difference between the two frequencies. But, second, the data looks equally random.

What am I missing?

## Starting to see a signal

As I discuss further in a comment below, I realized that part of the problem is that the contributions of the different zeta zeroes are too close to each other to separate out in this way. If you look at the waveform of a chime, you’ll see that it looks vaguely sinusoidal, because the dominant overtone is so much stronger than the others. If you try the same thing for a big clattery noise, you won’t see anything, even if the spectrum is discrete so that, in theory, there is a dominant overtone. The primes are more like the latter case.

This analogy suggested the numerical experiment I should do — to take an average over many frequency cycles. My picture shows (in blue) the running total of the ErrNorm values as k runs through many cycles — a range which means that I am looking at primes starting around 10^9. In other words, I am keeping track of whether the “first digit” of p, in base e^{2π/ρ}, prefers one value over another. The red points come from running through cycles with ρ replaced by the off-zero frequency. Since that frequency is larger, I go through fewer primes when tracking this higher-frequency signal — this chart covers a correspondingly smaller range of primes.

Recall that ErrNorm tends to have magnitude about 1. Notice that the blue signal grows to a size comparable to the number of terms summed, like a sum of correlated steps, while the red signal is only of size about the square root of the number of terms, like a random walk of the same length.
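The scaling claim here — a coherent bias accumulates linearly in the number of steps, while incoherent noise accumulates like its square root — is easy to simulate in isolation (pure illustration, nothing prime-specific; all names and constants are mine):

```python
import math
import random

random.seed(0)
N = 10_000

# Steps with a small common bias: the sum drifts linearly, about 0.1 * N.
drift = sum(0.1 + random.gauss(0, 1) for _ in range(N))

# Mean-zero steps: the sum is a random walk, typically of size about sqrt(N).
walk = sum(random.gauss(0, 1) for _ in range(N))

print(drift)      # roughly 0.1 * N = 1000
print(abs(walk))  # typically on the order of sqrt(N) = 100
```

The blue signal behaves like `drift` and the red signal like `walk`: both are sums of order-1 terms, but only the coherent one separates cleanly from the noise floor.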

This picture is perhaps a little too nice. I decided to redo the computation changing nothing but the range of primes being used; the following picture uses a different range.

(Note that this will shift which point on the x-axis is called zero. In the later picture, points at the ends of the interval contribute positively to the running total, points in the middle contribute negatively.) I again see that the blue signal is larger than the red signal, but I can no longer pretend that the blue signal looks particularly smooth; indeed, the red signal looks smoother. Still, there is definitely something there, even if I can’t get enough primes to see it very clearly.

In case this is a stupid coding error, and someone feels like reading my Mathematica, the image was produced with:

```mathematica
pi[a_, b_] := PrimePi[b] - PrimePi[a]
piErr[a_, b_] := pi[a, b] - NIntegrate[1/Log[t], {t, a, b}]
ErrNorm[a_, b_] := piErr[a, b]/Sqrt[pi[a, b]]
rho = 14.134725
bar[k_] :=
 Table[ErrNorm[10^9*E^(2 Pi (k + j 0.1)/rho),
   10^9*E^(2 Pi (k + (j + 1) 0.1)/rho)], {j, 0, 9}]
ListPlot[Table[bar[k], {k, 0, 5}],
 PlotStyle -> {{Red, PointSize[0.015]}, Orange, Yellow, Green, Blue, Purple}]
```

The command is called bar because an earlier version of it was called foo.

The von Mangoldt explicit formula connecting zeroes and primes reads as follows:

ψ(x) = x − Σ_ρ x^ρ/ρ − log(2π) − ½ log(1 − x^{−2}),

where ψ(x) = Σ_{p^k ≤ x} log p and the last two terms are negligible errors in practice. Thus, the effect of a zero ρ with real part 1/2 on the number of primes up to x is basically of size x^{1/2}/|ρ|, which is about the same size as that of random noise (in fact, it is even a little bit smaller). So the effects of these zeroes are quite hard to see. Their effect can be felt eventually, but it takes an enormous amount of time before it becomes visible: see for instance the article on Skewes’ number.
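That size estimate is easy to check numerically (a sketch of mine; 14.134725 is the first zero's imaginary part, and the zero is written as 1/2 + i·14.1347… per the Riemann hypothesis):

```python
import math

x = 1.0e9
zero = complex(0.5, 14.134725)  # first nontrivial zero, 1/2 + i*14.1347...

# Its term in the explicit formula is x^zero / zero; since Re(zero) = 1/2,
# |x^zero| = sqrt(x), so the term has magnitude sqrt(x)/|zero|.
term = x ** zero / zero

print(abs(term))     # about 2236
print(math.sqrt(x))  # about 31623, the scale of the random fluctuation
```

So at x = 10^9 the first zero contributes a wave of amplitude around 2 × 10^3, buried under fluctuations an order of magnitude larger.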

To get a pretty picture in a reasonable amount of time, you would need a zero with real part larger than 1/2, but unfortunately (or fortunately) I don’t have one of these to offer you…

Thanks! That’s both informative and sad. So, on the one hand, the normalized PNT error has a nice Fourier series with discrete frequencies; but, on the other hand, the contribution of any individual term in that series is overwhelmed by the remainder. It doesn’t seem to be a problem for writing papers, but it makes producing nice illustrations for blog posts difficult.

So, another question. Most of us have seen pictures like slide 24 of this talk, where we take sums over the first bunch of Riemann zeroes (the first 100 zeroes in this case) and see a good approximation to the prime counting function emerging. I had assumed that there was some sort of symmetry in truncating Fourier transforms — if the first hundred zeroes give a good approximation on the prime side, then the first billion primes should give a good approximation on the zero side. Is there anything more to say here than “nope, you were wrong”?

Oh, I think I see. Imagine a big jangly noise, like a one-year-old pulling all the pans out of the cupboard and dumping them on the floor. One could take a Fourier transform of that sound and get some signal.

Now, one could time-truncate the sound, like only listening for a second, and take the resulting Fourier transform, and the result would be pretty close. That’s like computing the spectrum using only the first billion primes.

And one could frequency-truncate the signal and listen to the resulting sound, like listening to the clatter over a cheap cell phone. The result will be a pretty good approximation to the sound.

But what one can’t do is look at a printout of the amplitude of the soundwave and hope to see the dominant overtone, which is what I was trying to do, because the neighboring overtones are too similar in frequency and amplitude. One can only do that when the sound is closer to a pure tone — like ringing a bell — and that’s not what the primes sound like.

Reasonable analogy?