## From the drawers of the museum
December 12, 2013

Posted by Noah Snyder in fusion categories, quantum groups, subfactors, Uncategorized.

One of my amateur interests is paleontology. Paleontologists looking for new examples have two options: go out in the field and dig up a new example, or go looking through drawers of museums to find old examples that had been overlooked. In this blog post I want to give an interesting example of the latter kind of research being useful in mathematics. Namely in discussions with Zhengwei Liu, we realized that an old example of Ocneanu’s gives an answer to a question that was thought to be open.

One of the central problems in fusion categories is to determine to what extent fusion categories can be classified in terms of finite groups and quantum groups (perhaps combined in strange ways) or whether there are exceptional fusion categories which cannot be so classified. My money is on the latter, and in particular I think extended Haagerup gives an exotic fusion category. However, there are a number of examples which seem to involve finite groups, but where we don’t know how to classify them in terms of group theoretic data. For example, the Haagerup fusion category has a 3-fold symmetry and may be built from $\mathbb{Z}/3\mathbb{Z}$ or $S_3$ (as suggested by Evans-Gannon). The simplest examples of this kind of “close to group” category are called “near-group categories”: they have only one non-invertible object $X$ and the fusion rules

$X \otimes X \cong X^{\oplus n} \oplus \bigoplus_{g \in G} g$

where $g$ runs over the group $G$ of invertible objects. A result of Evans-Gannon (independently proved by Izumi in slightly more generality) says that outside of a reasonably well understood case (where $n = \#G - 1$ and the category is described by group theoretic data), we have that $n$ must be a multiple of $\#G$. There are the Tambara-Yamagami categories where $n = 0$, and many examples (E6, examples of Izumi, many examples of Evans-Gannon) where $n = \#G$.

Here’s the question: Are there examples where $n$ is larger than $\# G$?

It turns out the answer is yes! In fact the answer is given by the $0$-graded part of the quantum subgroup $E_9$ of quantum $SU(3)$ from Ocneanu’s tables here. I’ll explain why below.

## The Hoffman-Singleton graph and groupoids
October 16, 2013

Posted by David Speyer in Uncategorized.

The Hoffman-Singleton graph is the unique graph on $50$ vertices with the following property: Every vertex is of degree $7$ and, between any two vertices, there is either an edge or a path of length two, but not both. The Hoffman-Singleton graph has a large symmetry group — order $252,000$ — and there are many ways to describe it that emphasize different symmetry properties. Various constructions describe it in terms of the geometry of the affine plane $\mathbb{F}_5^2$, the projective space $\mathbb{P}^3(\mathbb{F}_2)$ or just pure combinatorics. Here is one more that I noticed the other day when reading through the original Hoffman-Singleton paper. While turning it into a blogpost, I noticed that the same observation was made by Markus Junker in 2005.
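The defining property is easy to test by machine. Here is a short sketch (my own code, using Robertson's pentagon-pentagram construction rather than the description in the Hoffman-Singleton paper; the `'P'`/`'Q'` labels and tuple encoding are my choices) that builds the graph and verifies the property stated above:

```python
from itertools import combinations

# Robertson's construction: five pentagons P_h (vertex j adjacent to
# j +- 1 mod 5) and five pentagrams Q_i (vertex j adjacent to j +- 2
# mod 5), with vertex j of P_h joined to vertex h*i + j (mod 5) of Q_i.
adj = {(c, a, j): set() for c in 'PQ' for a in range(5) for j in range(5)}
for a in range(5):
    for j in range(5):
        for u, v in [(('P', a, j), ('P', a, (j + 1) % 5)),   # pentagon edge
                     (('Q', a, j), ('Q', a, (j + 2) % 5))]:  # pentagram edge
            adj[u].add(v); adj[v].add(u)
for h in range(5):
    for i in range(5):
        for j in range(5):
            u, v = ('P', h, j), ('Q', i, (h * i + j) % 5)    # cross edge
            adj[u].add(v); adj[v].add(u)

# 50 vertices, 7-regular, and between any two vertices either an edge
# or a common neighbour (path of length two), but never both.
assert len(adj) == 50 and all(len(nbrs) == 7 for nbrs in adj.values())
assert all((v in adj[u]) != bool(adj[u] & adj[v])
           for u, v in combinations(adj, 2))
print("Hoffman-Singleton property verified")
```

The last assertion encodes "edge or path of length two, but not both" as an exclusive-or between adjacency and the existence of a common neighbour.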

## The quest for narrow admissible tuples
July 2, 2013

Posted by Scott Morrison in polymath.

(A guest post by Andrew Sutherland.)

With more than 400 comments tacked on to the previous blog post, it’s past time to roll over to a new one. As just punishment for having contributed more than my fair share of those comments, Scott has asked me to write a guest post summarizing the current state of affairs. This task is made easier by Tao’s recent progress report on the polymath project to sharpen Zhang’s result on bounded gaps between primes. If you haven’t already read the progress report I encourage you to do so, but for the benefit of newcomers who would like to understand how our quest for narrow admissible tuples fits in the bounded prime gaps polymath project, here goes.

The Hardy-Littlewood prime tuples conjecture states that every admissible tuple has infinitely many translates that consist entirely of primes. Here a tuple is simply a set of integers, which we view as an increasing sequence $t_1 < t_2 < \ldots < t_k$; we refer to a tuple of size $k$ as a $k$-tuple. A tuple is admissible if it does not contain a complete set of residues modulo any prime $p$. For example, 0,2,4 is not an admissible 3-tuple, but both 0,2,6 and 0,4,6 are. A translate of a tuple is obtained by adding a fixed integer to each element; the sequences 5,7,11 and 11,13,17 are the first two translates of 0,2,6 that consist entirely of primes, and we expect that there are infinitely more. Admissibility is clearly a necessary condition for a tuple to have infinitely many translates made up of primes; the conjecture is that it is also sufficient.
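The admissibility condition is easy to check directly; a minimal sketch (my own code, with names of my choosing) needs only the primes up to the tuple's size, since $k$ integers can occupy at most $k$ residue classes modulo any larger prime:

```python
def is_admissible(tup):
    """True if the tuple misses a residue class modulo every prime."""
    k = len(tup)
    # Only primes p <= k can possibly be fully covered by k integers.
    primes = [p for p in range(2, k + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    return all(len({t % p for t in tup}) < p for p in primes)

print(is_admissible((0, 2, 4)))  # False: 0, 2, 4 cover all residues mod 3
print(is_admissible((0, 2, 6)))  # True
print(is_admissible((0, 4, 6)))  # True
```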

Zhang proved a weakened form of the prime tuples conjecture, namely, that there is a $k_0$ such that every admissible $k$-tuple with $k\ge k_0$ has infinitely many translates that contain at least 2 primes (as opposed to $k$). He made this result explicit by showing that one may take $k_0=3,500,000$, and then noted the existence of an admissible $k_0$-tuple with diameter (difference of largest and smallest elements) less than 70,000,000. Zhang’s $k_0$-tuple consists of the first $k_0$ primes greater than $k_0$, which is clearly admissible. As observed by Trudgian, the diameter of this $k_0$-tuple is actually less than 60,000,000 (it is precisely 59,874,954).

Further improvements to Zhang’s bound came rapidly, first by finding narrower admissible $k_0$-tuples, then by optimizing $k_0$ and the critical parameter $\varpi$ on which it depends (this means making $\varpi$ larger; $k_0$ is proportional to $\varpi^{-3/2}$). Since it began on June 4, the polymath8 project has been working along three main lines of attack: (1) improving bounds on $\varpi$ and a related parameter $\delta$, (2) deriving smaller values of $k_0$ from a given pair $(\varpi, \delta)$, and (3) the search for narrow admissible $k_0$-tuples. You can see the steady progress that has been made on these three interlocking fronts by viewing the list of world records.

A brief perusal of this list makes it clear that, other than some quick initial advances made by tightening obvious slack in Zhang’s bounds, most of the big gains have come from improving the bounds on $\varpi$ (edit: as pointed out by v08ltu below, reducing the dependence of $k_0$ on $\varpi$ from $\varpi^{-2}$ to $\varpi^{-3/2}$ was also a major advance); see Tao’s progress report and related blog posts for a summary of this work. Once new values of $\varpi$ and $\delta$ have been established, it is now relatively straightforward to derive an optimal $k_0$ (at least within 1 or 2; the introduction of Pintz’s method has streamlined this process). There then remains the task of finding admissible $k_0$-tuples that are as narrow as possible; it is this last step that is the subject of this blog post and the two that preceded it. Our goal is to compute $H(k_0)$, the smallest possible diameter of an admissible $k_0$-tuple, or at least to obtain bounds (particularly upper bounds) that are as tight as we can make them.

A general way to construct a narrow admissible $k_0$-tuple is to suppose that we first sieve the integers of one residue class modulo each prime $p\le k_0$ and then choose a set of $k_0$ survivors, preferably ones that are as close together as possible. In fact, it is usually not necessary to sieve a residue class for every prime $p\le k_0$ in order to obtain an admissible $k_0$-tuple; asymptotically, sieving $O(k_0/\log k_0)$ residue classes should suffice. The exact number of residue classes that require sieving depends not only on $k_0$, but also on the interval in which one looks for survivors (it could also depend on the order in which one sieves residue classes, but we will ignore this issue).

All of the initial methods we considered involved sieving residue classes 0 mod $p$, and varied only in where to look for the survivors. Zhang takes the first $k_0$ survivors greater than 1 (after sieving modulo primes up to $k_0$), and Morrison’s early optimizations effectively did the same, but with a lower sieving bound. The Hensley-Richards approach instead selects survivors from an interval centered at the origin, and the asymmetric Hensley-Richards optimization shifts this interval slightly (see our wiki page for precise descriptions of each of these approaches, along with benchmark results for particular $k_0$ values of interest).
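At toy scale the two selections are easy to compare (a sketch of my own; the real values of $k_0$ are in the thousands, so $k_0 = 10$ here is purely illustrative):

```python
def survivors(lo, hi, primes):
    """Integers in [lo, hi) avoiding the class 0 mod p for each p."""
    return [n for n in range(lo, hi) if all(n % p for p in primes)]

k0, sieve_primes = 10, [2, 3, 5, 7]   # sieve primes up to the toy k0
# Zhang-style: the first k0 survivors greater than 1
zhang = survivors(2, 500, sieve_primes)[:k0]
# Hensley-Richards: the k0 survivors closest to the origin
hr = sorted(sorted(survivors(-250, 250, sieve_primes), key=abs)[:k0])
print(zhang[-1] - zhang[0], hr[-1] - hr[0])   # the two diameters
```

At this tiny size the Zhang-style tuple happens to be narrower; the Hensley-Richards gain is an asymptotic phenomenon that only shows up for much larger $k_0$.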

But there are sound practical reasons for not always sieving 0 mod $p$. Assuming we believe the prime tuples conjecture (which we do!), we can certainly find an optimally narrow admissible $k_0$-tuple somewhere among the primes greater than $k_0$, all of which survive sieving 0 modulo primes $p\le k_0$. However, the quantitative form of the prime tuples conjecture tells us roughly how far we might need to search in order to find one. The answer is depressingly large: the expected number of translates of any particular admissible $k_0$-tuple to be found among the primes $p\le x$ is $O(x/\log^{k_0}x)$; thus we may need to search through the primes in an interval of size exponential in $k_0$ in order to have a good chance of finding even one translate of the $k_0$-tuple we seek.

Schinzel suggested that it would be better to sieve 1 mod 2 rather than 0 mod 2, and more generally to sieve 1 mod $p$ for all primes up to some intermediate bound and then switch to sieving 0 mod $p$ for the remaining primes. We find that simply following Schinzel’s initial suggestion works best, and one can see the improvement this yields on the benchmarks page (unlike Schinzel, we don’t restrict ourselves to picking the first $k_0$ survivors to the right of the origin; we may shift the interval to obtain a better bound).

But sieving a fixed set of residue classes is still too restrictive. In order to find narrower admissible tuples we must relax this constraint and instead consider a greedy approach, where we start by picking an interval in which we hope to find $k_0$ survivors (we know that the size of this interval should be just slightly larger than $k_0 \log k_0$), and then run through the primes in order, sieving whichever residue class is least occupied by survivors (we can break ties in any way we like, including at random). Unfortunately a purely greedy approach does not work very well. What works much better is to start with a Schinzel sieve, sieving 1 mod 2 and 0 mod primes up to a bound slightly smaller than $\sqrt{k_0\log k_0}$, and then start making greedy choices. Initially the greedy choice will tend to be the residue class 0 mod $p$, but it will deviate as the primes get larger. For best results the choice of interval is based on the success of the greedy sieving.

This is known as the “greedy-greedy” algorithm, and while it may not have a particularly well chosen name (this is the downside of doing math in a blog comment thread, you tend to throw out the first thing that comes to mind and then get stuck with it), it performs remarkably well. For the values of $k_0$ listed in the benchmarks table, the output of the greedy-greedy algorithm is within 1 percent of the best results known, even for $k_0 = 342$, where the optimal value is known.
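For concreteness, here is a bare-bones sketch of the greedy-greedy algorithm as just described (my own code; the careful interval selection mentioned above is replaced by simply widening the interval until the sieve yields $k_0$ survivors, and ties in the greedy stage are broken by taking the smallest class):

```python
import math

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, math.isqrt(p) + 1))]

def greedy_greedy(k0):
    """Return an admissible k0-tuple via the Schinzel-then-greedy sieve."""
    switch = math.isqrt(int(k0 * math.log(k0)))   # Schinzel/greedy switch-over
    width = int(k0 * math.log(k0))                # start near k0 log k0
    while True:
        start = -width // 2
        surv = set(range(start, start + width))
        for p in primes_up_to(k0):
            if p == 2:
                cls = 1                           # Schinzel: sieve 1 mod 2 ...
            elif p <= switch:
                cls = 0                           # ... then 0 mod small p
            else:                                 # greedy stage: remove the
                counts = [0] * p                  # least-occupied class mod p
                for n in surv:
                    counts[n % p] += 1
                cls = counts.index(min(counts))
            surv = {n for n in surv if n % p != cls}
        if len(surv) >= k0:
            return sorted(surv)[:k0]
        width = int(width * 1.2) + 1              # interval too narrow; retry

t = greedy_greedy(100)
print(len(t), t[-1] - t[0])   # size and diameter of the resulting tuple
```

The result is admissible by construction: one residue class has been removed modulo every prime $p \le k_0$, and a subset of an admissible set is admissible.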

Recently, improvements in the bounds on $\varpi$ brought $k_0$ below 4507, and we entered a regime where good bounds on $H(k)$ are already known, thanks to prior work by Engelsma. His work was motivated by the second Hardy-Littlewood conjecture, which claims that $\pi(x+y)-\pi(x)\le \pi(y)$ for all $x,y\ge 2$, a claim that Hensley and Richards showed is asymptotically incompatible with the prime tuples conjecture (and now generally believed to be false). Engelsma was able to find an admissible 447-tuple with diameter 3158, implying that if the prime tuples conjecture holds then there exists an $x$ (infinitely many in fact) for which $\pi(x+3159)-\pi(x)=447$, which is greater than $\pi(3159) = 446$. In the process of obtaining this result, Engelsma spent several years doing extensive computations, and obtained provably optimal bounds on $H(k)$ for all $k\le 342$, as well as upper bounds on $H(k)$ for $k \le 4507$. The quality of these upper bounds is better in some regions than in others (Engelsma naturally focused on the areas that were most directly related to his research), but they are generally quite good, and for $k$ up to about 700 believed to be the best possible.

We have now merged our results with those of Engelsma and placed them in an online database of narrow admissible $k$-tuples that currently holds records for all $k$ up to 5000. The database also accepts submissions of better admissible tuples, and we invite anyone and everyone to try and improve it. Since it went online a week ago it has processed over 1000 submissions, and currently holds tuples that improve Engelsma’s bounds at 1927 values of $k$, the smallest of which is 785. As I write this $k_0$ stands at $873$ (subject to confirmation), which happens to be one of the places where we have made an improvement, yielding a current prime gap bound of 6,712 (but I expect this may drop again soon).

In addition to supporting the polymath prime gaps project, we expect this database will have other applications, including further investigations of the second Hardy-Littlewood conjecture. As can be seen in this chart, not only have we found many examples of admissible $k$-tuples whose diameter $d$ satisfies $k > \pi (d+1)$, but one can also see the growth rate of the implied lower bounds on $k - \pi(H(k)+1)$.

## More narrow admissible sets
June 5, 2013

Posted by Scott Morrison in polymath.

It looks like it may be time to roll over the search for narrow admissible sets to a new blog post, as we’re approaching 100 comments on the original thread.

In the meantime, an official polymath8 project has started. The wiki page is a good place to get started. Work to understand and improve the bounds in Zhang’s result on prime gaps has split into three main areas.

1) A reading seminar on Zhang’s Theorem 2.
2) A discussion on sieve theory, bridging the gap between Zhang’s Theorem 2 and the parameter k_0 (see also the follow-up post).
3) Efforts to find narrow admissible sets of a given cardinality k_0 — the width of the narrowest set we find gives the current best bound on prime gaps.

We started on 3) in the previous blog post, and now will continue here. I’ll try to summarize the situation.

Just recently there’s been a significant improvement in $k_0$, the desired cardinality of the admissible set, and we’re now looking at $k_0 = 34,429$. Hopefully there’s going to be a whole new round of techniques, made possible by the significantly smaller problem size.

As I write this, the narrowest admissible set of size 34,429 found so far, due to Andrew Sutherland, has width 388,118.

This was found using the “greedy-greedy” algorithm. This starts with some chosen interval of integers, in this case [-185662,202456], and then sieves as follows. First discard 1 mod 2, and then 0 mod p for $p \leq b$, for some parameter b. (I’m not actually sure of the value of this parameter in Andrew’s best set.) After that, for each prime we pick a minimally occupied residue class, and sieve that out. Assuming we picked a sufficiently wide interval to begin with, when we’re done the resulting admissible set will still have at least $k_0$ elements.

More generally, there are several directions worth pursuing:

1. sharpening bounds on $\rho^*(x)$, the maximal cardinality of an admissible set of width at most $x$;
2. finding new constructions of admissible sets of a given size (and also ‘almost-admissible’ sequences);
3. developing algorithms or search techniques to find narrow admissible sets, perhaps starting from a wider or smaller admissible set, or starting from an ‘almost-admissible’ set.

(If these questions carry us in different directions, there’s always more room on the internet!)

For sufficiently small sizes (at most 372), everything is completely understood due to exhaustive computer searches described at http://www.opertech.com/primes/k-tuples.html. At least for now, we need to look at much larger sizes, so obtaining bounds and finding probabilistic methods are probably the right approaches.

I’m writing this on a bus, beginning 30 hours of travel. (To be followed by a short sleep then an intense 3 day conference!) So my apologies if I missed something important!

## I just can’t resist: there are infinitely many pairs of primes at most 59470640 apart
May 30, 2013

Posted by Scott Morrison in Number theory, polymath.

Everyone by now has heard about Zhang’s landmark result showing that there are infinitely many pairs of primes at most 70000000 apart.

His core result is that if a set $H$ of $3.5 \times 10^6$ (corrected, thanks to comment #2) numbers is admissible (see below), then there are infinitely many $n$ so that $n+H$ contains at least two primes. He then easily constructs an admissible set wherein the largest difference is $7 \times 10^7$, obtaining the stated result.

A set $H$ is admissible if there is no prime $p$ such that $H \pmod p$ occupies every residue class. For a given $H$ this is clearly a checkable condition; there’s no need to look at primes larger than $|H|$.

(While Zhang went for a nice round number, Mark Lewko found his argument in fact gives 63374611, if you’re being overly specific about these things, which we are right now. :-)

In a short note on the arXiv yesterday, Tim Trudgian (whose office is not far from mine) pointed out another way to build an admissible set, giving a smaller largest difference, obtaining the result that there are infinitely many pairs of primes at most 59874594 apart. He considers sets of the form $H_m = \{p_{m+1}, \ldots, p_{m+k_0}\}$ (where $k_0$ is Zhang’s constant $3.5 \times 10^6$). These aren’t necessarily admissible, but they are for some values of $m$, and both Zhang and Tim noticed certain values for which this is easy to prove. Zhang used $H_m$ with $m=k_0$, while Tim’s observation is that $m_0=250150=\pi(k_0)$ also works. (Comment #2 below points out this isn’t right, and Zhang also intended $m=\pi(k_0)$, and the slack in his estimate is coming from just looking at the largest element of $H_m$, rather than the largest difference.) Thus the bound in his result is $p_{m_0+k_0}-p_{m_0+1} = 59874594$.

It turns out that checking admissibility for a given $H_m$ isn’t that hard; it takes about an hour to check a single value for $m \approx m_0$ (but if you find a prime witnessing $H_m$ not being admissible, it very often gives you a fast proof that $H_{m+1}$ is not admissible either, so searching is much faster).
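A toy-scale version of this check (my own code; I shrink $k_0$ from $3.5 \times 10^6$ to 50 so the search runs instantly, and scan for the first admissible $H_m$):

```python
import math

def primes_below(bound):
    """All primes below `bound`, by a simple sieve of Eratosthenes."""
    s = bytearray([1]) * bound
    s[0:2] = b'\x00\x00'
    for i in range(2, math.isqrt(bound) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, bound, i)))
    return [i for i, v in enumerate(s) if v]

def is_admissible(tup):
    # Only primes p <= len(tup) can be fully covered by len(tup) integers.
    return all(len({t % p for t in tup}) < p
               for p in primes_below(len(tup) + 1))

k0 = 50                          # toy stand-in for Zhang's 3.5e6
ps = primes_below(10 ** 4)
m = next(m for m in range(100)   # guaranteed to stop by m = pi(k0) = 15,
         if is_admissible(ps[m : m + k0]))   # where all elements exceed k0
H = ps[m : m + k0]
print(m, H[-1] - H[0])           # first admissible H_m and its diameter
```

The search must succeed by $m = \pi(k_0) = 15$: from there on every element of $H_m$ exceeds $k_0$, so no element is $0$ modulo any prime $p \le k_0$ and admissibility is automatic.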

I haven’t looked exhaustively, but one can check that $m_1 = m_0 / 2 = 125075$ gives an admissible $H_m$, and hence there are infinitely many pairs of primes at most $p_{m_1 + k_0} - p_{m_1 + 1} = 59470640$. (Sadly, it’s impossible to get below 59 million with this trick; no $m$ below 27000 works; all witnessed by $p=182887$ or $378071$.)

I just couldn’t resist momentarily “claiming the crown” for the smallest upper bound on gap size. :-) Of course the actual progress, which is surely coming soon from people who actually understand Zhang’s work, is going to be in reducing his $3.5 \times 10^6$. You can read more about prospects for that in the answers to this MathOverflow question.

## University of Melbourne hiring
May 26, 2013

Posted by Scott Morrison in jobs.

Often Australian jobs don’t make it on to MathJobs.org, for various reasons, so I thought I’d help distribute this one — Melbourne is hiring, initially a full professor, and subsequently several tenure track assistant professor positions, all in pure mathematics.

Below the fold, Arun Ram’s message about this. Australia is a nice place to come and work!

## Conference videos
April 30, 2013

Posted by Ben Webster in Uncategorized.

Well, from my perspective at least, the conference was a success. We all made it through in one piece, and no one got trapped on the subway. If any of you are looking for the videos of the talks, they can be downloaded from this page. That’s only a temporary hosting solution, but at least they’re available for the moment.

## More on shameless promotion
April 23, 2013

Posted by Ben Webster in Uncategorized.

As those of you who’ve scrolled down the page know, the conference I mentioned a few months ago (now sadly memorializing the life of Andrei Zelevinsky) is starting tomorrow. Of course, for those of you who don’t live in the Boston area, coming to the conference isn’t an option unless you were already traveling today, but I do have a (somewhat belated) announcement. Assuming that the AV gods are kind and everything goes as planned, it should be possible to watch the talks live (of course, we’ll also make the videos available after the conference, in case you’re busy). The schedule is here; the talks start at 10am tomorrow.

## New open access journal in algebraic geometry
March 3, 2013

Posted by David Speyer in Uncategorized.

I just received an e-mail announcing that Compositio has launched an Open Access journal entitled Algebraic Geometry. Their website is live and promises “Open access implies here that the electronic version of the journal is freely accessible and that there are no article processing charges for authors whatsoever. The printed version of the journal will be available at the end of the calendar year against printing costs.”

The editorial board looks great, including L. Caporaso, J. Ellenberg, D. Maulik and R. Pandharipande. They will definitely get my next algebraic geometry paper.

This is really good news. It’s seemed clear from the debates on journals of the last year that what is needed is for people and institutions of high reputation to commit to running open journals. Compositio, and the editors they have found, are top of the line. From a selfish perspective, what makes me really happy is that I didn’t wind up on the editorial board.

Good work, and good luck, to Algebraic Geometry.

## UK Parliament seeking feedback on Open Access support
January 15, 2013

Posted by David Speyer in Uncategorized.