In my experience, students in algebraic geometry tend to pick up the rough idea of results relating the degree of a map to the sizes of its fibers, but remain hazy on the details, most likely because there are many different ways to make these details precise. I decided to try and write down all the basic results I could think of along these lines.

Let $B$ be some parameter space, such as the space of pairs of two conics. Let $X$ be some space of solutions, such as the space of triples $(C_1, C_2, p)$ where $p$ is a point on $C_1 \cap C_2$. Let $\pi: X \to B$ be a map, such as projection onto the $(C_1, C_2)$ components. We want theorems which will discuss the size of the fibers of $\pi$, in terms of some global degree of the map $\pi$.

We work over some field $k$. For simplicity of presentation, we'll assume that $B$ is affine, meaning that it is a subset of $k^n$ defined by polynomial equations $f_1 = f_2 = \cdots = f_m = 0$.

We'll write $A$ for the ring $k[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.

It would be silly to ask for any such results if $B$ were disconnected. A very basic observation of algebraic geometry is that $B$ is connected if and only if $A$ has no nontrivial idempotents. In fact, we will ask for something stronger: that $A$ is an integral domain. The terminology for this is that $B$ is **irreducible**. From now on, we will make:

**Assumption** $B$ is irreducible. ($A$ is an integral domain.)

If $X$ is also affine, with corresponding ring $R$, then $R$ is an $A$-module. We define the **degree** of $\pi$ in this case to be the dimension of $R \otimes_A \mathrm{Frac}(A)$ as a $\mathrm{Frac}(A)$ vector space. Degree can be defined in much greater generality; we will feel free to refer to it in greater generality without giving the definition. We will denote the degree of $\pi$ by $d$. Roughly, we want theorems which say that the fibers of $\pi$ have size $d$.

Here is our first result.

**Theorem** (Shafarevich, II.6.3, Theorem 4) If $k$ has characteristic zero and is algebraically closed, then $\# \pi^{-1}(b) = d$ for almost all $b$ in $B$. More precisely, there is some polynomial $D$, not identically zero on $B$, so that $D(b) \neq 0$ implies $\# \pi^{-1}(b) = d$.

**Warning** This isn't true if $k$ is not algebraically closed: Consider the map $x \mapsto x^2$ from $\mathbb{A}^1 \to \mathbb{A}^1$ over $\mathbb{R}$.

**Warning** This isn't true in characteristic $p$: Consider $x \mapsto x^p$.

We now want results which let us say something, not just about almost all $b$, but about all $b$.

We will at first focus on counting the size of $\pi^{-1}(b)$ in a naive sense: We think of $\pi^{-1}(b)$ as sitting in $k^n$ (or in $X$) and we literally count points of the fiber. We can't hope for the fibers to always be of full size because even the nicest map, $x \mapsto x^2$, has fiber of size $1$, not $2$, over the point $0$. So, using the naive size, we can only hope for upper bounds.

There are two additional problems. The first one is if we have something like $\{ xy = 0 \}$ projecting onto the $x$ coordinate. In this case, the degree is $1$ but the fiber over $0$ is infinite. When $X$ is affine, with corresponding ring $R$, we can fix this by requiring that $R$ is torsion free as an $A$-module. In general, the right condition is that no irreducible component of $X$ maps to a proper subvariety of $B$.

More subtly, suppose that $B$ is a nodal curve, such as $\{ y^2 = x^2(x+1) \}$, and $X$ is its desingularization. (In this case, $X$ is the line $\mathbb{A}^1$, with $t \mapsto (t^2 - 1, t^3 - t)$ as the map.) Then the degree of the map is $1$, but the fiber over the node $(0,0)$ is $\{ \pm 1 \}$, of size $2$. The hypothesis to rule this out is that $A$ is integrally closed in its fraction field. By definition, this is the same as saying that $B$ is **normal**.
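To see the two points of this fiber concretely, here is a quick SymPy check of the standard parametrization $t \mapsto (t^2 - 1, t^3 - t)$ of the nodal cubic $y^2 = x^2(x+1)$ (these particular equations are chosen for illustration):

```python
import sympy

t = sympy.symbols('t')
x_of_t = t**2 - 1   # x-coordinate of the map from the line
y_of_t = t**3 - t   # y-coordinate

# Sanity check: the image really lies on the nodal cubic y^2 = x^2 (x + 1).
assert sympy.expand(y_of_t**2 - x_of_t**2 * (x_of_t + 1)) == 0

# The fiber over the node (0, 0): common roots of x(t) = 0 and y(t) = 0.
fiber = set(sympy.solve(x_of_t, t)) & set(sympy.solve(y_of_t, t))
print(fiber)   # the two preimages t = -1 and t = 1 of the single node
```

So a degree $1$ map can still have a fiber of naive size $2$ when the target fails to be normal.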

Once we rule out these possibilities, we have

**Theorem** (Shafarevich, II.6.3, Theorem 3) If $B$ is normal, and no irreducible component of $X$ maps to a proper subvariety of $B$, then every fiber of $\pi$ has naive size $\leq d$.

I can't resist mentioning a result which is far harder than these:

**Theorem** (A consequence of Zariski's Main Theorem) Let $B$ be normal and let $\pi: X \to B$ have degree $d$. Assume that no irreducible component of $X$ maps to a proper subvariety of $B$. For any $b$ in $B$, the number of connected components of $\pi^{-1}(b)$ is at most $d$.

We now consider counting size in a less naive way. Again, for simplicity, suppose that $X$ is affine, with corresponding ring $R$. Let $b$ be a point of $B$, so there is a map of rings $A \to k$ by $f \mapsto f(b)$. Consider the ring $R \otimes_A k$, where $A$ acts on $k$ by the above map. The maps from this ring to $k$ are the points in $\pi^{-1}(b)$. Thus, $\dim_k R \otimes_A k$ is an upper bound for the number of points of $X$ above $b$. We will call this dimension the **scheme theoretic size** of the fiber. Once again, it can be defined when $X$ is not affine as well.
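For the squaring map on the line this dimension is easy to compute by hand, and also by machine: the fiber ring over a point $a$ is $k[t]/(t^2 - a)$, which is $2$-dimensional for every $a$, even when the naive count drops (a sketch; the notation is mine):

```python
import sympy

t = sympy.symbols('t')

def fiber_sizes(a):
    f = sympy.Poly(t**2 - a, t)
    scheme_size = f.degree()          # dim_k of k[t]/(t^2 - a)
    naive_size = len(sympy.roots(f))  # number of distinct roots
    return scheme_size, naive_size

print(fiber_sizes(1))   # (2, 2): two reduced points
print(fiber_sizes(0))   # (2, 1): one point counted with multiplicity 2
```

The scheme theoretic count stays constant at the degree, while the naive count is only an upper bound away from the branch point.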

We have the following cautionary example: Let $X = \{ xy = 1 \}$ mapping onto the $x$ coordinate. Then the degree is $1$, but the fiber above $0$ has size $0$, either scheme theoretically or naively. To rule this out, we impose that $X$ is **finite** over $B$. By definition, this means that $X$ is affine, and $R$ is a finitely generated $A$-module.

You might worry about how we could ever prove that $X$ is affine if it is not given to us as a closed subset of $k^n$. Fortunately, we have:

**Theorem** (Hartshorne, Exercise III.11.2) If $\pi$ is projective with finite fibers, then it is a finite map. Here projective means that $X$ is a closed subset of $B \times \mathbb{P}^n$, projecting onto $B$. (This is not the morally right definition of a projective map, but if you are ready for the right definition, then you should be working with “proper” rather than “projective” anyway.)

We then have

**Theorem** (Hartshorne, Exercise II.5.8) If $X$ is finite over $B$, and no irreducible component of $X$ maps to a proper subvariety of $B$, then every fiber of $\pi$ has scheme theoretic size $\leq d$.

**Theorem** Let $\pi$ be a finite map. Then all fibers have scheme theoretic size exactly $d$ if and only if $X$ is **flat** over $B$.

Unfortunately, flat is a rather technical condition. The first thing to understand is that some nice looking maps can fail to be flat:

**Warning** Let $X$ be the union of the two planes $\{ z = w = 0 \}$ and $\{ x = y = 0 \}$ in $\mathbb{A}^4$, meeting at a single point, let $B = \mathbb{A}^2$ and let the map be $(x,y,z,w) \mapsto (x+z, y+w)$. This is a finite map. (We can alternately describe $R$ as $\{ (f,g) \in k[x,y] \times k[x,y] : f(0,0) = g(0,0) \}$.) This map is degree $2$, but the fiber over $(0,0)$ has scheme theoretic size $3$ (and naive size $1$).
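Assuming the example intended here is this classical two-planes one, the jump in fiber length can be watched with a small Gröbner basis computation: eliminating $z = a - x$ and $w = b - y$, the fiber over $(a,b)$ is cut out by $x(a-x), x(b-y), y(a-x), y(b-y)$, and its length is the number of standard monomials:

```python
import sympy

x, y = sympy.symbols('x y')

def fiber_length(a, b, max_deg=8):
    # Fiber of the two-planes example over (a, b), with z, w eliminated.
    eqs = [x*(a - x), x*(b - y), y*(a - x), y*(b - y)]
    G = sympy.groebner(eqs, x, y, order='lex')
    # Leading exponent of each basis element (lex order = tuple order).
    leads = [max(sympy.Poly(g, x, y).monoms()) for g in G.exprs]
    # Count standard monomials x^i y^j: divisible by no leading monomial.
    return sum(
        1
        for i in range(max_deg)
        for j in range(max_deg)
        if not any(i >= li and j >= lj for li, lj in leads)
    )

print(fiber_length(1, 1))   # 2, the degree: the map is flat away from 0
print(fiber_length(0, 0))   # 3: the fiber over the origin is too big
```

The non-constant fiber length is exactly the failure of flatness promised by the theorem above.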

If your eye is well enough trained that this doesn’t look nice to you, try the examples here.

There are two good conditions that imply flatness:

**Theorem** (Hartshorne III.9.7) If $B$ is normal and one dimensional, and no irreducible component of $X$ maps to a proper subvariety of $B$, then $X$ is flat over $B$.

**Theorem** (The miracle flatness theorem) If $X$ is Cohen-Macaulay, $B$ is smooth of the same dimension as $X$, and $\pi$ is finite, then $\pi$ is flat.


We still have room for a number of additional applicants, so we would like to encourage more of you to apply. Please note that the application deadline of March 1 is firm.


Summary: A number of mathematics journals (e.g. Quantum Topology, Forum of Mathematics Sigma and Pi, and probably many others) are not listed on the new official journal list in Australia. Please help identify missing journals, and submit feedback via http://jacci.arc.gov.au/.

Every few years the Australian Research Council updates their “official list of journals”. One might wonder why it’s necessary to have such a list, but nevertheless it is there, and it is important that it is accurate because the research outputs of Australian mathematicians are essentially filtered by this list for various purposes.

There is a new draft list out, and the purpose of this post is to coordinate finding missing journals, and to ensure that interested mathematicians submit feedback before the deadline of March 15. Please note that while in the past this list included dubious rankings of journals, the current list is just meant to track all peer reviewed journals in each subject. Having a journal missing entirely means that some published papers will not be counted in measures of a department’s or university’s research output.

You can access the full list here, just the journals marked as mathematics here, and just the journals marked as pure mathematics here. These are not the “official” lists, for which you have to create an account (follow the instructions at http://www.arc.gov.au/era/current_consult.htm) to view, and even then only an Excel version is available. I hope that by making these mathematics specific lists available in a standard format, more mathematicians will take the time to look over the list.

Please look through the lists. If you see something missing, please comment here so we all know about it. In any case, please submit feedback via http://jacci.arc.gov.au/ (you’ll have to create an account first) recommending inclusion of the journals identified so far. Submitting a missing journal requires identifying an article published in it by an Australian author; feel free to add this information here as well if appropriate. (Thanks to Anthony Henderson for pointing out this detail!)

It is also possible to submit additional “FoR” (field of research) codes for journals on the list, and this may be of interest to people publishing cross-disciplinary research. Feel free to make suggestions along these lines here too: the AustMS has been advised that “multiple responses, rather than a single AustMS one, will carry more weight on this aspect”.


We intend that these will be 2 year positions, with minimal teaching requirements.

There is an informal description of the jobs at http://tqft.net/web/postdoc, including some information about the grants funding these positions. The official ad is online at http://jobs.anu.edu.au/PositionDetail.aspx?p=3736, and you can find it on MathJobs at http://www.mathjobs.org/jobs/jobs/5678.

Please contact us if you have questions, and please encourage good Ph.D. students (especially with interests in subfactors, fusion categories, categorification, or related subjects) to apply!


(The colour coded bars show the fractions of papers available on the arXiv, available on authors’ webpages, and not freely accessible at all; these now appear all over the wiki, but unfortunately don’t update automatically. Over at the wiki you can hover over these bars to get the numerical totals, too.)

Thanks everyone for your contributions so far! If you’ve just arrived, check out the tutorial I made on editing the wiki. Now, it’s time to do a little planning.

Here’s one we can start to answer right away.

What fraction of recent papers are available on the arXiv or on authors’ webpages? For good generalist journals (e.g. Adv. Math. and Annals), almost everything! For subject area journals, there is wide variation (probably mostly depending on traditions in subfields): AGT is almost completely freely accessible, while Discrete Math. is at most half.

I hope we’ll soon be able to say this for many other journals, too.

Here’s the question I really want to have answers for:

Does being freely accessible correlate well with quality? It’s certainly tempting to think so, seeing how accessible Advances and Annals are. I think to really answer this question we’re going to have to classify all the articles in slightly older issues (2010?) and then start looking at the citation counts for articles in the two pools. If we get coverage of more journals, we can also look for the correlation between, say, impact factor and the ratio of freely accessible content.

I don’t want to just list every journal on the wiki; it’s best if editors (and the helpful bots working in the background) can focus attention and enjoy the pleasures of finishing off issues and journals. Suggestions for journals to add next welcome in the comments. I’ve already included the tables of contents for the Journal of Number Theory, and the Journal of Functional Analysis. (It will be nice to be able to make comparisons between JFA and GAFA, I think.)

I’ve been working with some people on automating the entry of data in the wiki (mainly by using arXiv metadata; there are actually way more articles there with journal references and DOIs than I’d expected). Hopefully this will make the wiki editing experience more fun, as a lot of the work will have already been done, and humans just get to handle the hard and interesting cases.


(For the impatient, go visit http://tqft.net/mlp, or for the really impatient http://tqft.net/mlp/wiki/Adv._Math./232_(2013).)

It would be nice to know how much of the mathematical literature is freely accessible. Here by ‘freely accessible’ I mean “there is a URL which, in any browser anywhere in the world, resolves to the contents of the article”. (And my intention throughout is that this article is legitimately hosted, either on the arxiv, on an institutional repository, or on an author’s webpage, but I don’t care how the article is actually licensed.) I think it’s going to be okay to not worry too much about discrepancies between the published version and a freely accessible version — we’re all grown ups and understand that these things happen. Perhaps a short comment field, containing for example “minor differences from the published version” could be provided when necessary.

This post outlines an idea to achieve this, via a human editable database containing the tables of contents of journals, and links, where available, to a freely accessible copy of the articles.

It’s important to realize that the goal is *not* to laboriously create a bad search engine. Google Scholar already does a very good job of identifying freely accessible copies of particular mathematics articles. The goal is to be able to definitively answer questions such as “which journals are primarily, or even entirely, freely accessible?”, to track progress towards making the mathematical literature more accessible, and finally to draw attention to, and focus enthusiasm for, such progress.

I think it’s essential, although this is not obvious, that at first the database is primarily created “by hand”. Certainly there is scope for computer programs to help a lot! (For example, by populating tables of contents, or querying google scholar or other sources to find freely accessible versions.) Nevertheless curation at the per-article level will certainly be necessary, and so whichever route one takes it must be possible for humans to edit the database. I think that starting off with the goal of primarily human contributions achieves two purposes: one, it provides an immediate means to recruit and organize interested participants, and two, hopefully it allows much more flexibility in the design and organization of the collected data — hopefully many eyes will reveal bad decisions early, while they’re easy to fix.

That said, we better remember that eventually computers may be very helpful, and avoid design decisions that make computer interaction with the database difficult.

What should this database look like? I’m imagining a website containing a list of journals (at first perhaps just one), and for each journal a list of issues, and for each issue a table of contents.

The table of contents might be very simple, having as few as four columns: the title, the authors, the link to the publisher’s webpage, and a freely accessible link, if known. All these lists and table of contents entries must be editable by a user — if, for example, no freely accessible link is known, this fact should be displayed along with a prominent link or button which allows a reader to contribute one.

At this point I think it’s time to consider what software might drive this website. One option is to build something specifically tailored to the purpose. Another is to use an essentially off-the-shelf wiki, for example tiddlywiki as Tim Gowers used when analyzing an issue of Discrete Math.

Custom software is of course great, but it takes programming experience and resources. (That said, perhaps not much — I’m confident I could make something usable myself, and I know people who could do it in a more reasonable timespan!) I want to essentially ignore this possibility, and instead use mediawiki (the wiki software driving wikipedia) to build a very simple database that is readable and editable by both humans and computers. If you’re impatient, jump to http://tqft.net/mlp and start editing! I’ve previously used it to develop the Knot Atlas at http://katlas.org/ with Dror Bar-Natan (and subsequently many wiki editors). There we solved a very similar set of problems, achieving human readable and editable pages, with “under the hood” a very simple database maintained directly in the wiki.


One of the central problems in fusion categories is to determine to what extent fusion categories can be classified in terms of finite groups and quantum groups (perhaps combined in strange ways) or whether there are exceptional fusion categories which cannot be so classified. My money is on the latter, and in particular I think extended Haagerup gives an exotic fusion category. However, there are a number of examples which seem to involve finite groups, but where we don’t know how to classify them in terms of group theoretic data. For example, the Haagerup fusion category has a 3-fold symmetry and may be built from $\mathbb{Z}/3$ or $\mathbb{Z}/3 \times \mathbb{Z}/3$ (as suggested by Evans-Gannon). The simplest examples of these kind of “close to group” categories are called “near-group categories”, which have only one non-invertible object and have the fusion rules

$$X \otimes X \cong \left( \bigoplus_{g \in G} g \right) \oplus nX$$

for some group $G$ of invertible objects and some non-negative integer $n$. A result of Evans-Gannon (independently proved by Izumi in slightly more generality) says that outside of a reasonably well understood case (where $n = |G| - 1$ and the category is described by group theoretic data), we have that $n$ must be a multiple of $|G|$. There are the Tambara-Yamagami categories where $n = 0$, and many examples (E6, examples of Izumi, many examples of Evans-Gannon) where $n = |G|$.
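Taking Frobenius-Perron dimensions in the near-group fusion rule forces $d^2 = |G| + nd$ for the dimension $d$ of the non-invertible object, so $d = (n + \sqrt{n^2 + 4|G|})/2$. A quick numerical check of some familiar cases (the pairing of named examples with $(|G|, n)$ values is my reading of the discussion, not something verified against the papers):

```python
from math import sqrt

def fp_dimension(group_order, n):
    # Positive root of d^2 = |G| + n*d.
    return (n + sqrt(n * n + 4 * group_order)) / 2

print(fp_dimension(2, 0))  # Tambara-Yamagami for Z/2: sqrt(2)
print(fp_dimension(2, 2))  # E6-type example with n = |G| = 2: 1 + sqrt(3)
print(fp_dimension(3, 6))  # a Z/3 example with n = 6: 3 + 2*sqrt(3)
```

Note how $n = 6$ with $|G| = 3$ gives $d = 3 + 2\sqrt{3}$, the dimension that will reappear below.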

Here’s the question: Are there examples where $n$ is larger than $|G|$?

It turns out the answer is yes! In fact an example is given by the $0$-graded part of the quantum subgroup E9 of quantum $SU(3)$ from Ocneanu’s tables here. I’ll explain why below.

The category of representations of a group has a restriction functor to the category of representations of any subgroup. This suggests a generalization of the notion of “subgroup” to an arbitrary tensor category. If C is a tensor category, then a “quantum subgroup” (of type I) is a tensor category D with a tensor functor $F: C \to D$ which is dominant (every object in D is a summand of an object in the image of F). In particular, this makes D into a module category. (A simple module category which doesn’t come from a tensor functor is called a subgroup of type II; Ocneanu’s list includes both types.)

Ocneanu’s notation here is as follows. The quantum subgroups of type I are the ones with a starred vertex. The vertices of the graph are the simple objects in D, and the starred vertex is the trivial object. The category of representations of quantum $SU(3)$ is $\mathbb{Z}/3$-graded. Sometimes this grading descends to a grading on D; if it does, then Ocneanu denotes the grading by coloring the vertices white, grey, and black. Note that the zero graded vertices (the white ones) form a tensor subcategory.

The edges of the graph give the fusion rules for tensoring with the fundamental representations of $SU(3)$. That is, the number of edges from A to B is the dimension of the hom space $\mathrm{Hom}(A \otimes V, B)$ where $V$ is a fundamental representation. In fact, there are two fundamental representations, $V$ and $\bar{V}$, and it is possible to distinguish which edge is which by looking at the coloring of vertices, since $V$ adds 1 to the grading and $\bar{V}$ subtracts 1.

(The graph is also decorated with some additional information that won’t be needed here. For example, the vertices are circled if the object is “dyslectic”, and the subcategory of dyslectic objects is braided.)

For E9 there are four objects which are 0-graded. Three of dimension 1 (which we call $1$, $g$, and $g^2$), and one of dimension $3 + 2\sqrt{3}$ which we will call $X$. We can compute that the image of $V \otimes \bar{V}$ is $1 \oplus X$. Thus we can work out the rules for tensoring with $X$ by counting paths of length 2 which go white-black-white, but subtracting 1 from the total count of paths from a vertex to itself. Using this we can see that $X \otimes X \cong 1 \oplus g \oplus g^2 \oplus 6X$, where this 6 appears as $7 - 1$. Of course, $g \otimes X \cong g^2 \otimes X \cong X$.

If you know what a conformal inclusion is (I only sort of do), this example comes from the conformal inclusion of $SU(3)$ at level 9 into the exceptional group $E_6$ at level 1.

Also, it turns out that for the special case when $G$ is $\mathbb{Z}/3$, a result of (then high school student) Hannah Larson shows that $n$ can’t be any larger than 6. This example shows that her result is sharp.

I think as the amount of mathematics grows, it will be increasingly important to find better ways to arrange old information in ways that make searching easier. In a perfect world, there should be a searchable database of fusion categories where one could just ask for all known examples of rank 4 fusion categories with 3 invertible objects and have this example returned. (In this case, the example would definitely be in the database because the paper itself is well-known, it just has a huge list of examples.)

(Finally, I’d like to note that this example will eventually be mentioned in the formal literature in a paper of Zhengwei’s.)


The first few paragraphs here are from the Hoffman-Singleton paper, although with notation I like better. The Hoffman-Singleton paper is not easily available online, but Zaman’s exposition is very similar. Fix a vertex $*$ to start things off. Let $u_i$, for $i = 1$, $2$, …, $7$, be its neighbors. Let the other neighbors of $u_i$ be $v_{ij}$ for $j = 1$, …, $6$. Write $V_i$ for the set $\{ v_{i1}, \ldots, v_{i6} \}$. Between $*$, the $u$’s and the $v$’s, we have described all $1 + 7 + 42 = 50$ vertices of the graph, and we have described all of the edges which go into $*$ or into a $u_i$; the remaining edges connect one $v$ to another.

There can be no edge between $v_{ij}$ and $v_{ik}$, as there is already a length $2$ path $v_{ij} u_i v_{ik}$. So all the remaining edges go between $V_i$ and $V_j$ for $i \neq j$. Fixing distinct values $i$ and $j$, and a value $k$, there must be one length $2$ path from $v_{ik}$ to $u_j$, so there must be exactly one element of $V_j$ which is joined to $v_{ik}$. Similarly, for each element of $V_j$, with $i$ fixed, there is exactly one element of $V_i$ bordering it. So the edges of the graph form a bijection between $V_i$ and $V_j$, for each $i \neq j$. Call this bijection $\sigma_{ij}$. In short, we have $\binom{7}{2} = 21$ bijections to describe.

One can check that the Hoffman-Singleton property is equivalent to the claim that, for all distinct $i$, $j$, and $k$, the compositions $\sigma_{ik}^{-1} \circ \sigma_{jk} \circ \sigma_{ij}$ (and the analogous compositions around any cycle of indices) have no fixed points.

At this point, Hoffman and Singleton get into a pretty detailed case analysis to prove that there is a unique choice for the $\sigma_{ij}$ (up to relabeling the $u_i$ and the elements of each $V_i$) and they list that solution. I’ll give a much simpler description below, but the real point I want to make is that this really looks like a question about finding groupoids with certain properties. Unfortunately, this is an evil question to ask about groupoids. Still, I wonder, are there any groupoid theorems that would make it obvious that there is a unique solution?

I originally started writing this post to point out that the solution has a unique description. As mentioned above, Junker found this before me and says that others found it before him:

**Construction** Let $\phi$ be an outer automorphism of $S_6$. Then, identifying each $V_i$ with a fixed six element set, the bijections $\sigma_{ij}$ can all be written down explicitly in terms of $\phi$.

It is quite easy to check that the compositions appearing in the Hoffman-Singleton property above are permutations whose cycle structures contain no $1$-cycles. In particular, they have no fixed points.
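As a machine check that a fixed-point-free solution really does assemble into a Moore graph, here is Robertson's pentagons-and-pentagrams model of the Hoffman-Singleton graph (a standard construction, not necessarily the same labeling as the bijections above), verified to be strongly regular with parameters $(50, 7, 0, 1)$:

```python
from itertools import combinations

# Vertices: pentagons P_h and pentagrams Q_i, h, i in Z/5, each on Z/5.
vertices = [('P', h, j) for h in range(5) for j in range(5)] + \
           [('Q', i, j) for i in range(5) for j in range(5)]

def adjacent(a, b):
    (ta, xa, ja), (tb, xb, jb) = a, b
    if ta == tb:                        # inside a pentagon / pentagram
        step = (1, 4) if ta == 'P' else (2, 3)
        return xa == xb and (ja - jb) % 5 in step
    if ta == 'Q':                       # normalize so a is the pentagon vertex
        (ta, xa, ja), (tb, xb, jb) = b, a
    return (xa * xb + ja) % 5 == jb     # vertex j of P_h ~ vertex h*i + j of Q_i

adj = {v: {w for w in vertices if w != v and adjacent(v, w)} for v in vertices}

assert len(vertices) == 50
assert all(len(adj[v]) == 7 for v in vertices)
for v, w in combinations(vertices, 2):
    # Adjacent pairs share 0 neighbors (no triangles or squares),
    # non-adjacent pairs share exactly 1 (diameter 2).
    assert len(adj[v] & adj[w]) == (0 if w in adj[v] else 1)
print("Hoffman-Singleton parameters verified: srg(50, 7, 0, 1)")
```

The "0 common neighbors for adjacent, 1 for non-adjacent" checks are exactly the no-fixed-points conditions in disguise.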


With more than 400 comments tacked on to the previous blog post, it’s past time to rollover to a new one. As just punishment for having contributed more than my fair share of those comments, Scott has asked me to write a guest post summarizing the current state of affairs. This task is made easier by Tao’s recent progress report on the polymath project to sharpen Zhang’s result on bounded gaps between primes. If you haven’t already read the progress report I encourage you to do so, but for the benefit of newcomers who would like to understand how our quest for narrow admissible tuples fits in the bounded prime gaps polymath project, here goes.

The Hardy-Littlewood prime tuples conjecture states that every admissible tuple has infinitely many translates that consist entirely of primes. Here a tuple is simply a set of integers, which we view as an increasing sequence $h_1 < h_2 < \cdots < h_k$; we refer to a tuple of size $k$ as a $k$-tuple. A tuple is *admissible* if it does not contain a complete set of residues modulo any prime $p$. For example, 0,2,4 is not an admissible 3-tuple, but both 0,2,6 and 0,4,6 are. A *translate* of a tuple is obtained by adding a fixed integer to each element; the sequences 5,7,11 and 11,13,17 are the first two translates of 0,2,6 that consist entirely of primes, and we expect that there are infinitely more. Admissibility is clearly a necessary condition for a tuple to have infinitely many translates made up of primes; the conjecture is that it is also sufficient.
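Admissibility is easy to test by machine: a $k$-tuple can only cover all residues modulo a prime $p \leq k$, so it suffices to check those primes. A quick sketch in Python (the function names are mine, not from the polymath8 code):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(tup):
    k = len(tup)
    for p in primes_up_to(k):
        if len({h % p for h in tup}) == p:   # hits every class mod p
            return False
    return True

print(is_admissible([0, 2, 4]))  # False: covers 0, 1, 2 mod 3
print(is_admissible([0, 2, 6]))  # True
print(is_admissible([0, 4, 6]))  # True
```

This runs the three examples from the paragraph above.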

Zhang proved a weakened form of the prime tuples conjecture, namely, that for all sufficiently large $k_0$, every admissible $k_0$-tuple has infinitely many translates that contain at least 2 primes (as opposed to all $k_0$ of them). He made this result explicit by showing that one may take $k_0 = 3{,}500{,}000$, and then noted the existence of an admissible $k_0$-tuple with diameter (difference of largest and smallest elements) less than 70,000,000. Zhang’s $k_0$-tuple consists of the first $k_0$ primes greater than 3,500,000, which is clearly admissible. As observed by Trudgian, the diameter of this $k_0$-tuple is actually less than 60,000,000 (it is precisely 59,874,954).

Further improvements to Zhang’s bound came rapidly, first by finding narrower admissible $k_0$-tuples, then by optimizing $k_0$ and the critical parameter $\varpi$ on which it depends (this means making $\varpi$ larger; $k_0$ is roughly proportional to $\varpi^{-3/2}$). Since it began on June 4, the polymath8 project has been working along three main lines of attack: (1) improving bounds on $\varpi$ and a related parameter $\delta$, (2) deriving smaller values of $k_0$ from a given pair $(\varpi, \delta)$, and (3) the search for narrow admissible $k_0$-tuples. You can see the steady progress that has been made on these three interlocking fronts by viewing the list of world records.

A brief perusal of this list makes it clear that, other than some quick initial advances made by tightening obvious slack in Zhang’s bounds, most of the big gains have come from improving the bounds on $\varpi$ (edit: as pointed out by v08ltu below, reducing the dependence of $k_0$ on $\varpi$ from $\varpi^{-2}$ to $\varpi^{-3/2}$ was also a major advance); see Tao’s progress report and related blog posts for a summary of this work. Once new values of $\varpi$ and $\delta$ have been established, it is now relatively straightforward to derive an optimal $k_0$ (at least within 1 or 2; the introduction of Pintz’s method has streamlined this process). There then remains the task of finding admissible $k_0$-tuples that are as narrow as possible; it is this last step that is the subject of this blog post and the two that preceded it. Our goal is to compute $H(k)$, the smallest possible diameter of an admissible $k$-tuple, or at least to obtain bounds (particularly upper bounds) that are as tight as we can make them.

A general way to construct a narrow admissible $k$-tuple is to suppose that we first sieve the integers of one residue class modulo each prime up to some bound and then choose a set of $k$ survivors, preferably ones that are as close together as possible. In fact, it is usually not necessary to sieve a residue class for every prime $p \leq k$ in order to obtain an admissible $k$-tuple; asymptotically an $o(k)$ bound on the sieved primes should suffice. The exact number of residue classes that require sieving depends not only on $k$, but also on the interval in which one looks for survivors (it could also depend on the order in which one sieves residue classes, but we will ignore this issue).

All of the initial methods we considered involved sieving residue classes 0 mod $p$, and varied only in where to look for the survivors. Zhang takes the first $k_0$ survivors greater than 1 (after sieving modulo primes up to $k_0$), and Morrison’s early optimizations effectively did the same, but with a lower sieving bound. The Hensley-Richards approach instead selects $k_0$ survivors from an interval centered at the origin, and the asymmetric Hensley-Richards optimization shifts this interval slightly (see our wiki page for precise descriptions of each of these approaches, along with benchmark results for particular values of interest).
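The “0 mod $p$” recipe is a few lines of code. The sketch below builds the first-$k$-survivors tuple and shows that lowering the sieving bound below $k$ can still give an admissible, slightly narrower tuple (the bound 47 and the search limit are just illustrative choices of mine):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(tup):
    return all(len({h % p for h in tup}) < p for p in primes_up_to(len(tup)))

def first_survivors(k, sieve_bound, limit=10**5):
    # First k integers > 1 avoiding the class 0 mod p for all p <= sieve_bound.
    ps = primes_up_to(sieve_bound)
    survivors = (n for n in range(2, limit) if all(n % p for p in ps))
    return [next(survivors) for _ in range(k)]

for bound in (100, 47):
    tup = first_survivors(100, bound)
    print(bound, tup[-1] - tup[0], is_admissible(tup))
```

With the full bound 100 the survivors are simply the primes from 101 to 691 (diameter 590); the smaller bound keeps more candidates and happens to give a narrower tuple here, but admissibility then has to be checked rather than being automatic.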

But there are sound practical reasons for *not* always sieving 0 mod $p$. Assuming we believe the prime tuples conjecture (which we do!), we can certainly find an optimally narrow admissible $k$-tuple somewhere among the primes greater than $k$, all of which survive sieving 0 modulo primes $p \leq k$. However, the quantitative form of the prime tuples conjecture tells us roughly how far we might need to search in order to find one. The answer is depressingly large: the expected number of translates of any particular admissible $k$-tuple to be found among the primes up to $x$ grows like a constant times $x/\log^k x$; thus we may need to search through the primes in an interval of size exponential in $k$ in order to have a good chance of finding even one translate of the $k$-tuple we seek.

Schinzel suggested that it would be better to sieve 1 mod 2 rather than 0 mod 2, and more generally to sieve 1 mod $p$ for all primes up to some intermediate bound and then switch to sieving 0 mod $p$ for the remaining primes. We find that simply following Schinzel’s initial suggestion works best, and one can see the improvement this yields on the benchmarks page (unlike Schinzel, we don’t restrict ourselves to picking the first $k$ survivors to the right of the origin; we may shift the interval to obtain a better bound).

But sieving a fixed set of residue classes is still too restrictive. In order to find narrower admissible tuples we must relax this constraint and instead consider a greedy approach, where we start by picking an interval in which we hope to find $k$ survivors (we know that the size of this interval should be just slightly larger than $H(k)$), and then run through the primes $p$ in order, sieving whichever residue class mod $p$ is least occupied by survivors (we can break ties in any way we like, including at random). Unfortunately a purely greedy approach does not work very well. What works much better is to start with a Schinzel sieve, sieving 1 mod 2 and 0 mod $p$ for primes up to some intermediate bound, and then start making greedy choices. Initially the greedy choice will tend to be the residue class 0 mod $p$, but it will deviate as the primes get larger. For best results the choice of interval is based on the success of the greedy sieving.
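Here is a minimal sketch of that hybrid: a Schinzel-style fixed sieve for small primes, greedy least-occupied-class choices for the rest, then the narrowest window of $k$ survivors. All tuning parameters (interval length, crossover bound) are simplistic stand-ins for the carefully optimized ones used by the project:

```python
from collections import Counter

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def greedy_greedy(k, interval, crossover=11):
    survivors = set(range(interval))
    for p in primes_up_to(k):
        if p == 2:
            cls = 1                      # Schinzel: sieve 1 mod 2
        elif p <= crossover:
            cls = 0                      # then 0 mod p for small primes
        else:                            # then the least occupied class
            counts = Counter(n % p for n in survivors)
            cls = min(range(p), key=lambda c: counts[c])
        survivors = {n for n in survivors if n % p != cls}
    return sorted(survivors)

k = 100
surv = greedy_greedy(k, interval=1300)
# One residue class was sieved for every prime p <= k, so any k of the
# survivors form an admissible k-tuple; take the narrowest window.
diam, tup = min((surv[i + k - 1] - surv[i], surv[i:i + k])
                for i in range(len(surv) - k + 1))
print(diam)   # a crude upper bound on H(100) from this sketch
```

Because the least-occupied class never removes more than a $1/p$ fraction of the survivors, enough survivors are guaranteed to remain for the final window selection.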

This is known as the “greedy-greedy” algorithm, and while it may not have a particularly well chosen name (this is the downside of doing math in a blog comment thread, you tend to throw out the first thing that comes to mind and then get stuck with it), it performs remarkably well. For the values of $k$ listed in the benchmarks table, the output of the greedy-greedy algorithm is within 1 percent of the best results known, even for $k \leq 342$, where the optimal value is known.

But what about that last 1 percent? Here we switch to local optimizations, taking a good admissible tuple (e.g. one output by the greedy-greedy algorithm) and trying to make it better. There are several methods for doing this, some involve swapping a small set of sieved residue classes for a different set, others shift the tuple by adding elements at one end and deleting them from the other. Another approach is to randomly perturb the tuple by adding additional elements that make it inadmissible and then re-sieving to obtain a new admissible tuple. This can be done in a structured way by using a randomized version of the greedy-greedy algorithm to obtain a similar but slightly different admissible tuple in approximately the same interval, merging it with the reference tuple, and then re-sieving to obtain a new admissible tuple. These operations can all be iterated and interleaved, ideally producing a narrower admissible tuple. But even when this does not happen one often obtains a different admissible tuple with the same diameter, providing another reference tuple against which further local optimizations may be applied.

Recently, improvements in the bounds on $\varpi$ brought $k_0$ below 4507, and we entered a regime where good bounds on $H(k)$ are already known, thanks to prior work by Engelsma. His work was motivated by the second Hardy-Littlewood conjecture, which claims that $\pi(x+y) \leq \pi(x) + \pi(y)$ for all $x, y \geq 2$, a claim that Hensley and Richards showed is asymptotically incompatible with the prime tuples conjecture (and now generally believed to be false). Engelsma was able to find an admissible 447-tuple with diameter 3158, implying that if the prime tuples conjecture holds then there exists an $x$ (infinitely many in fact) for which $\pi(x + 3159) - \pi(x) \geq 447$, which is greater than $\pi(3159) = 446$. In the process of obtaining this result, Engelsma spent several years doing extensive computations, and obtained provably optimal bounds on $H(k)$ for all $k \leq 342$, as well as upper bounds on $H(k)$ for $k \leq 4507$. The quality of these upper bounds is better in some regions than in others (Engelsma naturally focused on the areas that were most directly related to his research), but they are generally quite good, and for $k$ up to about 700 believed to be the best possible.
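The arithmetic behind that 447-tuple claim is easy to verify by machine: an interval of 3159 consecutive integers can contain at most $\pi(3159)$ primes if the second Hardy-Littlewood conjecture holds, so a prime translate of a 447-tuple of diameter 3158 would violate it (a quick check):

```python
def prime_count(n):
    # pi(n) via a simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

# 447 primes packed into 3159 consecutive integers would beat pi(3159):
print(prime_count(3159))          # 446
assert 447 > prime_count(3159)
```

So the two conjectures really do collide at this tuple, by a margin of exactly one prime.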

We have now merged our results with those of Engelsma and placed them in an online database of narrow admissible $k$-tuples that currently holds records for all $k$ up to 5000. The database also accepts submissions of better admissible tuples, and we invite anyone and everyone to try and improve it. Since it went online a week ago it has processed over 1000 submissions, and currently holds tuples that improve Engelsma’s bounds at 1927 values of $k$, the smallest of which is $k = 785$. As I write this, $k_0$ stands at 866 (subject to confirmation), which happens to be one of the places where we have made an improvement, yielding a current prime gap bound of 6,712 (but I expect this may drop again soon).

In addition to supporting the polymath prime gaps project, we expect this database will have other applications, including further investigations of the second Hardy-Littlewood conjecture. Not only have we found many examples of admissible $k$-tuples whose diameter $d$ satisfies $k > \pi(d+1)$, one can view the growth rate of the implied lower bounds on $\pi(x+y) - \pi(x) - \pi(y)$ in this chart.
