
The many principles of conservation of number March 4, 2014

Posted by David Speyer in Uncategorized.
1 comment so far

In algebraic geometry, we like to make statements like: “two conics meet at 4 points”, “a degree four plane curve has 28 bitangents”, “given four lines in three-space, there are 2 lines that meet all of them”. In each of these, we are saying that, as some parameter (the conics, the degree four curve, the lines) changes, the number of solutions to some equation stays constant. The “principle of conservation of number” refers to various theorems which make this precise.

In my experience, students in algebraic geometry tend to pick up the rough idea but remain hazy on the details, most likely because there are many different ways to make these details precise. I decided to try and write down all the basic results I could think of along these lines.

(more…)

Mathematical Research Community on Cluster Algebras in Utah this summer February 26, 2014

Posted by David Speyer in Uncategorized.
add a comment

This June 8 to 14, there will be a week-long gathering in Snowbird, Utah for young mathematicians working on cluster algebras. The target audience is current graduate students and people who received their Ph.D. within the last 3 or so years, who would be ready to start working on problems in cluster algebras. The hope is to spend a lot of time getting collaborations and projects going during the week. The organizers are Michael Gekhtman, Mark Gross, Gregg Musiker, Gordana Todorov and me.

We still have room for a number of additional applicants, so we would like to encourage more of you to apply. Please note that the application deadline of March 1 is firm.

Australian Research Council journal list February 24, 2014

Posted by Scott Morrison in Uncategorized.
13 comments

(This post may only be of interest to Australian mathematicians; sorry!)

Summary: A number of mathematics journals (e.g. Quantum Topology, Forum of Mathematics Sigma and Pi, and probably many others) are not listed on the new official journal list in Australia. Please help identify missing journals, and submit feedback via http://jacci.arc.gov.au/.

Every few years the Australian Research Council updates their “official list of journals”. One might wonder why it’s necessary to have such a list, but nevertheless it is there, and it is important that it is accurate because the research outputs of Australian mathematicians are essentially filtered by this list for various purposes.

There is a new draft list out, and the purpose of this post is to coordinate finding missing journals, and to ensure that interested mathematicians submit feedback before the deadline of March 15. Please note that while in the past this list included dubious rankings of journals, the current list is just meant to track all peer reviewed journals in each subject. Having a journal missing entirely means that some published papers will not be counted in measures of a department’s or university’s research output.

You can access the full list here, just the journals marked as mathematics here, and just the journals marked as pure mathematics here. These are not the “official” lists, for which you have to create an account (follow the instructions at http://www.arc.gov.au/era/current_consult.htm), and even then only an Excel version is available. I hope that by making these mathematics-specific lists available in a standard format, more mathematicians will take the time to look over the list.

Please look through the lists. If you see something missing, please comment here so we all know about it. In any case, please submit feedback via http://jacci.arc.gov.au/ (you’ll have to create an account first) recommending inclusion of the journals identified so far. Submitting a missing journal requires identifying an article published in it by an Australian author; feel free to add this information here as well if appropriate. (Thanks to Anthony Henderson for pointing out this detail!)

It is also possible to submit additional “FoR” (field of research) codes for journals on the list, and this may be of interest to people publishing cross-disciplinary research. Feel free to make suggestions along these lines here too: the AustMS has been advised that “multiple responses, rather than a single AustMS one, will carry more weight on this aspect”.

Course on categorical actions February 7, 2014

Posted by Ben Webster in Shameless Self Promotion.
6 comments

I have the excellent luck to be spending this semester in Paris, thanks to the Fondation Sciences Mathématiques de Paris. Part of the deal is that I’m giving a weekly course at the “graduate level” (though I think I have more professors than graduate students in the course) on higher representation theory. Also thanks to the FSMP, the course is being videotaped and posted online; the first installment is up here. I’m also posting the videos and additional commentary on a WordPress site; if you have any questions, you can always ask them there (or here, but maybe it’s more germane there).

Postdocs at ANU January 23, 2014

Posted by Scott Morrison in jobs.
comments closed

Tony Licata and I are each now hiring a postdoc at the Mathematical Sciences Institute of the Australian National University.

We intend that these will be 2-year positions, with minimal teaching requirements.

There is an informal description of the jobs at http://tqft.net/web/postdoc, including some information about the grants funding these positions. The official ad is online at http://jobs.anu.edu.au/PositionDetail.aspx?p=3736, and you can find it on MathJobs at http://www.mathjobs.org/jobs/jobs/5678.

Please contact us if you have questions, and please encourage good Ph.D. students (especially with interests in subfactors, fusion categories, categorification, or related subjects) to apply!

Mathematics Literature Project progress January 6, 2014

Posted by Scott Morrison in Uncategorized.
comments closed

We’ve made some good progress over at the Mathematics Literature Project. In particular, we’ve completely analyzed the 2013 issues of five journals:

[Bar charts for the five journals appeared here.]

(The colour-coded bars show the fractions of papers available on the arXiv, available on authors’ webpages, and not freely accessible at all; these now appear all over the wiki, but unfortunately don’t update automatically. Over at the wiki you can hover over these bars to get the numerical totals, too.)

Thanks everyone for your contributions so far! If you’ve just arrived, check out the tutorial I made on editing the wiki. Now, it’s time to do a little planning.

What questions should we be asking?

Here’s one we can start to answer right away.

What fraction of recent papers are available on the arXiv or on authors’ webpages?

For good generalist journals (e.g. Adv. Math. and Annals), almost everything! For subject area journals, there is wide variation (probably mostly depending on traditions in subfields): AGT is almost completely freely accessible, while Discrete Math. is at most half.

I hope we’ll soon be able to say this for many other journals, too.

Here’s the question I really want to have answers for:

Does being freely accessible correlate well with quality?

It’s certainly tempting to think so, seeing how accessible Advances and Annals are. I think to really answer this question we’re going to have to classify all the articles in slightly older issues (2010?) and then start looking at the citation counts for articles in the two pools. If we get coverage of more journals, we can also look for the correlation between, say, impact factor and the ratio of freely accessible content.

What next?

I don’t want to just list every journal on the wiki; it’s best if editors (and the helpful bots working in the background) can focus attention and enjoy the pleasure of finishing off issues and journals. Suggestions for journals to add next are welcome in the comments. I’ve already included the tables of contents for the Journal of Number Theory and the Journal of Functional Analysis. (It will be nice to be able to make comparisons between JFA and GAFA, I think.)

I’ve been working with some people on automating the entry of data in the wiki (mainly by using arXiv metadata; there are actually way more articles there with journal references and DOIs than I’d expected). Hopefully this will make the wiki editing experience more fun, as a lot of the work will have already been done, and humans just get to handle the hard and interesting cases.
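For anyone curious what that automation looks like, here is a minimal sketch using the public arXiv API: it pulls some recent math.GT entries and prints the ones whose metadata already carries a journal reference. The endpoint and Atom fields are the arXiv API’s real ones, but treat the query itself as illustrative rather than as the scripts we’re actually running.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Ask the public arXiv API for some recent math.GT submissions.
url = ("http://export.arxiv.org/api/query"
       "?search_query=cat:math.GT&start=0&max_results=20")
feed = ET.parse(urllib.request.urlopen(url))
ns = {"atom": "http://www.w3.org/2005/Atom",
      "arxiv": "http://arxiv.org/schemas/atom"}

for entry in feed.getroot().findall("atom:entry", ns):
    jref = entry.find("arxiv:journal_ref", ns)
    if jref is not None:  # the authors supplied a journal reference
        title = " ".join(entry.find("atom:title", ns).text.split())
        print(title, "->", jref.text)
```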

An editable database tracking freely accessible mathematics literature. January 3, 2014

Posted by Scott Morrison in papers, publishing, Uncategorized, websites.
comments closed

(This post continues a discussion started by Tim Gowers on Google+. [1] [2])

(For the impatient, go visit http://tqft.net/mlp, or for the really impatient http://tqft.net/mlp/wiki/Adv._Math./232_(2013).)

It would be nice to know how much of the mathematical literature is freely accessible. Here by ‘freely accessible’ I mean “there is a URL which, in any browser anywhere in the world, resolves to the contents of the article”. (And my intention throughout is that this article is legitimately hosted, either on the arXiv, on an institutional repository, or on an author’s webpage, but I don’t care how the article is actually licensed.) I think it’s going to be okay to not worry too much about discrepancies between the published version and a freely accessible version — we’re all grown-ups and understand that these things happen. Perhaps a short comment field, containing for example “minor differences from the published version”, could be provided when necessary.

This post outlines an idea to achieve this, via a human editable database containing the tables of contents of journals, and links, where available, to a freely accessible copy of the articles.

It’s important to realize that the goal is *not* to laboriously create a bad search engine. Google Scholar already does a very good job of identifying freely accessible copies of particular mathematics articles. The goal is to be able to definitively answer questions such as “which journals are primarily, or even entirely, freely accessible?”, to track progress towards making the mathematical literature more accessible, and finally to draw attention to, and focus enthusiasm for, such progress.

I think it’s essential, although this is not obvious, that at first the database is primarily created “by hand”. Certainly there is scope for computer programs to help a lot! (For example, by populating tables of contents, or querying Google Scholar or other sources to find freely accessible versions.) Nevertheless, curation at the per-article level will certainly be necessary, and so whichever route one takes it must be possible for humans to edit the database. I think that starting off with the goal of primarily human contributions achieves two purposes: one, it provides an immediate means to recruit and organize interested participants, and two, it allows much more flexibility in the design and organization of the collected data — hopefully many eyes will reveal bad decisions early, while they’re easy to fix.

That said, we had better remember that eventually computers may be very helpful, and avoid design decisions that make computer interaction with the database difficult.

What should this database look like? I’m imagining a website containing a list of journals (at first perhaps just one), and for each journal a list of issues, and for each issue a table of contents.

The table of contents might be very simple, having as few as four columns: the title, the authors, the link to the publisher’s webpage, and a freely accessible link, if known. All these lists and table of contents entries must be editable by a user — if, for example, no freely accessible link is known, this fact should be displayed along with a prominent link or button which allows a reader to contribute one.
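To make the four-column idea concrete, here is a minimal sketch of what one machine-readable table of contents entry might look like (the field names are illustrative, not a committed schema):

```python
# One table of contents entry with the four columns described above.
# All field names and values here are hypothetical examples.
entry = {
    "title": "An example article title",
    "authors": ["A. Author", "B. Author"],
    "publisher_url": "https://doi.org/10.0000/example",  # hypothetical DOI
    "free_url": None,   # None should render as a prominent "contribute a link" button
    "comment": "",      # e.g. "minor differences from the published version"
}
```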

At this point I think it’s time to consider what software might drive this website. One option is to build something specifically tailored to the purpose. Another is to use an essentially off-the-shelf wiki, for example TiddlyWiki, as Tim Gowers used when analyzing an issue of Discrete Math.

Custom software is of course great, but it takes programming experience and resources. (That said, perhaps not much — I’m confident I could make something usable myself, and I know people who could do it in a more reasonable timespan!) I want to essentially ignore this possibility, and instead use MediaWiki (the wiki software driving Wikipedia) to build a very simple database that is readable and editable by both humans and computers. I’ve previously used MediaWiki to develop the Knot Atlas at http://katlas.org/ with Dror Bar-Natan (and subsequently many wiki editors); there we solved a very similar set of problems, achieving human readable and editable pages, with “under the hood” a very simple database maintained directly in the wiki. If you’re impatient, jump to http://tqft.net/mlp and start editing!

From the drawers of the museum December 12, 2013

Posted by Noah Snyder in fusion categories, quantum groups, subfactors, Uncategorized.
comments closed

One of my amateur interests is paleontology. Paleontologists looking for new examples have two options: go out in the field and dig up a new example, or go looking through drawers of museums to find old examples that had been overlooked. In this blog post I want to give an interesting example of the latter kind of research being useful in mathematics. Namely, in discussions with Zhengwei Liu, we realized that an old example of Ocneanu’s gives an answer to a question that was thought to be open.

One of the central problems in fusion categories is to determine to what extent fusion categories can be classified in terms of finite groups and quantum groups (perhaps combined in strange ways), or whether there are exceptional fusion categories which cannot be so classified. My money is on the latter, and in particular I think extended Haagerup gives an exotic fusion category. However, there are a number of examples which seem to involve finite groups, but where we don’t know how to classify them in terms of group-theoretic data. For example, the Haagerup fusion category has a 3-fold symmetry and may be built from \mathbb{Z}/3\mathbb{Z} or S_3 (as suggested by Evans-Gannon). The simplest examples of these kinds of “close to group” categories are called “near-group categories”: they have only one non-invertible object X and the fusion rule

X \otimes X \cong X^{\oplus n} \oplus \bigoplus_{g \in G} g

where the sum runs over the group G of invertible objects. A result of Evans-Gannon (independently proved by Izumi in slightly more generality) says that outside of a reasonably well understood case (where n = \#G - 1 and the category is described by group-theoretic data), n must be a multiple of \#G. There are the Tambara-Yamagami categories, where n = 0, and many examples (E6, examples of Izumi, many examples of Evans-Gannon) where n = \#G.
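For instance, Tambara-Yamagami for G = \mathbb{Z}/2\mathbb{Z} (so n = 0) is the familiar Ising fusion category, with X \otimes X \cong 1 \oplus g, while the trivial group with n = 1 = \#G gives the Fibonacci rule X \otimes X \cong 1 \oplus X.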

Here’s the question: Are there examples where n is larger than \#G?

It turns out the answer is yes! In fact the answer is given by the 0-graded part of the quantum subgroup E_9 of quantum SU(3) from Ocneanu’s tables here. I’ll explain why below.

(more…)

The Hoffman-Singleton graph and groupoids October 16, 2013

Posted by David Speyer in Uncategorized.
comments closed

The Hoffman-Singleton graph is the unique graph on 50 vertices with the following property: Every vertex is of degree 7 and, between any two vertices, there is either an edge or a path of length two, but not both. The Hoffman-Singleton graph has a large symmetry group — order 252,000 — and there are many ways to describe it that emphasize different symmetry properties. Various constructions describe it in terms of the geometry of the affine plane \mathbb{F}_5^2, the projective space \mathbb{P}^3(\mathbb{F}_2) or just pure combinatorics. Here is one more that I noticed the other day when reading through the original Hoffman-Singleton paper. While turning it into a blogpost, I noticed that the same observation was made by Markus Junker in 2005.
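If you want to play with the graph concretely, here is a short verification sketch (mine, not from the paper) built on Robertson’s pentagons-and-pentagrams construction; I’m quoting the construction from memory, so treat the join rule as an assumption that the asserts are testing.

```python
import itertools

# Robertson's construction (as I remember it): 5 pentagons P_0..P_4 and
# 5 pentagrams Q_0..Q_4, each on vertices 0..4. Vertex j of pentagon h is
# joined to vertex h*i + j (mod 5) of pentagram i; pentagon edges join
# j ~ j+-1 (mod 5), pentagram edges join j ~ j+-2 (mod 5).
V = [(s, a, j) for s in "PQ" for a in range(5) for j in range(5)]

def adjacent(u, v):
    (s, a, j), (t, b, k) = u, v
    if s == t:
        if a != b:
            return False
        step = 1 if s == "P" else 2
        return (j - k) % 5 in (step, 5 - step)
    (h, pj), (i, qk) = ((a, j), (b, k)) if s == "P" else ((b, k), (a, j))
    return (h * i + pj) % 5 == qk

assert len(V) == 50
assert all(sum(adjacent(u, v) for v in V if v != u) == 7 for u in V)
for u, v in itertools.combinations(V, 2):
    two_paths = sum(adjacent(u, w) and adjacent(w, v) for w in V)
    # an edge and no path of length two, or no edge and exactly one such path
    assert (adjacent(u, v) and two_paths == 0) or \
           (not adjacent(u, v) and two_paths == 1)
print("50 vertices, 7-regular, edge XOR unique 2-path: all checks pass")
```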

(more…)

The quest for narrow admissible tuples July 2, 2013

Posted by Scott Morrison in polymath.
comments closed

(A guest post by Andrew Sutherland.)

With more than 400 comments tacked on to the previous blog post, it’s past time to roll over to a new one. As just punishment for having contributed more than my fair share of those comments, Scott has asked me to write a guest post summarizing the current state of affairs. This task is made easier by Tao’s recent progress report on the polymath project to sharpen Zhang’s result on bounded gaps between primes. If you haven’t already read the progress report I encourage you to do so, but for the benefit of newcomers who would like to understand how our quest for narrow admissible tuples fits in the bounded prime gaps polymath project, here goes.

The Hardy-Littlewood prime tuples conjecture states that every admissible tuple has infinitely many translates that consist entirely of primes. Here a tuple is simply a set of integers, which we view as an increasing sequence t_1 < t_2 < \ldots < t_k; we refer to a tuple of size k as a k-tuple. A tuple is admissible if it does not contain a complete set of residues modulo any prime p. For example, 0,2,4 is not an admissible 3-tuple, but both 0,2,6 and 0,4,6 are. A translate of a tuple is obtained by adding a fixed integer to each element; the sequences 5,7,11 and 11,13,17 are the first two translates of 0,2,6 that consist entirely of primes, and we expect that there are infinitely more. Admissibility is clearly a necessary condition for a tuple to have infinitely many translates made up of primes; the conjecture is that it is also sufficient.
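As a quick illustration, here is a minimal admissibility check (a sketch assuming the sympy library for prime generation, not code from the project). Only primes p \le k can obstruct admissibility, since k integers occupy at most k of the p residue classes when p > k.

```python
from sympy import primerange  # assumes sympy is installed

def is_admissible(tup):
    """True if tup misses at least one residue class modulo every prime."""
    k = len(tup)
    for p in primerange(2, k + 1):  # only primes p <= k can be fully covered
        if len({t % p for t in tup}) == p:
            return False
    return True

print(is_admissible((0, 2, 4)))  # False: hits 0, 1, 2 mod 3
print(is_admissible((0, 2, 6)))  # True
print(is_admissible((0, 4, 6)))  # True
```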

Zhang proved a weakened form of the prime tuples conjecture, namely, that for all k\ge k_0, every admissible k-tuple has infinitely many translates that contain at least 2 primes (as opposed to k). He made this result explicit by showing that one may take k_0=3,500,000, and then noted the existence of an admissible k_0-tuple with diameter (difference of largest and smallest elements) less than 70,000,000. Zhang’s k_0-tuple consists of the first k_0 primes greater than k_0, which is clearly admissible. As observed by Trudgian, the diameter of this k_0-tuple is actually less than 60,000,000 (it is precisely 59,874,954).
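The construction is easy to reproduce in miniature (again a sketch assuming sympy; Zhang’s actual k_0 is too large to run casually): taking the first k_0 primes greater than k_0 guarantees that no element is divisible by any prime p \le k_0, so the residue class 0 mod p is always missed.

```python
from sympy import nextprime  # assumes sympy is installed

def zhang_tuple(k0):
    """The first k0 primes greater than k0 (automatically admissible)."""
    tup, p = [], k0
    while len(tup) < k0:
        p = nextprime(p)
        tup.append(p)
    return tup

t = zhang_tuple(1000)             # a toy k0, not 3,500,000
print(t[0], t[-1], t[-1] - t[0])  # endpoints and diameter
```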

Further improvements to Zhang’s bound came rapidly, first by finding narrower admissible k_0-tuples, then by optimizing k_0 and the critical parameter \varpi on which it depends (this means making \varpi larger; k_0 is proportional to \varpi^{-3/2}). Since it began on June 4, the polymath8 project has been working along three main lines of attack: (1) improving bounds on \varpi and a related parameter \delta, (2) deriving smaller values of k_0 from a given pair (\varpi, \delta), and (3) the search for narrow admissible k_0-tuples. You can see the steady progress that has been made on these three interlocking fronts by viewing the list of world records.

A brief perusal of this list makes it clear that, other than some quick initial advances made by tightening obvious slack in Zhang’s bounds, most of the big gains have come from improving the bounds on \varpi (edit: as pointed out by v08ltu below, reducing the dependence of k_0 on \varpi from \varpi^{-2} to \varpi^{-3/2} was also a major advance); see Tao’s progress report and related blog posts for a summary of this work. Once new values of \varpi and \delta have been established, it is now relatively straightforward to derive an optimal k_0 (at least to within 1 or 2; the introduction of Pintz’s method has streamlined this process). There then remains the task of finding admissible k_0-tuples that are as narrow as possible; it is this last step that is the subject of this blog post and the two that preceded it. Our goal is to compute H(k_0), the smallest possible diameter of an admissible k_0-tuple, or at least to obtain bounds (particularly upper bounds) that are as tight as we can make them.

A general way to construct a narrow admissible k_0-tuple is to first sieve the integers of one residue class modulo each prime p\le k_0 and then choose a set of k_0 survivors, preferably ones that are as close together as possible. In fact, it is usually not necessary to sieve a residue class for every prime p\le k_0 in order to obtain an admissible k_0-tuple; asymptotically, a sieving bound of O(k_0/\log k_0) should suffice. The exact number of residue classes that require sieving depends not only on k_0, but also on the interval in which one looks for survivors (it could also depend on the order in which one sieves residue classes, but we will ignore this issue).

All of the initial methods we considered involved sieving residue classes 0 mod p, and varied only in where to look for the survivors. Zhang takes the first k_0 survivors greater than 1 (after sieving modulo primes up to k_0), and Morrison’s early optimizations effectively did the same, but with a lower sieving bound. The Hensley-Richards approach instead selects survivors from an interval centered at the origin, and the asymmetric Hensley-Richards optimization shifts this interval slightly (see our wiki page for precise descriptions of each of these approaches, along with benchmark results for particular k_0 values of interest).
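In code, the simplest of these variants looks something like the following sketch (not the project’s implementation; it assumes the interval [2, limit] is wide enough to contain k_0 survivors):

```python
from sympy import primerange  # assumes sympy is installed

def sieve_zero_classes(k0, limit):
    """Survivors in [2, limit] after sieving 0 mod p for all primes p <= k0."""
    alive = [True] * (limit + 1)
    for p in primerange(2, k0 + 1):
        alive[0::p] = [False] * len(alive[0::p])
    return [n for n in range(2, limit + 1) if alive[n]]

k0 = 100
tup = sieve_zero_classes(k0, 2500)[:k0]  # Zhang's choice: first k0 survivors > 1
print(tup[-1] - tup[0])                  # diameter of this admissible 100-tuple
```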

But there are sound practical reasons for not always sieving 0 mod p. Assuming we believe the prime tuples conjecture (which we do!), we can certainly find an optimally narrow admissible k_0-tuple somewhere among the primes greater than k_0, all of which survive sieving 0 modulo primes p\le k_0. However, the quantitative form of the prime tuples conjecture tells us roughly how far we might need to search in order to find one. The answer is depressingly large: the expected number of translates of any particular admissible k_0-tuple to be found among the primes p\le x is O(x/\log^{k_0}x); thus we may need to search through the primes in an interval of size exponential in k_0 in order to have a good chance of finding even one translate of the k_0-tuple we seek.

Schinzel suggested that it would be better to sieve 1 mod 2 rather than 0 mod 2, and more generally to sieve 1 mod p for all primes up to some intermediate bound and then switch to sieving 0 mod p for the remaining primes. We find that simply following Schinzel’s initial suggestion works best, and one can see the improvement this yields on the benchmarks page (unlike Schinzel, we don’t restrict ourselves to picking the first k_0 survivors to the right of the origin; we may shift the interval to obtain a better bound).

But sieving a fixed set of residue classes is still too restrictive. In order to find narrower admissible tuples we must relax this constraint and instead consider a greedy approach, where we start by picking an interval in which we hope to find k_0 survivors (we know that the size of this interval should be just slightly larger than k_0 \log k_0), and then run through the primes in order, sieving whichever residue class is least occupied by survivors (we can break ties in any way we like, including at random). Unfortunately a purely greedy approach does not work very well. What works much better is to start with a Schinzel sieve, sieving 1 mod 2 and 0 mod primes up to a bound slightly smaller than \sqrt{k_0\log k_0}, and then start making greedy choices. Initially the greedy choice will tend to be the residue class 0 mod p, but it will deviate as the primes get larger. For best results the choice of interval is based on the success of the greedy sieving.
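Here is a compact sketch of the pipeline just described (my own illustration, assuming sympy; certainly not the tuned code behind the records): a Schinzel phase, then greedy choices, then taking the narrowest window of k_0 survivors, widening the interval and retrying if the sieve leaves too few.

```python
import math
from collections import Counter
from sympy import primerange  # assumes sympy is installed

def greedy_greedy(k0, width=1.3):
    """Sketch of the 'greedy-greedy' sieve described above."""
    L = int(width * k0 * math.log(k0))
    while True:
        alive = set(range(-L // 2, L // 2 + 1))   # interval around the origin
        schinzel = math.isqrt(int(k0 * math.log(k0)))
        for p in primerange(2, k0 + 1):
            if p == 2:
                r = 1                             # Schinzel: sieve 1 mod 2
            elif p <= schinzel:
                r = 0                             # Schinzel: sieve 0 mod small p
            else:                                 # greedy: least-occupied class
                counts = Counter(n % p for n in alive)
                r = min(range(p), key=lambda c: counts[c])
            alive = {n for n in alive if n % p != r}
        survivors = sorted(alive)
        if len(survivors) >= k0:                  # pick the narrowest window
            j = min(range(len(survivors) - k0 + 1),
                    key=lambda i: survivors[i + k0 - 1] - survivors[i])
            return survivors[j:j + k0]
        L = int(1.1 * L) + 1                      # too few survivors: widen, retry

tup = greedy_greedy(342)
print(tup[-1] - tup[0])  # diameter; compare against the benchmarks table
```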

This is known as the “greedy-greedy” algorithm, and while it may not have a particularly well-chosen name (this is the downside of doing math in a blog comment thread: you tend to throw out the first thing that comes to mind and then get stuck with it), it performs remarkably well. For the values of k_0 listed in the benchmarks table, the output of the greedy-greedy algorithm is within 1 percent of the best results known, even for k_0 = 342, where the optimal value is known.

But what about that last 1 percent? Here we switch to local optimizations, taking a good admissible tuple (e.g. one output by the greedy-greedy algorithm) and trying to make it better. There are several methods for doing this, some involve swapping a small set of sieved residue classes for a different set, others shift the tuple by adding elements at one end and deleting them from the other. Another approach is to randomly perturb the tuple by adding additional elements that make it inadmissible and then re-sieving to obtain a new admissible tuple. This can be done in a structured way by using a randomized version of the greedy-greedy algorithm to obtain a similar but slightly different admissible tuple in approximately the same interval, merging it with the reference tuple, and then re-sieving to obtain a new admissible tuple. These operations can all be iterated and interleaved, ideally producing a narrower admissible tuple. But even when this does not happen one often obtains a different admissible tuple with the same diameter, providing another reference tuple against which further local optimizations may be applied.

Recently, improvements in the bounds on \varpi brought k_0 below 4507, and we entered a regime where good bounds on H(k) are already known, thanks to prior work by Engelsma. His work was motivated by the second Hardy-Littlewood conjecture, which claims that \pi(x+y)-\pi(x)\le \pi(y) for all x,y\ge 2, a claim that Hensley and Richards showed is asymptotically incompatible with the prime tuples conjecture (and which is now generally believed to be false). Engelsma was able to find an admissible 447-tuple with diameter 3158, implying that if the prime tuples conjecture holds then there exists an x (infinitely many, in fact) for which \pi(x+3159)-\pi(x)=447, which is greater than \pi(3159) = 446. In the process of obtaining this result, Engelsma spent several years doing extensive computations, and obtained provably optimal bounds on H(k) for all k\le 342, as well as upper bounds on H(k) for k \le 4507. The quality of these upper bounds is better in some regions than in others (Engelsma naturally focused on the areas that were most directly related to his research), but they are generally quite good, and for k up to about 700 they are believed to be the best possible.
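The arithmetic here is easy to double-check (assuming sympy):

```python
from sympy import primepi  # assumes sympy is installed

# An admissible 447-tuple of diameter 3158 spans an interval of 3159
# integers, so a translate consisting of primes would give
# pi(x + 3159) - pi(x) = 447, exceeding pi(3159).
print(primepi(3159))  # 446
```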

We have now merged our results with those of Engelsma and placed them in an online database of narrow admissible k-tuples that currently holds records for all k up to 5000. The database also accepts submissions of better admissible tuples, and we invite anyone and everyone to try to improve it. Since it went online a week ago it has processed over 1000 submissions, and currently holds tuples that improve Engelsma’s bounds at 1927 values of k, the smallest of which is 785. As I write this, k_0 stands at 873 (subject to confirmation), which happens to be one of the places where we have made an improvement, yielding a current prime gap bound of 6,712 (but I expect this may drop again soon).

In addition to supporting the polymath prime gaps project, we expect this database will have other applications, including further investigations of the second Hardy-Littlewood conjecture. As can be seen in this chart, we have found many examples of admissible k-tuples whose diameter d satisfies k > \pi(d+1); the growth rate of the implied lower bounds on k - \pi(H(k)+1) can be viewed in this chart.
