One of my amateur interests is paleontology. Paleontologists looking for new examples have two options: go out in the field and dig up a new example, or go looking through drawers of museums to find old examples that had been overlooked. In this blog post I want to give an interesting example of the latter kind of research being useful in mathematics. Namely in discussions with Zhengwei Liu, we realized that an old example of Ocneanu’s gives an answer to a question that was thought to be open.
One of the central problems in fusion categories is to determine to what extent fusion categories can be classified in terms of finite groups and quantum groups (perhaps combined in strange ways), or whether there are exceptional fusion categories which cannot be so classified. My money is on the latter, and in particular I think extended Haagerup gives an exotic fusion category. However, there are a number of examples which seem to involve finite groups, but where we don’t know how to classify them in terms of group-theoretic data. For example, the Haagerup fusion category has a 3-fold symmetry and may be built from or (as suggested by Evans-Gannon). The simplest examples of this kind of “close to group” category are called “near-group categories”: they have only one non-invertible object and have the fusion rules
$X \otimes X \cong nX \oplus \bigoplus_{g \in G} g$ for some group $G$ of invertible objects. A result of Evans-Gannon (independently proved by Izumi in slightly more generality) says that outside of a reasonably well understood case (where $n = |G| - 1$ and the category is described by group-theoretic data), we have that $n$ must be a multiple of $|G|$. There are the Tambara-Yamagami categories where $n = 0$, and many examples (E6, examples of Izumi, many examples of Evans-Gannon) where $n = |G|$.
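As a quick sanity check (my own aside, not from the original post): taking Frobenius–Perron dimensions in the near-group fusion rule $X \otimes X \cong nX \oplus \bigoplus_{g \in G} g$ gives a quadratic equation for the dimension $d$ of the non-invertible object $X$, since each invertible object has dimension 1:

```latex
% Applying Frobenius-Perron dimension to the fusion rule:
d^2 \;=\; n d + |G|
\qquad\Longrightarrow\qquad
d \;=\; \frac{n + \sqrt{n^2 + 4|G|}}{2}.
% Example: the even half of E6 is a near-group category with
% G = Z/2 and n = 2, giving d = 1 + sqrt(3), which indeed
% satisfies d^2 = 2d + 2.
```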
Here’s the question: are there examples where $n$ is larger than $|G|$?
It turns out the answer is yes! In fact the answer is given by the -graded part of the quantum subgroup of quantum from Ocneanu’s tables here. I’ll explain why below.
This may not be of interest to most of our readers, but I have sad news that’s relevant to many of the bloggers. Last weekend Raleigh’s burned down. It was the traditional place for beers after the seminar for which this blog is named, and the first draft of my qual syllabus was originally written on a Raleigh’s napkin (back when they had napkins that were perfect for writing math on). It’s always sad to lose a place that felt like home. Have a drink outside in memory.
The NSF recently announced some new policies concerning work-life balance. There seems to have been a publicity push about it on the part of the White House, as it made the regular news. The main changes seem to be adding flexibility to grant rules for new parents. Mostly pretty obvious stuff like letting people delay the use of their grant if they go on parental leave. Good ideas to be sure, but mostly just catching up to what they already should have been doing.
This reminded me of one of my favorite ideas I’ve heard for an NSF policy change which would help career-life balance. Currently the MSPRF postdoc policy reads:
Changes in the host institution will be approved only under extremely unusual and compelling circumstances… Securing a position at an institution other than the proposed host institution is not considered an “extremely unusual and compelling circumstance.”
The suggestion is to change this by adding the line:
Nonetheless, if the fellow has a partner who is unable to procure a job near the sponsoring institution, and both the fellow and their partner have job offers in another city, that will be considered a compelling circumstance.
Masaki Izumi, Vaughan Jones, Scott Morrison and I recently uploaded to the arXiv the third and final part of the four-part series “Subfactors of index less than 5.” This is a project we’ve been working on for a long time (since Emily, Scott, and I started running Planar Algebra Programming Camps in the spring of ’08), and after three years and a lot of work from many people it’s very exciting to finally have made it there.
In this post I’ll state the main theorem, say a few words about the history, and then explain the main takeaway lesson we learned in this project.
As mathematicians we spend most of our lives confused about something or other. Of course, this is occasionally interrupted by moments of clarity that make it worth it. I wanted to discuss a particularly pleasant circumstance: when two confusions annihilate each other. I’ll give two examples of times that this happened to me, but people are encouraged to provide similar examples in the comments.
In both cases what happened was that I had:
- A question to which I didn’t know the answer
- An answer to which I didn’t know the question
In quantum algebra we’re often studying some classical algebraic notion, but instead of working in the category of vector spaces you instead work in a more general tensor category. For example, the theory of finite type knot invariants is roughly the theory of simple Lie algebra objects in symmetric tensor categories, while the theory of subfactors is roughly that of simple algebra objects in unitary tensor categories. The basic question is then which notions from the classical theory generalize to the quantum setting. For example, is there an analogue of Artin-Wedderburn for semisimple algebra objects in fusion categories? The goal of this post is to argue that the following theorem (due to Ostrik, modulo any errors I’ve introduced) gives a satisfactory generalization.
Any semisimple algebra object in a fusion category $\mathcal{C}$ is isomorphic (as an algebra object) to the internal endomorphisms End(X) for some object X in a semisimple module category over $\mathcal{C}$.
First I’ll unpack the definitions in this statement and then I’ll explain how Artin-Wedderburn for semisimple algebras over a fixed field k follows from this statement. I’ve been thinking about this theorem because Pinhas Grossman and I have been using it to classify “quantum subgroups” of the Haagerup fusion categories, but that’s a story for another day.
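Here is a sketch of the simplest case, to show why this statement deserves to be called a generalization of Artin–Wedderburn (my own unpacking, following the standard dictionary between module categories over $\mathrm{Vec}_k$ and semisimple algebras):

```latex
% Take C = Vec_k with k algebraically closed. A semisimple module
% category over Vec_k is a finite direct sum of copies of Vec_k,
% so an object is a tuple X = (V_1, ..., V_r) of vector spaces.
% The internal endomorphisms are computed componentwise:
\underline{\mathrm{End}}(X)
  \;\cong\; \bigoplus_{i=1}^{r} \mathrm{End}_k(V_i)
  \;\cong\; \bigoplus_{i=1}^{r} \mathrm{Mat}_{n_i}(k),
  \qquad n_i = \dim V_i,
% which is exactly the Artin-Wedderburn form of a semisimple
% k-algebra. (Over a non-algebraically-closed field the simple
% module categories involve division algebras over k, recovering
% the matrix-algebras-over-division-rings form of the theorem.)
```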
It is a truth universally acknowledged that journals fail to add significant value in a way that justifies their high prices (we write, typeset, referee, and edit, and they do basically nothing except charge an arm and a leg for it). However, I think it is underappreciated how some journals actually take away value, typically by wasting our time with bad interfaces or imposing unreasonable typesetting and file-format requirements. I’m in the middle of a particularly hellacious experience with the Journal of Functional Analysis (whose support staff have been unhelpful on top of incompetent), but I’ve also run into similar inconveniences with IMRN (where at least the support staff was helpful in getting around the problems).
Suppose we lived in a world where journals did the following:
- Took submissions of papers by receiving their arXiv ID number.
- Refereed them and had the authors make necessary changes.
- Slapped the journal’s logo on the paper and called it accepted.
That to me is the baseline of how things should work (and is roughly how things do work at many journals: ANT/G&T/AGT obviously, but also CMP/JAMS/Acta were more or less similar). Anything else the journals do beyond that should add value rather than remove it. Here are ways that journals often remove value:
- Requiring additional typesetting work prior to submission. I’m happy to do a little bit of grunt work on an accepted paper, but it’s very frustrating to struggle just to submit a paper. An arXiv ID or a PDF should be good enough for submission.
- Having difficult to use and poorly engineered submission systems. (E.g. JFA has no way of allowing you to delete multiple files you’ve uploaded. So if you upload 200 images and then need to change them because their system failed to compile you need to remove each file manually.)
- Having unnecessarily strict file format requirements (e.g. JFA doesn’t want .png, and IMRN wasn’t able to deal with TikZ).
- Having strange limitations on how files can be uploaded, in particular not allowing subfolders (JFA and IMRN) or only allowing particular sorts of zip formats (IMRN).
- Inserting the evil “et al.” into citations.
- Update: Introducing mathematical errors during copy-editing.
Any other important ways that journals remove value that I’m missing?
UPDATE This post has been attracting an extraordinary amount of spam. (See post above.) I (DES) have changed the title to see if that helps.
This fall I’ll be teaching my first regular college class (I’d only taught sections at Berkeley, though I suppose the summer sophomore tutorial I taught at Harvard might count). It’s on group representation theory, which is my favorite subject, so I’m excited about it. I was just thinking about some possible homework problems, and I got to thinking about creative and unusual grading schemes I’ve seen in previous classes I’d taken, and figured that might make a fun blog discussion topic. (Since this is my first time teaching I won’t be experimenting with any unusual grading this time around, though I think it might be interesting to try one of these in the future.)
At the Ross summer math program if you don’t answer a problem satisfactorily then you get a REDO. This means you’re expected to go back and redo the problem and get it right. I’ve never seen this tried in a regular class, but I think it could be a good idea for an “intro to proof writing” class. The point being that in such a class the material itself isn’t super important, and so if you do fewer homework problems total but learn how to do them right that’s a good tradeoff.
Grading out of many points:
When I took group representation theory from Richard Taylor, the exams were graded out of a ridiculous number of points. A 5-question midterm would be out of 600 or so points. At first glance this seems silly (and it certainly would be a bad idea for a class with multiple graders where you want consistency between graders), but it actually works very well. Here’s the point: if someone does something you don’t like, no matter how small it is, you can take off points! Unclear sentence? Minus 1. Used the wrong terminology? Minus 3 points. This way the grader can effectively communicate relatively small shortcomings in your write-ups, which wouldn’t be possible if you were grading out of a smaller number of points.
This idea comes from a class with Givental that I didn’t take our first year of grad school, so perhaps someone who took the class can correct me on the details. The basic idea was that on the final, in addition to the points you got for each problem, there was a pool of extra points which you got if you never wrote anything false on the exam. But as soon as you wrote something that was wrong you lost those points. This is good training for graduate students, who soon won’t have graders telling them when they made a mistake, and it’s a good way to keep people from spewing nonsense in an attempt to get partial credit. If I remember correctly the perfection bonus was quite substantial (I want to say it was worth as much as a full question, on an exam where you needed a little more than 2 correct solutions).
What do people think of these ideas? Any other interesting grading schemes you’ve heard of?
Frank Calegari, Scott Morrison, and I recently uploaded to the arXiv our paper Cyclotomic integers, fusion categories, and subfactors. In this paper we give two applications of cyclotomic number theory to quantum algebra.
- A complete list of possible Frobenius-Perron dimensions in the interval (2, 76/33) for an object in a fusion category.
- Given a family of graphs G_n obtained from a graph G by attaching a chain of n edges to a chosen vertex, an effective bound on the greatest n so that G_n can be the principal graph of a subfactor.
Neither of these results looks like it involves number theory. The connection comes from a result of Etingof, Nikshych, and Ostrik, which says that the dimension of every object in a fusion category is a cyclotomic integer.
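As a toy numerical illustration of the second result (my own sketch, not from the paper, with the base graph chosen purely for concreteness): the Frobenius–Perron dimension of a graph is its largest adjacency eigenvalue, and attaching a longer and longer chain to a fixed vertex pushes this norm up monotonically toward a limit. Taking G to be the path with three vertices and attaching the chain at its middle vertex, the graphs G_n are the Dynkin diagrams D_{n+3}, whose norms climb toward 2:

```python
import numpy as np

def star_with_chain(n):
    """Adjacency matrix of a 3-vertex path with a chain of n edges
    attached at its middle vertex: vertex 0 is the center, 1 and 2
    are leaves, and 3, ..., n + 2 form the chain. The result is the
    Dynkin diagram D_{n+3}."""
    m = n + 3
    A = np.zeros((m, m))
    edges = [(0, 1), (0, 2), (0, 3)] + [(i, i + 1) for i in range(3, m - 1)]
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

def graph_norm(A):
    """Largest eigenvalue of a symmetric adjacency matrix,
    i.e. the Frobenius-Perron dimension of the graph."""
    return float(np.linalg.eigvalsh(A).max())

# Norms of G_1, ..., G_30: strictly increasing, all below 2,
# approaching 2 as the chain grows. So for any norm (index) bound
# strictly below the limit, only finitely many G_n qualify --
# the paper makes such a bound on n effective.
norms = [graph_norm(star_with_chain(n)) for n in range(1, 31)]
```

The monotone convergence is what makes an "effective bound on the greatest n" meaningful: past that n, every longer chain overshoots the allowed norm.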
A possible subtitle to this paper is
What’s so special about 2?