This is a plug for my first arXiv preprint, 0812.3440. It didn’t really exist as an independent entity until about a month ago, when I got a little frustrated writing a larger paper and decided to package some results separately. It is the first in a series of n papers (where n is about five right now) attacking the generalized moonshine conjecture. Perhaps the most significant result is that nontrivial replicable functions of finite order with algebraic integer coefficients are genus zero modular functions. This answers a question that has been floating around the moonshine community for about 30 years.

Moonshine originated in the 1970s, when some mathematicians noticed apparent numerical coincidences between the theory of modular functions and the theory of finite simple groups. Most notable was McKay’s observation that 196884=196883+1, where the number on the left is the first nontrivial Fourier coefficient of the modular function j, which classifies complex elliptic curves, and the numbers on the right are the dimensions of the smallest irreducible representations of the largest sporadic finite simple group, called the monster. Modular functions and finite group theory were two areas of mathematics that were not previously thought to be deeply related, so this came as a bit of a surprise. Conway and Norton encoded the above equation together with other calculations by Thompson and themselves in the Monstrous Moonshine Conjecture, which was proved by Borcherds around 1992.

I was curious about the use of the word “moonshine” here, so I looked it up in the Oxford English Dictionary. There are essentially four definitions:

- Light from the moon, presumably reflected from the sun (1425)
- Appearance without substance, foolish talk (1468 – originally “moonshine in the water”)
- A base of rosewater and sugar, or a sweet pudding (1558 cookbook!)
- Smuggled or illegally distilled alcoholic liquor (1782)

The fourth and most recent definition seems to be the most commonly used among people I know. The second definition is what gets applied to the monster, and as far as I can tell, its use is confined to English people over 60. It seems to be most popularly known among scientists through a quote by Rutherford concerning the viability of atomic power.

I’ll give a brief explanation of monstrous moonshine, generalized moonshine, and my paper below the fold. There is a question at the bottom, so if you get tired, you should skip to that.

The story begins with the classification of finite simple groups. The use of the word “simple” here is a technical term that doesn’t mean that they are easy to understand. Rather, they are in some sense atomic, i.e., their internal structure is tightly bound together, and can’t be seen by taking quotients. The statement of the theorem is that any nontrivial finite simple group is isomorphic to (at least) one of:

- A finite cyclic group of prime order.
- An alternating group A_n, for n at least 5.
- A group of Lie type – there are 16 infinite families of these, coming from linear algebra over finite fields.
- A sporadic group – there are 26 of these, and most are constructed from symmetries of exceptional combinatorial structures.

The classification theorem was announced around 1981, but the consensus seems to be that the proof was finished in 2004 when Aschbacher and Smith published their massive two-volume book on quasi-thin groups.

One of the big mysteries of this classification is how the sporadic groups fit in. I have heard of some proposals floating around concerning algebraic structures that are in some sense simple almost-groups, and the hope is that one might simplify the proof of the classification by cataloguing these structures in a functorial way and picking out the ones that are actually simple groups. Unfortunately, I haven’t heard of significant progress in this direction. Jared Weinstein once asked me if some of them are exceptional algebraic groups over F_1, but I don’t think the question is well-defined yet.

Moonshine concerns the largest of these sporadic groups, called the monster. Fischer and Griess independently conjectured that it existed in 1973, and Griess gave a rather complicated construction in 1980. Before the monster was constructed, several facts about this group were already established: its order was known, several other sporadic groups were known to be contained as subquotients, and in 1978, Fischer, Livingstone, and Thorne computed the 194 by 194 character table. The first few irreducible representations have dimension 1, 196883, 21296876, and 842609326.

On the other side of moonshine lies the theory of modular functions. I gave an introduction in this post. Essentially, these are holomorphic functions on the complex upper half-plane that are invariant under large discrete subgroups of $SL_2(\mathbb{R})$, and they often give nice invariants of diagrams of elliptic curves. For example, the quotient of the upper half plane by the group $\Gamma_0(2)$ classifies ordered pairs of elliptic curves, equipped with a 2-isogeny (aka double cover homomorphism) between them. The function $(\eta(\tau)/\eta(2\tau))^{24}$ is invariant under this group of transformations, and therefore gives an invariant for these pairs. The normalizer in $PSL_2(\mathbb{R})$, called $\Gamma_0(2)+$, is given by a semidirect product with the Fricke involution $\tau \mapsto \frac{-1}{2\tau}$, and the resulting quotient space classifies unordered pairs of curves related by dual 2-isogenies. (Incidentally, this moduli problem comes up in Kapustin-Witten for B,C,F for some reason I cannot understand.)
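
If you want to see the Fricke involution in action, here is a quick numerical sketch (the test point is arbitrary, and the eta function is computed naively from its product formula): the eta quotient $t(\tau) = (\eta(\tau)/\eta(2\tau))^{24}$ is invariant under $\Gamma_0(2)$, but the Fricke involution sends $t$ to $2^{12}/t$, so one has to symmetrize to get something invariant under the full normalizer.

```python
# Sketch (numerical, arbitrary test point): the eta quotient
# t(tau) = (eta(tau)/eta(2 tau))^24 satisfies t(tau) * t(-1/(2 tau)) = 2^12,
# i.e., the Fricke involution sends t to 4096/t.
import cmath

def eta(tau, terms=200):
    """Dedekind eta via its product formula (valid for Im(tau) > 0)."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0
    for n in range(1, terms):
        prod *= 1 - q ** n
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

def t(tau):
    return (eta(tau) / eta(2 * tau)) ** 24

tau = 0.3 + 0.9j
fricke = -1 / (2 * tau)
print(abs(t(tau) * t(fricke) - 4096))  # numerically tiny
```

Symmetrizing, the combination $t + 4096/t$ is then invariant under the Fricke involution as well as $\Gamma_0(2)$.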

The most important modular function is Dedekind’s j-function, which classifies elliptic curves. It is invariant under $SL_2(\mathbb{Z})$, and in particular, translation by integers, so it has a Fourier expansion. The easiest way to compute the expansion is to use modular forms of higher weight: $j = E_4^3/\Delta$, where $E_4$ is the weight 4 Eisenstein series and $\Delta = q\prod_{n \geq 1}(1-q^n)^{24}$. Writing the Fourier series in terms of $q = e^{2\pi i \tau}$, we see that the numerator has all nonnegative coefficients, since it is the generating function for lattice vectors in the product of three copies of the E_8 lattice. The reciprocal of the denominator also has nonnegative coefficients, since it is a shifted generating function for partitions into 24 buckets. $j$ therefore has all nonnegative coefficients, and one might ask if these coefficients count something interesting (other than some combination of buckets and lattices).
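
If you don’t believe the nonnegativity argument, you can compute the expansion directly. Here is a short sketch in Python (exact integer series arithmetic; the truncation order is arbitrary) that builds $E_4$ and $\Delta$ from their standard q-expansions and divides:

```python
# Sketch: compute the q-expansion of j = E4^3 / Delta by exact truncated
# power series arithmetic (lists of integer coefficients of q^0..q^{N-1}).
N = 8

def mul(a, b):
    """Multiply two truncated power series."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Invert a power series with constant term 1."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

# E4 = 1 + 240 * sum sigma_3(n) q^n, the theta function of the E8 lattice.
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# eta24 = prod (1 - q^n)^24, so Delta = q * eta24.
eta24 = [1] + [0] * (N - 1)
for n in range(1, N):
    for _ in range(24):
        eta24 = mul(eta24, [1] + [0] * (n - 1) + [-1] + [0] * (N - n - 1))

# j_shifted[n] is the coefficient of q^(n-1) in j, because of the q^(-1)
# coming from Delta.
j_shifted = mul(mul(E4, mul(E4, E4)), inv(eta24))
print(j_shifted[:4])  # [1, 744, 196884, 21493760]
```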

McKay’s observation was that these coefficients were relatively straightforward combinations of the dimensions of the irreducible representations of the monster. 196884 = 196883+1, 21493760 = 21296876 + 196883 + 1, and 864299970 = 842609326 + 21296876 + 2*196883 + 2. This suggests that there is a nice infinite dimensional graded representation of the monster, whose graded dimension is given by the Fourier expansion of j. The constant term 744 can be safely removed without destroying the invariance of the function.
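
These decompositions are easy to check by hand, but here is a direct verification anyway (the dimensions are the ones quoted above):

```python
# The decompositions above, checked against the dimensions of the smallest
# monster irreducible representations.
dims = [1, 196883, 21296876, 842609326]
assert 196884 == dims[1] + dims[0]
assert 21493760 == dims[2] + dims[1] + dims[0]
assert 864299970 == dims[3] + dims[2] + 2 * dims[1] + 2 * dims[0]
print("all three decompositions check out")
```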

McKay, Thompson, Conway, and Norton did some additional calculations, looking at the graded traces of nontrivial elements in these representations. An element in conjugacy class 2A yields a series that coincides with the unique $\Gamma_0(2)+$-invariant function with a simple pole and no constant term. Similarly, an element in conjugacy class 2B yields $(\eta(\tau)/\eta(2\tau))^{24} + 24$, which is invariant under $\Gamma_0(2)$. After looking at all of the elements, Conway and Norton concluded that the series seemed to have some special properties in common:

- For any element g, the corresponding series is the Fourier expansion of some modular function invariant under a group of Möbius transformations that contains (and normalizes) $\Gamma_0(N)$, for N a multiple of |g| such that N divides 12|g|.
- The modular function is genus zero, i.e., the quotient of the upper half plane by the invariance group is analytically isomorphic to a complex line, possibly missing some points, and the function realizes one such isomorphism.
- The series satisfies replication formulas. This yields a complicated set of recurrence relations between the coefficients. The precise definition is a bit complicated, but it is given in section 4 of my paper. Norton showed that replicable functions are uniquely determined by their first 25 coefficients.
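
The simplest instance of replication can be made concrete. Writing $f = j - 744 = q^{-1} + \sum_{n \geq 1} a_n q^n$, the level 2 case is equivalent to the classical identity $f(2\tau) + f(\tau/2) + f((\tau+1)/2) = f(\tau)^2 - 2a_1$ (this is my paraphrase of the classical modular equation, not the general definition from the paper). On q-expansions it says $f(q^2) + 2\sum_n a_{2n}q^n = f(q)^2 - 2a_1$, which the following sketch checks with exact arithmetic:

```python
# Sketch: verify the level 2 replication identity for f = j - 744 on
# q-expansions, using exact integer series arithmetic.
N = 12

def mul(a, b):
    """Multiply truncated power series (coefficients of q^0..q^{N-1})."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Invert a power series with constant term 1."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

# j = E4^3 / Delta; jq[n] is the coefficient of q^(n-1) in j.
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]
eta24 = [1] + [0] * (N - 1)
for n in range(1, N):
    for _ in range(24):
        eta24 = mul(eta24, [1] + [0] * (n - 1) + [-1] + [0] * (N - n - 1))
jq = mul(mul(E4, mul(E4, E4)), inv(eta24))

a = {n: jq[n + 1] for n in range(1, N - 1)}  # coefficients of f = j - 744

for m in range(1, (N - 2) // 2):
    # coefficient of q^m in f(q^2) + 2 * sum a_{2n} q^n ...
    lhs = (a[m // 2] if m % 2 == 0 else 0) + 2 * a[2 * m]
    # ... versus the coefficient of q^m in f(q)^2 (the -2*a_1 only shifts q^0)
    rhs = 2 * a[m + 1] + sum(a[i] * a[m - i] for i in range(1, m))
    assert lhs == rhs, m
print("level 2 replication identity verified")
```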

The monstrous moonshine conjecture asserts that there is a representation of the monster whose graded traces are the functions they had enumerated, and when Conway and Norton formulated this conjecture, they suggested that replicability could be a key to solving it. This turned out to be true in the following sense: Using the graded representation of the monster (called the moonshine module) constructed by I. Frenkel, Lepowsky, and Meurman in 1988, Borcherds showed that the traces satisfied a strong form of replication called complete replicability, for which the functions are determined by the first seven coefficients, and he matched these coefficients with those found by Conway and Norton.

This final computational step in Borcherds’s proof was criticized by a few mathematicians as a conceptual gap, and about ten years ago, results of Kozlov, Cummins, and Gannon showed in a noncomputational way that completely replicable functions are either genus zero or have a highly degenerate form. One is still left with the original question of how non-complete replication fits into this picture, and it turns out that this is also relevant to moonshine, because there is a generalization of the moonshine conjecture that doesn’t yield completely replicable functions.

Recall from above that the McKay-Thompson series for the conjugacy class 2A is $q^{-1} + 4372q + 96256q^2 + 1240002q^3 + \cdots$. The centralizer of an element in conjugacy class 2A is isomorphic to 2.B, the nontrivial central extension of the baby monster sporadic group. If we examine the character table for 2.B, we find that the smallest irreducible representations have dimension 1, 4371, 96255, 96256, 1139374. Much like before, we find that 4372 = 4371+1, 96256 = 96256 or 96255+1, and 1240002 = 1139374 + 96255 + 4371 + 2. One might suspect that there is a nice graded representation of 2.B whose dimension is given by the McKay-Thompson series. If we examine traces of elements in these combinations of representations, we again find that they agree with the lowest order terms in the Fourier expansions of genus zero modular functions.
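
As a sanity check, both the series and the decompositions can be verified exactly. The presentation of the 2A series as $t + 4096/t + 24$ with $t = (\eta(\tau)/\eta(2\tau))^{24}$ is standard, though the reader should treat the normalization here as my reconstruction rather than something from the paper:

```python
# Sketch: compute the 2A McKay-Thompson series as t + 4096/t + 24, where
# q*t = prod over odd n of (1 - q^n)^24, and check the 2.B decompositions.
N = 8

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

qt = [1] + [0] * (N - 1)
for n in range(1, N, 2):
    for _ in range(24):
        qt = mul(qt, [1] + [0] * (n - 1) + [-1] + [0] * (N - n - 1))
qt_inv = inv(qt)

def coeff(m):
    """Coefficient of q^m in t + 4096/t + 24, for 0 <= m <= N-2."""
    return qt[m + 1] + (4096 * qt_inv[m - 1] if m >= 1 else 24)

print([coeff(m) for m in range(4)])  # [0, 4372, 96256, 1240002]

dims = [1, 4371, 96255, 96256, 1139374]  # smallest irreducibles of 2.B
assert coeff(1) == dims[1] + dims[0]
assert coeff(3) == dims[4] + dims[2] + dims[1] + 2 * dims[0]
```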

This sort of moonshine for subgroups of the monster was initiated in Conway and Norton’s 1979 moonshine paper, developed computationally in Queen’s 1980 thesis, and finally codified into a reasonably coherent form by Norton in 1987. The generalized moonshine conjecture asserts the existence of a function Z that takes a pair of commuting elements of the monster and returns a holomorphic function on the upper half plane, satisfying the following properties:

- Z is invariant under simultaneous conjugation on the inputs.
- Each output function is either a genus zero modular function or constant.
- If we fix g and vary h, the Fourier expansions of the functions are characters of a representation of some central extension of the centralizer of g in the monster.
- For any $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in SL_2(\mathbb{Z})$, $Z(g^a h^c, g^b h^d, \tau)$ is proportional to $Z(g, h, \frac{a\tau+b}{c\tau+d})$, i.e., they are constant multiples (the ambiguity was later revised to 24th roots of unity).
- $Z(g, h, \tau) = j(\tau) - 744$ if and only if g=h=1.

The last property prevents us from cheating and using trivial representations. Note that the case g=1 is just the original moonshine conjecture.

Much like the original moonshine conjecture, one of the hard parts here is coming up with a natural graded representation of a large finite group, and it turns out that here, as before, there are techniques motivated by physics that can help. Moonshine and generalized moonshine have a connection to conformal field theory through vertex operator algebras (but I will have difficulty explaining it precisely, since I don’t know what conformal field theory is). The moonshine module was originally constructed using the theory of vertex operators (which have their origin in physics in a way that I still don’t understand), and later this construction was refined to give a vertex operator algebra structure. Vertex operator algebras are said to encode much of a two dimensional conformal field theory, in the sense that the multiplication tells us how certain insertions behave as they approach each other. If we think of a quantum field as an operator-valued distribution on a manifold, it is not reasonable to define a multiplication everywhere, since distributions like the delta function don’t multiply well, but vertex operator algebras can be viewed as the next best thing.

Here is a vaguely geometric picture: suppose I have a really small disc D, and a vector bundle on it. D is so small that it is made of two points: the special point, which is closed and makes up the center of the disc, and the generic point, which is dense and open. In the language of algebraic geometry, D is Spec C[[z]], the closed point is Spec C, and the generic point is Spec C((z)). The vector bundle has special fiber V for some vector space, which will be our vertex algebra, and generic fiber V((z)), i.e., formal Laurent series with coefficients in V. Ordinary algebras have multiplication given by a map $V \otimes V \to V$, but vertex operator algebras have multiplication given by a map $V \otimes V \to V((z))$, so multiplication takes two sections of the special fiber and produces a section of the generic fiber in a bilinear way. We also demand that this multiplication is almost commutative and associative, in the sense that multiplying three sections of the special fiber in three different ways (namely (AB)C, B(AC), and A(BC)) yields the same section of the generic fiber of the small polydisc D x D (the condition is actually a bit stronger than that). There are other axioms that are harder to motivate, coming from an action of the Virasoro algebra (which produces formal coordinate changes on the generic point).

Summing up, the moonshine module has a vertex operator algebra structure, and the monster is the automorphism group of this structure. Using this fact, together with some infinite-dimensional algebraic manipulations motivated by physics, Borcherds proved the moonshine conjecture. One might hope that generalized moonshine also can be attacked this way, but the representations of centralizers need a physical interpretation. This interpretation was produced by Dixon, Ginsparg, and Harvey in 1988, not long after the generalized conjecture was published. The representations are called twisted Hilbert spaces of an orbifold conformal field theory, and the paper has some nice drawings of elliptic curves as identification spaces of parallelograms, with edges labeled by commuting elements of the monster. You might ask, what do these pictures have to do with the conjecture?

The answer is that they arise from monodromy along a homology basis of the elliptic curve. While the partition function of an ordinary 2D conformal field theory will assign a value to any surface in a way that is invariant under conformal symmetries, the partition function of G-orbifold conformal field theory will assign a value to any G-cover of a surface, also in a conformally invariant way. The case of interest to us is an elliptic curve, for which a G-cover is a complex manifold (not necessarily connected) equipped with a faithful G-action and a map to the curve, such that the preimage of any point is a full G-orbit. This condition implies the cover is a disjoint union of elliptic curves. We can classify these G-covers by rigidifying the problem. We choose an oriented homology basis of the underlying torus, or equivalently, we view the elliptic curve as a quotient of the complex line by a lattice, and choose an oriented basis of the lattice (this is a constraint on the angle between the basis elements). The G-cover is then uniquely defined by what happens to a fixed preimage of zero as we travel along paths that represent those elements of homology (equivalently, paths to the lattice generators). This gives us a pair of elements of G, and they must commute, since we can travel along either side of a parallelogram spanned by the lattice generators, and get the same outcome. This pair of commuting elements is only defined up to conjugation in G, since we can change the choice of preimage of zero. So far, we have classified G-covers of an elliptic curve equipped with an oriented homology basis. 
Elliptic curves equipped with an oriented homology basis are parametrized by points in the complex upper half plane, so if we have a partition function for G-covers of elliptic curves, it is a function that takes monodromy along an oriented basis (given by a conjugacy class of pairs of commuting elements of G), and produces a function on the upper half-plane, in a way that is invariant under change of basis. Aside from the constant multiplier (called a phase anomaly) this is precisely the invariance demanded by the generalized moonshine conjecture.
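
To make this classification concrete, here is a toy computation with G = S_3 in place of the monster (a sketch; any small group works): enumerate all commuting pairs and collect them into orbits under simultaneous conjugation.

```python
# Sketch: G-covers of an elliptic curve with a chosen homology basis
# correspond to commuting pairs (g, h) in G, up to simultaneous conjugation.
# We count them for the toy case G = S_3.
from itertools import permutations

G = list(permutations(range(3)))  # S_3 as permutation tuples

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def invert(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj(k, g):
    return compose(compose(k, g), invert(k))

commuting = [(g, h) for g in G for h in G if compose(g, h) == compose(h, g)]
orbits = {frozenset((conj(k, g), conj(k, h)) for k in G) for g, h in commuting}

print(len(commuting), len(orbits))  # 18 commuting pairs, 8 orbits
```

So a generalized moonshine function for S_3 would have 8 essentially distinct inputs, before accounting for the $SL_2(\mathbb{Z})$ action.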

We still need to understand what twisted Hilbert spaces are in the language of vertex operator algebras. They are irreducible twisted modules. For ordinary algebras, a module structure is given by a map $A \otimes M \to M$, and for vertex operator algebras, it is a map $V \otimes M \to M((z))$ (and they both need to satisfy some kind of compatibility with multiplication). Twisted module structures can occur here because the generic point (i.e., the punctured disc) is not simply connected. In particular, the connected finite (étale) covers of Spec C((z)) have the form Spec C((t)), where t^N = z. The twisted module structure itself is given by a map $V \otimes M \to M((z^{1/N}))$, and it has to satisfy a compatibility with multiplication in V together with an additional monodromy condition. The form of this monodromy condition is a source of some debate, since g-twisted modules for some authors are the same as $g^{-1}$-twisted modules for others. If we take the Fourier expansion of $Z(g, h, \tau)$ to be the trace of h acting on the g-twisted module of the moonshine module, then the generalized moonshine conjecture actually tells us the “correct” definition: If $gv = e^{2\pi i k/N} v$, then the power series $Y(v, z)m$ lies in $z^{k/N} M((z))$ for all $m \in M$.

The last piece of the connection to physics is the question of why the Fourier expansion of a torus partition function is given by the dimension (or more generally the trace of an automorphism) of a graded vector space. I cannot articulate a good answer to this now, but I can wave my hands and mumble phrases like “conformal blocks” and “degenerating curves”. The moduli space of elliptic curves is not compact, but if we add a point at infinity to parametrize a nodal cubic curve, we get a compact space parametrizing “generalized elliptic curves”. G-covers of the cubic curve are disjoint unions of Néron n-gons, which are just n copies of the projective line arranged in a cycle, intersecting transversely. Restricting the map to the smooth locus gives a cover of the multiplicative group, such that the degree on each connected component is equal to the order of g (barring phase anomalies). From a physics standpoint, the normalization map should push together dual irreducible twisted modules on the projective line, and an expansion along the formal deformation should yield characters according to a grading arising from the Virasoro action.

Now that we have objects that we want, we still have the problem of proving the conjecture. In 1997, Dong, Li, and Mason proved that it holds (up to a constant multiple) for the case that g and h generate a cyclic group. In particular, they showed that the moonshine module has a unique irreducible g-twisted module for each g, and its graded dimension is some constant multiple of what we expect. In 2003, Gerald Hoehn showed that it holds when g lies in the conjugacy class 2A. The rest of the cases are still wide open.

My own involvement in this problem started in 2005, when Borcherds suggested a strategy for solving it that was both brilliant and doomed to failure (for now). In his proof of the moonshine conjecture, he constructs a Lie algebra with an action of the monster, and forms twisted denominator formulas with respect to that action that yield the complete replicability of characters that he needs. It turns out that you can construct the monster Lie algebra complete with the monster action by reverse-engineering from the twisted denominator formulas, and you don’t need to bother with the vertex operator algebra or the no-ghost theorem. The idea was to do the analogous process for all conjugacy classes of pairs of commuting elements of the monster, and here some problems crept up. No one has classified the conjugacy classes of commuting pairs, and I think there are over 10000 of them. Also, the character tables of centralizers and their central extensions are not all known. I’ve heard rumors that these problems are “too hard” for our current technology, although Moore’s law may eventually rectify that.

Despite this, my interest was piqued, and I set about constructing some Lie algebras by other methods. If I can hammer down the last details, they should appear in some sequels to the current paper, which is centered around the question of what one can do once one has such a Lie algebra. The answer is that the characters are genus zero functions, and this actually subsumes the genus zero question for replicable functions (I only realized this last month when I tried and failed to prove that one of my hypotheses was equivalent to complete replicability). The machinery for the proof came about in a rather circuitous way. Jacob Lurie came by Berkeley to give a talk at the topology seminar in fall 2005, and at tea I asked him if there was an elliptic cohomology analogue of the Atiyah-Segal exponential formula in K-theory. He said he wasn’t sure, but it reminded him of some work of Rezk on the topological logarithm. Later, he emailed me with some formulas involving equivariant Hecke operators, along with a lot of other topological information that I won’t explain here. These equivariant Hecke operators ended up being the key to the genus zero question, but I also needed some ingredients from Kozlov’s master’s thesis and a paper by Cummins and Gannon to put it together with modular equation theory. Kozlov’s master’s thesis was both really useful and impressively difficult to find, and I only managed to obtain a copy in the summer after I graduated.

The question of the day is: how do (equivariant) Hecke operators arise in (orbifold) conformal field theory? They came up in Witten’s recent 3D gravity work, but not in a form that I can understand.

Since I saw your paper when it first came up on the arxiv I’ve been meaning to give you mad props for citing parts II and V, but not parts III and IV. That is style.

That was a great post!

As there are screen-reading-averse people interested, do you think in the future it would be possible to have a (link to a) printer friendly version? This time I solved the problem by cutting, pasting and then editing the result as a TeX file, but I’m sure there are smarter ways.

Suggestions by other readers are gratefully welcome.

And all this time I thought “monstrous moonshine” was a reference to liquor! (I’m a few years younger than you, I think.) It almost made sense, too, in that coincidences such as 196884 = 196883 + 1 are the sort of thing many would dismiss as “illegal” to look at in serious mathematical work.

scott, thanks! probably the best post i read in a long,long time.

@estraven : just click ‘print’ from your browser. the ssb-print-version is pretty neat.

About exceptional groups being Lie groups over F_1, it doesn’t work (at least) if the Lie groups are the usual ones and points over F_1 are taken to mean the corresponding Weyl group. Even if those are usually close to simple for simple Lie groups, they do not involve the exceptional simple finite groups (e.g., for E_8, one gets, up to two composition factors of order 2, the orthogonal group of dimension 8 over F_2).

Oh no, I forgot to say the order of the monster! As you may know, this sort of omission is a grave social error in any discussion of moonshine. The monster has about $8 \times 10^{53}$ elements, and this is surprisingly close to the number of protons in Jupiter (the link gives a decent upper bound).

Fantastic post, Scott!

Emmanuel,

I think Jared was looking for something more bizarre and twisted, along the lines of the Ree and Suzuki series. I’m pretty sure he was aware of the Weyl group interpretation of Chevalley groups.

I don’t like to publicly speculate about nonexistent theories, but I feel like you’re right that passing to F_1 may be the wrong direction, since it seems to involve degenerating from vector spaces with forms to structured sets. The most natural presentation of the monster to date is given by a vertex operator algebra, which has more structure than a typical vector space, and I think this suggests that we should be looking to field theories and objects of a higher-categorical nature to develop sporadic groups. Haynes Miller has suggested that I look into p-compact groups, but I’d like to get some papers out the door before taking a flying leap into homotopy land.

I just remembered something that forces me to recant some of my philosophy. There is more than one very nice reflection group presentation of the bimonster, which is the wreath product of the monster with $\mathbb{Z}/2\mathbb{Z}$. The first one is the $Y_{555}$ (also called $M_{666}$) presentation conjectured by Conway and proved by Ivanov and Norton. This comes from a Coxeter diagram with 16 generators arranged in a symmetric Y shape (glue some A5s to a D4 on the leaves). There is an additional relation that sets a certain 10th power of a word to be trivial. Conway and Simons showed that one can add 10 more to get 26 generators, connected according to the incidence graph of the projective plane of order 3, and the quotient by some additional relations arising from 12-cycles in the diagram also yields the bimonster. Tathagata Basak and Daniel Allcock have some really neat work relating this to complex hyperbolic reflection groups, and a conjectural monster manifold.

The bimonster seems to come up in a physical context as well. Vertex operator algebras describe only one half of a conformal field theory, associated to holomorphic fields. If we add an antiholomorphic copy of the moonshine module to the holomorphic one, there is an action by two copies of the monster, together with a symmetry that switches them, and this yields the bimonster. It also shows up in a paper by Craps, Gaberdiel, and Harvey called Monstrous branes, which sounds like the title of a bad horror movie.

“Atiyah-Segal exponential formula in K-theory”

What formula are you referring to here?

As for moonshine = liquor, the usual history of this conjecture includes Andy Ogg’s offer of a bottle of Jack Daniels to whoever could explain why the primes with a certain “genus zero” property were exactly the ones that divide |Monster|.

The formula is: $\sum_{n \geq 0} \lambda^n(V)\, t^n = \exp\left(\sum_{k \geq 1} (-1)^{k-1} \frac{\psi^k(V)}{k} t^k\right)$. The more familiar form uses the symmetric algebra on the left, and removes the minus sign on the right. I remember seeing a paper that referred to it by this name before, but I don’t remember which paper, and Google doesn’t seem to help. People familiar with the splitting principle have a tendency to express surprise that it has a name at all. For more general complex-oriented cohomology theories, I think the right side comes from taking a sum over isogenies of the corresponding formal group, but it isn’t clear what you’re supposed to get on the left. If you think of elliptic cohomology as some kind of shadow of conformal field theory, then the left side in this case seems to come from BRST cohomology.
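
On a sum of line bundles, $\lambda^n$ has character the elementary symmetric function $e_n$ and $\psi^k$ the power sum $p_k$, so the exponential formula relating exterior powers to Adams operations reduces to Newton’s identity $\sum_n e_n t^n = \exp(\sum_{k \geq 1} (-1)^{k-1} p_k t^k / k)$, which can be checked exactly (the “Chern roots” 2, 3, 5 below are made up):

```python
# Sketch: check Newton's identity between elementary symmetric functions and
# power sums by exact rational arithmetic on truncated series in t.
import math
from fractions import Fraction
from itertools import combinations

xs = [Fraction(2), Fraction(3), Fraction(5)]  # made-up "Chern roots"
N = 6  # truncation order in t

# Left side: elementary symmetric functions e_n, the characters of lambda^n(V).
lhs = [sum(Fraction(math.prod(c)) for c in combinations(xs, n)) for n in range(N)]

# Right side: exp of the power-sum series, with p_k the character of psi^k(V).
p = [sum(x ** k for x in xs) for k in range(N)]
L = [Fraction(0)] + [Fraction((-1) ** (k - 1)) * p[k] / k for k in range(1, N)]
rhs = [Fraction(1)] + [Fraction(0)] * (N - 1)
for n in range(1, N):
    # coefficient recursion for exp: n * E_n = sum_k k * L_k * E_{n-k}
    rhs[n] = sum(k * L[k] * rhs[n - k] for k in range(1, n + 1)) / n

print(lhs == rhs)  # True
```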

I thought it would be appropriate to use John Conway’s narrative, since he came up with the name “monstrous moonshine”. As far as I know, the association between the bottle of Jack Daniels and moonshine came somewhat later. Richard said that he was offered the bottle, but he declined because he had only answered half of the question. Also, I don’t think he likes Jack.

Why isn’t the post showing up for me? I can only see the comments.

Scott,

The formula you give is (pretty nearly) the original definition of the Adams operations (the \psi^n) given by Adams; I would think of it as Adams’ formula. There is an old paper by Atiyah and Segal on “Exponential isomorphisms for \lambda-rings”, which may be where this terminology comes from, but their exponential map is given by a different (and very ad hoc) formula.

Charles,

Thanks for explaining that. I think I will remove that reference in the future.

Thanks everyone for the compliments (including Noah). Sometimes I wonder if the math I like is at all interesting to others, and this sort of response is comforting.

I’m having the same problem as Anon above. I can’t read this, by all accounts, fantastic post — I can only read the comments. Has the post been eaten?

I can still read it here. My wild guess is that the wordpress database occasionally hiccups, or perhaps it times out on long posts when it’s under heavy load.

Hmm, it’s vanished for me, too.

Strange… I can see it when I use my office computer, but not from home.

I can see it via the RSS feed, but not via the blog itself. Are there some strange permissions set?

Adblock Plus will remove the post. Turn it off for this site.

Soren, thanks. That works!