What makes the Monster Lie Algebra special?

This is a post I’d been meaning to write for several years, but I was finally prompted to action after talking to some confused physicists. The Monster Lie Algebra, as a Lie algebra, has very little structure – it (or rather, its positive subalgebra) is quite close to being free on countably infinitely many generators. In addition to its Lie algebra structure, it has a faithful action of the monster simple group by Lie algebra automorphisms. However, the bare fact that the monster acts faithfully on the Lie algebra by diagram automorphisms is not very interesting: the almost-freeness means that the diagram automorphism group is more or less the direct product of a sequence of general linear groups of unbounded rank, and the monster embeds in any such group very easily.

The first interesting property of the Monster Lie Algebra has nothing to do with the monster simple group. Instead, the particular arrangement of generators illustrates a remarkable property of the modular J-function.

The more impressive property is a *particular* action of the monster that arises functorially from a string-theoretic construction of the Lie algebra. This action is useful in Borcherds’s proof of the Monstrous Moonshine conjecture, as I mentioned near the end of a previous post, and it is useful precisely because it satisfies a strong compatibility condition relating the module structures of different root spaces.

Continue reading

Hall algebras are Grothendieck groups

I’ve been attending a seminar/class run by Nick Proudfoot preparing for his workshop this summer on canonical bases. In conversations with Nick and graduate students, there’s been some confusion about the relationship between Hall algebras and Grothendieck groups. Obviously, if you read the definitions you’ll see they are not the same, but the idea seems to be floating around that there is some close relationship between them. At some point, I decided writing a blog post on the subject would be a good idea. What are Hall algebras?

The Hall algebra of a category is the Grothendieck group of constructible sheaves/perverse sheaves on the moduli stack of objects in the category. The Hall algebra is an algebra because the constructible derived category of the moduli stack of objects in an abelian category is monoidal in a canonical way.
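For comparison with the way Hall algebras are usually introduced (the formula below is the standard one; the notation is mine): for a suitable abelian category over a finite field, the Hall algebra has a basis given by isomorphism classes [M] of objects, with product

[M] \cdot [N] = \sum_{L} g_{M,N}^{L} \, [L], \qquad g_{M,N}^{L} = \# \{ N' \subseteq L : N' \cong N, \ L/N' \cong M \}.

Roughly speaking, taking trace of Frobenius turns the sheaf-theoretic product on the moduli stack into this counting product.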

To my mind, this is what makes Hall algebras worth studying, yet it’s oddly ignored in the literature on them (as far as I know; people should feel free to correct me). For example, it’s never mentioned in Schiffmann’s Lectures on Hall Algebras, the closest thing the subject has to a standard reference. Continue reading

Representation theory course

Well, like David, I am teaching a course this semester and writing up notes.

My course is on representation theory. More specifically, I hope to cover the basics of the representation theory of complex reductive groups, including the Borel-Weil theorem. In my class, I have started from the theory of compact groups, for two reasons. First, that is the way I learned the subject from my advisor Allen during a couple of great courses. Second, I am following up on a course last semester taught by Eckhard Meinrenken on compact groups.

Feel free to take a look at the notes on the course webpage and give me any feedback.

Very soon, I will reach the difficult task of explaining complexification of compact groups. As I complained about in my previous post, I don’t feel that this topic is covered properly in any source, so I am struggling a bit with it. Anyway, the answers to that post did help me out, so we will see what happens.

Passage from compact Lie groups to complex reductive groups

Once again, I’m preparing to teach a class and needing some advice concerning an important point. I’m teaching a course of representation theory as a followup to an excellent course on compact Lie groups, taught this semester by Eckhard Meinrenken. In my class, I would like to explain the transition from compact Lie groups to complex reductive groups, as a first step towards the Borel-Weil theorem.

A priori, compact connected Lie groups and complex reductive groups seem to have little in common and live in different worlds. However, there is a 1-1 correspondence between these objects — for example U(n) and GL_n(\mathbb{C}) are related by this correspondence. Surprisingly, it is not that easy to realize this correspondence.

Let us imagine that we start with a compact connected Lie group K and want to find the corresponding complex algebraic group G. I will call this process complexification.

One approach to complexification is to first show that K is in fact the real points of a real reductive algebraic group. For any particular K this is obvious — for example S^1 = U(1) is described by the equation x^2 + y^2 = 1. But one might wonder how to prove this without invoking the classification of compact Lie groups. I believe that one way to do this is to consider the category of smooth finite-dimensional representations of the group and then apply Tannakian reconstruction to produce an algebraic group. This is a pretty argument, but perhaps not the best one to explain in a first course. A slightly more explicit version would be to simply define G to be Spec (\oplus_{V} V \otimes V^*) where V ranges over the irreducible complex representations of K (the Hopf algebra structure here is slightly subtle).
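As a quick sanity check on that last formula (a worked example of my own, not taken from any of the sources above), take K = U(1) . Its irreducible complex representations are the characters z \mapsto z^n for n \in \mathbb{Z} , so each V \otimes V^* is the one-dimensional span of the function z^n , and

\oplus_{V} V \otimes V^* = \oplus_{n \in \mathbb{Z}} \mathbb{C} z^n = \mathbb{C}[z, z^{-1}], \qquad \mathrm{Spec}\, \mathbb{C}[z, z^{-1}] = \mathbb{C}^{\times} = GL_1(\mathbb{C}),

which is indeed the complexification of U(1) = S^1 .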

In fact, not only is every compact Lie group real algebraic, but every smooth map of compact Lie groups is actually algebraic. So the category of compact Lie groups embeds into the category of real algebraic groups. For a precise statement along these lines, see this very well written MO answer by BCnrd.

A different approach to complexification is pursued in Allen Knutson’s notes and in Sepanski’s book. Here the complexification of K is defined to be any G such that there is an embedding K \subset G(\mathbb{C}) , such that on Lie algebras \mathfrak{g} = \mathfrak{k} \otimes_{\mathbb{R}} \mathbb{C} . (Actually, this is Knutson’s definition; in Sepanski’s definition we first embed K into U(n) .) This definition is more hands-on, but it is not very obvious why such a G is unique, without some structural theorems describing the different groups G with Lie algebra \mathfrak{g} .

At the moment, I don’t have any definite opinion on which approach is more mathematically/pedagogically sound. I just wanted to point out something which I have accepted all my mathematical life, but which is still somewhat mysterious to me. Can anyone suggest any more a priori reasons for complexification?

A (partial) explanation of the fundamental lemma and Ngo’s proof

I would like to take Ben up on his challenge (especially since he seems to have solved the problem that I’ve been working on for the past four years) and try to explain something about the Fundamental Lemma and Ngo’s proof.  In doing so, I am aided by two expository talks I’ve been to on the subject — by Laumon last year and by Arthur this week.

Before I begin, I should say that I am not an expert in this subject, so please don’t take what I write here too seriously and feel free to correct me in the comments.  Fortunately for me, even though the Fundamental Lemma is a statement about p-adic harmonic analysis, its proof involves objects that are much more familiar to me (and to Ben).  As we shall see, it involves understanding the summands occurring in a particular application of the decomposition theorem in perverse sheaves and then applying trace of Frobenius (stay tuned until the end for that!).

First of all I should begin with the notion of “endoscopy”.  Let G, G' be two reductive groups and let \hat{G}, \hat{G}' be their Langlands duals.  Then G' is called an endoscopic group for G if \hat{G}' is the fixed point subgroup of an automorphism of \hat{G} .  A good example of this is to take G = GL_{2n} , G' = SO_{2n+1} .  At first glance these groups have nothing to do with each other, but you can see they are endoscopic since their dual groups are GL_{2n} and Sp_{2n} , and Sp_{2n} \hookrightarrow GL_{2n} is the fixed point subgroup of the automorphism g \mapsto J (g^T)^{-1} J^{-1} of GL_{2n} (where J is the matrix of the symplectic form).

As part of a more general conjecture called Langlands functoriality, we would like to relate the automorphic representations of G to the automorphic representations of all possible endoscopic groups G' .  Ngo’s proof of the Fundamental Lemma completes the proof of this relationship.

Continue reading

A hunka hunka burnin’ knot homology

One of the conundra of mathematics in the age of the internet is when to start talking about your results. Do you wait until a convenient chance to talk at a conference? Wait until the paper is ready to be submitted to the arXiv (not to mention the question of when things are ready for the arXiv)? Until your paper is accepted? Or just until you’re confident you’ve disposed of any major errors in your proofs?

This line is particularly hard to walk when you think the result in question is very exciting. On one hand, obviously you are excited yourself, and want to tell people your exciting results (not to mention any worries you might have about being scooped); on the other, the embarrassment of making a mistake is roughly proportional to the attention that a result will grab.

At the moment, as you may have guessed, this is not just theoretical musing on my part. Rather, I’ve been working on-and-off for the last year, but most intensely over the last couple of months, on a paper which I think will be rather exciting (of course, I could be wrong). Continue reading

SF&PA: Subfactors = finite dimensional simple algebras

Since my next post on Scott’s talk concerns the construction of a new subfactor, I wanted to give another attempt at explaining what a subfactor is. In particular, a subfactor is just a finite-dimensional simple algebra over C!

Now, I know what you’re thinking: doesn’t Artin-Wedderburn say that finite dimensional simple algebras over C are just matrix algebras? Yes, but those are just the finite dimensional simple algebras in the category of vector spaces! What if you had some other C-linear tensor category and a finite dimensional simple algebra object in that category?

Let me start with an example (very closely related to Scott Carnahan’s pirate post).
Continue reading

Generalized moonshine I: Genus zero functions

This is a plug for my first arXiv preprint, 0812.3440. It didn’t really exist as an independent entity until about a month ago, when I got a little frustrated writing a larger paper and decided to package some results separately. It is the first in a series of n (where n is about five right now), attacking the generalized moonshine conjecture. Perhaps the most significant result is that nontrivial replicable functions of finite order with algebraic integer coefficients are genus zero modular functions. This answers a question that has been floating around the moonshine community for about 30 years.

Moonshine originated in the 1970s, when some mathematicians noticed apparent numerical coincidences between the theory of modular functions and the theory of finite simple groups. Most notable was McKay’s observation that 196884=196883+1, where the number on the left is the first nontrivial Fourier coefficient of the modular function j, which classifies complex elliptic curves, and the numbers on the right are the dimensions of the smallest irreducible representations of the largest sporadic finite simple group, called the monster. Modular functions and finite group theory were two areas of mathematics that were not previously thought to be deeply related, so this came as a bit of a surprise. Conway and Norton encoded the above equation together with other calculations by Thompson and themselves in the Monstrous Moonshine Conjecture, which was proved by Borcherds around 1992.

I was curious about the use of the word “moonshine” here, so I looked it up in the Oxford English Dictionary. There are essentially four definitions:

  1. Light from the moon, presumably reflected from the sun (1425)
  2. Appearance without substance, foolish talk (1468 – originally “moonshine in the water”)
  3. A base of rosewater and sugar, or a sweet pudding (1558 cookbook!)
  4. Smuggled or illegally distilled alcoholic liquor (1782)

The fourth and most recent definition seems to be the most commonly used among people I know. The second definition is what gets applied to the monster, and as far as I can tell, its use is confined to English people over 60. It seems to be most popularly known among scientists through a quote by Rutherford concerning the viability of atomic power.

I’ll give a brief explanation of monstrous moonshine, generalized moonshine, and my paper below the fold. There is a question at the bottom, so if you get tired, you should skip to that.

Continue reading

Request: Quivers and Roots

Consider two finite dimensional vector spaces A and B and a linear map \phi between them. Then we can decompose A as K \oplus R where K is the kernel of \phi and R is any subspace transverse to K. Similarly, we can write B as I \oplus C where I is the image of \phi and C is any complement to I. So we can write \phi as the direct sum of K \to 0, the identity map from R \to I, and 0 \to C. At the cost of making some very arbitrary choices, we may simplify even further and say that we can express \phi as a direct sum of three types of maps: 0 \to k, the identity map k \to k, and k \to 0 (where k is our ground field).
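To make those "very arbitrary choices" concrete, here is a minimal numerical sketch of my own (using numpy’s SVD, which is one way to produce such adapted bases — the specific matrices here are illustrative, not from any reference): for a generic 4 \times 3 map, a change of basis on each side puts \phi into the block form "identity on R, zero on K" described above.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.standard_normal((4, 3))          # a generic map A -> B

# The SVD phi = U @ diag(s) @ Vt gives orthonormal bases of A and B
# adapted to ker(phi) and im(phi).
U, s, Vt = np.linalg.svd(phi)
r = int(np.sum(s > 1e-10))                 # rank = dim R = dim I

# Rescale the image basis vectors to absorb the singular values; in
# the resulting bases, phi is the identity on the first r coordinates
# and zero on the rest (the K -> 0 and 0 -> C pieces).
D = np.diag(np.concatenate([1.0 / s[:r], np.ones(4 - r)]))
normal = D @ U.T @ phi @ Vt.T

expected = np.zeros((4, 3))
expected[:r, :r] = np.eye(r)
assert np.allclose(normal, expected)
```

For a generic 4 \times 3 matrix the rank is 3, so here K = 0 and C is one-dimensional.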

Now, suppose that we have two maps, \phi and \psi from A to B. We’ll start with the case that A and B have the same dimension. If \phi is bijective, then we can choose bases for A and B so that \phi is the identity. Once we have done that, we still have some freedom to change bases further. Assuming that k is algebraically closed, we can use this freedom to put \psi into Jordan normal form. In other words, we can choose bases such that (\phi,\psi) are direct sums of maps like

\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right), \left( \begin{smallmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 1 \\ 0 & 0 & \alpha \end{smallmatrix} \right) \right).

(Here several different values \alpha may occur in the various summands, and of course, the matrices can be sizes other than 3 \times 3.) If we don’t assume that \phi is bijective (and if we want to allow A and B to have different dimensions) we get a few more cases. But the basic picture is not much worse: in addition to the summands above, we also need to consider the maps

\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right) \right)

(for various sizes n \times (n+1), not just 2 \times 3) and the transpose of these. These three possibilities, and their direct sums, describe all pairs (\phi, \psi) up to isomorphism.
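One way to see where the Jordan blocks come from (again a numerical sketch of my own, for the generic case where \phi is invertible): after using \phi to identify A with B, the pair (\phi, \psi) is determined up to isomorphism by the conjugacy class of \phi^{-1}\psi, i.e. by its Jordan form. In particular the eigenvalues of \phi^{-1}\psi — the \alpha’s above — are invariants of the pair.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
phi = rng.standard_normal((n, n))   # generic, hence invertible
psi = rng.standard_normal((n, n))

# Arbitrary basis changes on A and on B:
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
phi2, psi2 = Q @ phi @ P, Q @ psi @ P

# inv(phi2) @ psi2 = inv(P) @ (inv(phi) @ psi) @ P is a conjugate
# matrix, so its eigenvalues (and, generically, its Jordan type)
# are unchanged by the basis changes:
ev1 = np.sort_complex(np.linalg.eigvals(np.linalg.inv(phi) @ psi))
ev2 = np.sort_complex(np.linalg.eigvals(np.linalg.inv(phi2) @ psi2))
assert np.allclose(ev1, ev2)
```

The extra n \times (n+1) summands appear precisely because \phi need not be invertible (or square), which is where the Kronecker normal form goes beyond the Jordan form.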

Now, consider the case of three maps. As the dimensions of A and B grow, so does the number of parameters necessary to describe the possible cases. Moreover, almost all cases cannot be decomposed as direct sums. More precisely, as long as \mathrm{dim}\, A/\mathrm{dim}\, B is between (3-\sqrt{5})/2 and (3+\sqrt{5})/2, the triples of maps which can be expressed as direct sums of simpler triples have measure zero in \mathrm{Hom}(A,B)^{\oplus 3}. (Where did that number (3+\sqrt{5})/2 come from? Stay tuned!) In the opinion of experts, there will probably never be any good classification of all triples of maps.

The subject of quivers was invented to systematize this sort of analysis. It’s become a very large subject, so I can’t hope to summarize it in one blog post. But I think it is fair to say that anyone who wants to think about quivers needs to start by learning the connection to root systems. So that’s what I’ll discuss here.

Continue reading