One of the famous theorems that tend to crop up in undergraduate algebra classes is the Artin-Wedderburn theorem, which says

Theorem. *Any semi-simple ring is a product of matrix algebras over division algebras. In particular, if $k$ is an algebraically closed field, any semi-simple $k$-algebra is a product of matrix algebras over $k$.*

(We say that an algebra $A$ is semi-simple if any submodule of any $A$-module has a complement, that is, if every short exact sequence of $A$-modules splits.)

Now, looking at this theorem, one might imagine that we now know a lot about finite-dimensional algebras. After all, there are only two kinds of finite-dimensional algebras, semi-simple and non-semi-simple, and we understand one of those halves quite well. Better yet, semi-simplicity is an “open” condition. If we think about the set of associative products a finite-dimensional vector space could have, the set of such products which are semi-simple is an open set in the Zariski topology, which those of us who like algebraic geometry know means it is pretty darn big, provided it is non-empty (which it is, since every vector space has a semi-simple product, as a sum of a bunch of copies of the field).
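To see the openness concretely in the smallest interesting case (a sketch over $\mathbb{C}$, say; the coordinates are mine):

```latex
% A commutative unital product on a two-dimensional space with basis
% $1, x$ is determined by the structure constants in
\[
  x^2 = \alpha + \beta x, \qquad \alpha, \beta \in k,
\]
% i.e. $A \cong k[t]/(t^2 - \beta t - \alpha)$.  This is semi-simple
% exactly when $t^2 - \beta t - \alpha$ has distinct roots, i.e. when
% the discriminant is nonzero:
\[
  \beta^2 + 4\alpha \neq 0,
\]
% a Zariski-open condition in the plane of structure constants
% $(\alpha, \beta)$.  The complement, a single closed curve, consists
% of the algebras isomorphic to $k[x]/(x^2)$.
```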

But, of course, this is ridiculous. To borrow a metaphor, dividing algebras into semi-simple and not-semi-simple is like dividing the world into bananas and non-bananas. Each finite-dimensional algebra has a unique semi-simple quotient, obtained by dividing out by the Jacobson radical (the ideal of elements which act trivially on all simple representations), but the number of different ways of attaching a Jacobson radical to a semi-simple algebra is totally intractable as a classification problem.
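The standard small example of a Jacobson radical, for concreteness:

```latex
% Take $B$ to be the upper triangular $2 \times 2$ matrices over $k$:
\[
  B = \left\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \right\},
  \qquad
  J(B) = \left\{ \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix} \right\}.
\]
% The two simple $B$-modules are one-dimensional (the matrix acts on
% each by one of its diagonal entries), so the strictly upper
% triangular matrices act trivially on all simples: $J(B)$ is the
% Jacobson radical, and the semi-simple quotient is
\[
  B / J(B) \;\cong\; k \times k .
\]
```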

Interestingly, this sort of phenomenon shows up in several places in the world of algebra: any connected linear algebraic group is an extension of a semi-simple group (which we have a pretty good handle on) by a unipotent group (which it is completely hopeless to try to classify), and any finite group is an extension of an almost semi-simple finite group (which is about as hard to understand as the classification of finite simple groups, which is pretty hard, but not entirely hopeless) by a solvable group (which you can pretty conclusively forget about classifying right now).

Now, just because the general classification is hopeless doesn’t mean that we should give up entirely on finite dimensional algebras. We just have to reduce our expectations a bit, and specialize. For reasons that I don’t have time to get to in this post, I’d like to restrict to the case where $A$ is graded by non-negative integers, with $A_0$ semi-simple, and $A$ generated over $A_0$ by $A_1$.

Now, there’s no way for such an algebra to be interesting and semi-simple. In fact, the ideal $A_{>0}$ of positively graded elements is the Jacobson radical of $A$.

Instead, we can identify a class of these algebras which “as semi-simple as possible” called Koszul algebras (the correct pronunciation of “Koszul” is a hotly debated topic, made more difficult by the fact that the mathematician in question was French, though this is clearly not a French name. It seems to be some kind of variant of the Polish word for “shirt,” and my limited knowledge of Polish pronunciation suggests it should be pronounced “KOSHool,” though “kohZOOL” seems to be more common amongst English speakers. Oh well, I’m a little anal like that. I also insist on pronouncing “Noether” as “nö-tehr,” not “nuther”).

There are a surprising number of different ways to think about the condition of Koszulity. To me, the most natural is to think about Ext’s between simples. If $L, L'$ are simple $A$-modules, then we can consider $\mathrm{Ext}^\bullet_A(L, L')$. This module inherits a grading from the fact that we can take a homogeneous free resolution of $L$, and calculate Ext using this.

Definition. *We call $A$ Koszul if $\mathrm{Ext}^i_A(L, L')$ is concentrated in degree $i$ in this grading. That is, if $L_0$ is the direct sum of all simples, then the two natural gradings on the algebra $\mathrm{Ext}^\bullet_A(L_0, L_0)$ coincide.*

We call this algebra $A^! = \mathrm{Ext}^\bullet_A(L_0, L_0)$ the **Koszul dual** of $A$. Each graded $A$-module $M$ has a corresponding dual module $M^! = \mathrm{Ext}^\bullet_A(M, L_0)$. Call $M$ **limpid** (I absolutely refuse to overload “clean” or “pure” any further) if the two natural gradings on this module coincide.
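The model example worth keeping in mind (a sketch, using the standard Koszul resolution):

```latex
% Let $A = k[x_1, \dots, x_n]$ with its usual grading, and
% $V = A_1$.  The Koszul complex
\[
  0 \to A \otimes \Lambda^n V \to \cdots \to A \otimes \Lambda^1 V
    \to A \to k \to 0
\]
% is a graded free resolution of the simple module $k$, and the
% $i$-th step is generated in degree $i$.  Hence
% $\mathrm{Ext}^i_A(k, k)$ is concentrated in degree $i$, so $A$ is
% Koszul, and the Koszul dual is the exterior algebra
\[
  A^! \;=\; \mathrm{Ext}^\bullet_A(k, k) \;\cong\; \Lambda^\bullet V^* .
\]
```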

Now, what is nice about Koszul algebras (or more generally, Koszul categories, the representation categories of Koszul algebras)? Well, first of all, they appear in a lot of natural places. For example, virtually every flavor of category O you can think of is Koszul, and there are some very interesting dualities between them (for example, a regular block of category O is self-dual), as is discussed in a paper of Beilinson, Ginzburg and Soergel called “Koszul duality patterns in representation theory,” which also serves as a good introduction to Koszul algebras in general. They also appear in topology (for example, in the work of Goresky, MacPherson and Kottwitz on equivariant cohomology) and combinatorics (Vic Reiner gives a nice talk about this here, featuring some exciting confusion about pronunciation). Much of the BGS paper mentioned above is dedicated to showing that algebras associated to certain sorts of geometry (perverse sheaves on nice sorts of spaces) are always Koszul (this explains the case of category O), and some of my most recent research has been related to Koszul rings showing up in a somewhat related, but also differently flavored context.

But, of course, that’s not enough to really make the notion interesting unless we can say something about the structure of Koszul rings, and about their categories of representations.

Of course, what’s particularly nice about semi-simple algebras is that it’s really easy to calculate $\mathrm{Ext}^i(M, N)$ between modules; it’s zero unless $i = 0$. For Koszul algebras, things aren’t so lucky, but Ext’s are still controlled by a “nice” algebra.

Theorem. *If $M, N$ are graded limpid $A$-modules, then there is an isomorphism $\mathrm{Ext}^\bullet_A(M, N) \cong \mathrm{Ext}^\bullet_{A^!}(M^!, N^!)$ preserving the grading arising from the grading on the modules, and sending the homological grading to the sum of the former and the latter. In particular, $A$ with the natural grading corresponds to the “diagonal” subalgebra of $\mathrm{Ext}^\bullet_{A^!}(A^!_0, A^!_0)$.*

Underlying this is an equivalence of derived categories, which lets us give a similar description for all modules. So, we can always turn Ext computations on one side into ones that may be easier on the other (sometimes we can turn them into computations of just plain old maps).

For those of you who like $A_\infty$-algebras (by which I mean Mikael Johansson), this equivalence of categories has a very nice interpretation.

Theorem. *The $A_\infty$-structure on $\mathrm{Ext}^\bullet_A(L_0, L_0)$ is formal.*

*Proof.* All the maps of the $A_\infty$-structure have to preserve the grading coming from the grading on the projective resolution of $L_0$. But that’s the same as the homological grading, so all higher products are trivial.
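The degree count behind this argument, spelled out (a sketch, with the usual $A_\infty$ sign and degree conventions):

```latex
% The $n$-th higher product on $E = \mathrm{Ext}^\bullet_A(L_0, L_0)$,
\[
  m_n : E^{\otimes n} \to E, \qquad n \geq 1,
\]
% shifts the homological grading by $2 - n$, and it preserves the
% internal grading inherited from the graded resolution.  For a
% Koszul algebra the two gradings coincide, so $m_n$ must both shift
% the (homological) grading by $2 - n$ and preserve it.  These are
% compatible only when $2 - n = 0$, i.e. $n = 2$: all higher products
% vanish, and $E$ is formal.
```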

I have nothing interesting to say about Koszul algebras or Koszul duality, but I can say that the “o” in Polish is roughly the same as a long o in English, but faster and deeper-sounding than in some words in some American English dialects. It’s like the “o” in Roman. And yes, “sz” in Polish is like “sh” in English. (Strictly speaking there are two “sh” sounds, written “sz” and “ś”. The former is hard while the latter is soft, I am told, but I can’t hear the difference between them.)

Hmm, several sources online and my copy of “The Rough Guide to Poland” (the only paper source of information on Polish life in my apartment) say that the “o” in Polish is always short (like in “lot” or “fodder”), never long (like in “boat”). Perhaps it’s a dialectal difference?

“To borrow a metaphor, dividing algebras into semi-simple and not-semi-simple is like dividing the world into bananas and non-bananas.”

Stanislaw Ulam said that studying nonlinear science (that is, the part of science that can’t be modeled by linear equations) is like studying non-elephant animals.

The problem is that calling a vowel “long” in English means more than one thing. It means both that the vowel is longer and deeper. The Polish “o” is short but deep. Try to say “lot” with the usual fast cadence but with the pitch closer to “boat”. (Not exactly the same though.)

Or, better yet, listen to one of the ubiquitous Russian mathematicians. A Russian “o”, e.g. in “Khovanov”, is not really all that different from Polish.

I’ve heard of Koszul duality from a slightly different perspective, having to do with operads. There is a fiber sequence Lie -> Assoc -> Comm, which basically means you can forget most of the multiplication in an associative algebra, retaining only commutators, and this gives you a canonical Lie algebra structure, and the Lie bracket vanishes if and only if the associative algebra was commutative. Koszul duality turns the sequence around, by which I mean the Ext construction makes Lie algebras from commutative ones and vice versa. Unfortunately, I haven’t actually seen this used, and I’m curious about what sort of applications it has.

There are more people than me out there who like A∞. And I was interested in Koszul duality loooong before I started poking at A∞-algebras.

That said – nice post. I really need to sit down and think more about these things again at some point.

Now, last time I was in Stockholm, Alexander Berglund had quite a few really interesting things to say about how A∞-constructions and Koszul duality as well as some bar-type constructions are all really the same thing, and how this can be leveraged to actually do stuff. I think he’s going to write things up – I’ll make sure to post once I know what’s happening and when the preprints crop up.

Stockholm is a hotspot in general for both Koszul-type things and for operads nowadays. The faculty have been involved in using Koszul duality since the days when the discussion was whether to call it Priddy or beautiful (or possibly marvelous – there’s a great story about Priddy visiting Stockholm that I cannot remember just now); and since Sergei Merkulov moved there, there’s a minor industry growing there on proving things about Koszul duality on various operad-like structures.

And at some point, I really need to get myself back into this corner of the world. I’m starting to miss it now that I read this blogpost. *sniff*

(p.s. name change will be percolating through my cached forms all through the blogosphere – since late August, it’s Mikael Vejdemo Johansson and not Mikael Johansson :)

Ben, that’s an arresting first sentence! I did as much algebra as possible when I was an undergraduate, and I thought I was at a university that went faster than most. But I was never taught:

the definition of semi-simple

the definition of matrix algebra, or for that matter

the definition of algebra, and most certainly not

the Artin-Wedderburn theorem!

In fact, I also did some algebra options when I was a masters student, and I didn’t meet the A-W theorem there either. Did everyone else do this when they were an undergraduate??

Tom, I don’t think it’s the norm in American universities to treat Artin-Wedderburn in the undergraduate curriculum, except maybe in certain Ivy League schools (e.g., Princeton). But the notions of ‘algebra’ and ‘matrix algebra’ — sure; a lot of undergraduate curricula would include that.

On the other hand, not as many schools here in the States would give the in-depth exposure to category theory that you were treated to at Cambridge. So undoubtedly it’s partly a cultural thing. (Categorists don’t get a whole lot of respect here on this side of the pond.)

I’m confused about some contradictory-seeming statements you made up above. On the one hand, you said that the class of semi-simple algebras is in some natural sense a Zariski open set in the space of all algebras. But on the other hand, each semi-simple algebra is the quotient of many non-semi-simple algebras by different Jacobson radicals. The first seems to suggest that most algebras are semi-simple (apart from some special cases), while the second seems to suggest that most are non-semi-simple.

So which is it?

Or is this just like the problem we get comparing Baire category and Lebesgue measure – you can partition the reals into a meager set and a set of measure zero, so that on one notion, one of them contains “most” of the reals, while on the other notion the other one contains “most” of the reals.

Of course, practically speaking, it doesn’t matter which side has most in a natural sense, if the ones that arise naturally tend to be on the other side. (Like that theorem saying that for all but measure 0 many reals, the geometric mean of the terms in the continued fractions converges to a specific value – but no one has constructed a real for which this is actually true!)

This seems the appropriate place to note a recent preprint.

Tom: I saw it when I was an undergraduate, but in a graduate-level algebra class. I never took the undergraduate version, so I don’t know if they covered it there.

Kenny, I’m pretty sure that the way to think about it is that most algebras are semisimple, but most isomorphism classes of algebras are non-semisimple. That is to say, semisimple algebras live in nice large families, while non-semisimple algebras live in smaller families, but there are more such families. The point is that if you fiddle a little bit with a semisimple algebra it’s got probability 1 of staying not only semisimple, but actually in the same isomorphism class! Whereas if you fiddle a little bit with a non-semisimple algebra you’ve got probability 1 of leaving that isomorphism class, but this leaves room for more such weird points.

Bjorn Poonen has a paper making this intuition more precise for commutative algebras.

Kenny, here is a simple example of what Noah is talking about. Look at triples of vectors in R^3. This space can be thought of as (R^3)^3=R^9 and hence given a topology. Then “most triples of vectors” are linearly independent, in the sense that the linearly independent triples form an open dense subset of this space.

Now consider triples of vectors up to isomorphism. Then all the linearly independent triples form a single class. (Any basis can be taken to any other by a linear map.) On the other hand, the isomorphism classes of linearly dependent triples (other than (0,0,0)) form a surface homeomorphic to RP^2. So the moduli space of dependent triples has higher dimension than the moduli space of independent ones.

The way I like to think of it is: Almost all algebras are semisimple, but semisimple algebras are all alike while every non-semisimple triple is non-semisimple in its own way.
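The “most triples are independent” half of this is easy to check numerically. A quick sketch (the helper names and the `1e-12` tolerance are mine):

```python
import random

random.seed(0)  # make the run deterministic

def random_triple():
    """Three random vectors in R^3, entries uniform in [-1, 1]."""
    return [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix, by cofactor expansion along the first row."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

trials = 10_000
# A triple is linearly independent iff its determinant is nonzero.
independent = sum(1 for _ in range(trials)
                  if abs(det3(random_triple())) > 1e-12)
print(independent / trials)  # ≈ 1.0: a random triple is almost surely a basis
```

The other half of the picture, that the *isomorphism classes* pile up on the dependent side, is exactly what this experiment can’t see: all those independent triples land in a single class.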

OK, that makes more sense. Is the idea basically that when we go about defining products on a vector space, we get the same semi-simple algebra in too many ways, and the non-semi-simple ones only if we sort of aim things right?

There’s a similar fact about the construction of infinite graphs. There are of course uncountably many graphs on a countable set of vertices (I’m pretty sure this is true even up to isomorphism – certainly there are infinitely many isomorphism classes), but if you fix some probability 0<p<1 for each edge of being present in the graph, then one isomorphism class has probability 1 and all others together have probability 0. (The proof of probability 1 isomorphism for two randomly generated graphs is actually basically the same as the proof of isomorphism for two algebraically closed fields with the same uncountable cardinality. Model theorists love this kind of thing.)

Here’s another example: Look at the circle acting on the sphere by rotations. Most points are not fixed under this action, but most orbits are fixed points.

In fact, all these examples are really the same.

I thought there were only two fixed points, and a line segment’s worth of orbits (parametrized by latitude), for rotations of a sphere.

Oh. I think by “circle” you mean nonzero complex numbers acting on P^1.

Some fall-back-morning thoughts (although I still have nothing interesting to say about Koszulity):

Some of the comments say that it sounds absurd to frame a topic as non-semisimple algebras or non-linear PDEs, or for that matter non-abelian groups. One analogy is to the study of non-elephant animals. But a more interesting analogy is to the study of non-human animals! After all, that is what a veterinarian does. If for whatever reason a special case X of objects is more tractable or more important, then it isn’t so peculiar to say that you study “non-X” objects, often meaning “not necessarily X” objects. Looking for generalities at a higher level is perfectly reasonable.

Although from another view, the work of veterinarians really is absurd, because the same vet may have to treat animals as different as parrots and horses on different days. By the same token, if the intention is to understand non-semisimple algebras in as much depth as the semisimple ones, then it will never happen.

As an aside, I can mention my paper arXiv:math/0209256: Finite, connected, semisimple, rigid tensor categories are linear. The paper shows that if a pivotal category is abstractly semisimple (all exact sequences split) with finite-length objects and finitely many simple objects, then there exists a field over which it is linear. One of the things that came out of this work is that I noticed a possibility, in positive characteristic, of categories that are not semisimple but act almost as if they were, because of inseparable field extensions. I didn’t find any examples though.

Anyway, one more remark on pronunciation: The Polish “o”, and most of the other Polish vowels, are no real mystery to anyone who studies non-English European languages. ( :-) ) There is a set of “standard” vowels as they appear in Spanish, for example. One of them is an “o”, and it’s roughly the same in Spanish, French, Russian, Italian, Esperanto, etc. It is a short vowel but it has the same pitch as an English long “o”. Likewise the “i” is a short vowel with the same pitch as the English long “e”. English is peculiar in that all of its vowels have shifted substantially from the European norm.

The only Polish vowels that are peculiar in this sense are ą, ę, ó, and y. Also stress is a question in the pronunciation of Koszul. The simple rule is that most multi-syllable words have the stress on the next-to-last syllable, in this case the first syllable. This is one of the main differences between Polish and Russian pronunciation: in Russian, by contrast, you need a dictionary to tell you the stress and there are even names that are identical but for the stress position.

Regarding Tom Leinster’s comment –

I avoided algebra as an undergrad (spending lots of time on physics, analysis and geometry). But, when I learned the error of my ways, I had to catch up by studying, not just algebra, but *algebras*. At this point, the Artin-Wedderburn theorem was one of the first really interesting theorems I met.

As Ben points out, it’s one of those seductive results that can fool you into feeling you understand a lot, but only if you don’t read the fine print. The semisimplicity condition, by making the ideal structure very rigid, actually excludes all the really *interesting* – or if you prefer, ‘pathological’ – complexity that’s possible for associative algebras.

A similar thing happens for Lie algebras. Semisimple Lie algebras are an endlessly fascinating playground; if you spend enough time in there you start feeling very smart, but only because you’ve forgotten how tame it is compared to the wild world outside.

For some fascinating attempts to tame the wild world of non-semisimple associative algebras, I like this book:

P. Gabriel and A. V. Roiter, Representations of Finite-Dimensional Algebras, Enc. of Math. Sci., 73, Algebra VIII, Springer, Berlin 1992.

Regarding the first comment by Scott Carnahan:

That particular quadratic duality between the operads Comm and Lie is used all over the book “Chiral Algebras” by Beilinson and Drinfeld (See Lemma 1.1.10 and section 1.1.11 explaining it for general quadratic operads). In particular, going from right D-modules (where we think of chiral algebras Lie) to left D-modules (where we think of factorization Comm).

Ben wrote:

Very good! :-)

Scott wrote:

The main application of this which I know is that it is one way to understand why (finite dimensional) L(ie)-infinity algebras are the same as quasi-free differential graded commutative algebras. And similarly for A-infinity, dropping the graded-commutative.

John Baez once described this aspect of Koszul duality in more detail in TWF 239.

@David Speyer

“The way I like to think of it is: Almost all algebras are semisimple, but semisimple algebras are all alike while every non-semisimple triple is non-semisimple in its own way.”

Nice allusion to Tolstoi, that.

Molodets (= congratulations)!

There is a nice expository set of notes on Koszul algebras written by Uli Krahmer of the University of Glasgow, available at http://www.maths.gla.ac.uk/~ukraehmer/