When I was a grad student (not too long ago), my advisor would occasionally get excited about what seemed to me rather minor discoveries. They were often notational efficiencies, or an observation that some extra structure comes for free. Since he was the one with tenure, I figured there was probably a good reason to think that these were important ideas. Eventually, I decided that if you can represent math more efficiently in your head, then you can fit more math into active processing at a time, and you’re more likely to pull something interesting out. This is a big deal if you’re trying to formulate a highly structured argument in a proof, or if you just want to learn some math without wasting large chunks of your life.

Today, I’ll explain one of these ideas, which is that groups and Hopf algebras are really the same thing, even though a lot of people will tell you otherwise.

If you’ve taken a class in abstract algebra, you’ll recall that a group is a set $G$ equipped with a multiplication map $m\colon G \times G \to G$, an inversion map $i\colon G \to G$, and an identity element $e\colon * \to G$, where $*$ is a one-point set. These maps are required to satisfy some compatibilities, like associativity of multiplication. You should try to write them down.

Suppose you want to express the fact that inversion takes elements to their inverses, but you don’t want to refer to individual elements. This is difficult using only the data listed above, but there are two additional maps we get for free: $\epsilon\colon G \to *$ and $\Delta\colon G \to G \times G$, where the second is the obvious diagonal map: $\Delta(g) = (g,g)$. Using these, we can say: $m \circ (\mathrm{id} \times i) \circ \Delta = e \circ \epsilon$, i.e., multiplying an element with its inverse yields the unit. These extra maps allow us to write the definition of group using only abstract nonsense.
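Taking up the invitation to write the compatibilities down: with $m$ the multiplication, $i$ the inversion, $e\colon * \to G$ the unit, $\epsilon\colon G \to *$ the unique map to a point, and $\Delta$ the diagonal, the group axioms read as follows (a sketch, suppressing the identifications $* \times G \cong G \cong G \times *$):

$$\begin{aligned}
m \circ (m \times \mathrm{id}) &= m \circ (\mathrm{id} \times m) && \text{(associativity)} \\
m \circ (e \times \mathrm{id}) &= \mathrm{id} = m \circ (\mathrm{id} \times e) && \text{(unit)} \\
m \circ (i \times \mathrm{id}) \circ \Delta &= e \circ \epsilon = m \circ (\mathrm{id} \times i) \circ \Delta && \text{(inverses)}
\end{aligned}$$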

If you’ve seen Hopf algebras before, these structures should look really familiar. If you haven’t seen them, here’s a quick introduction. Given a commutative ring $R$, a Hopf algebra over $R$ is a six-tuple $(A, m, \eta, S, \epsilon, \Delta)$, where $A$ is an $R$-module, $m\colon A \otimes A \to A$ is a multiplication map, $\eta\colon R \to A$ is a unit, $S\colon A \to A$ is called the antipode, $\epsilon\colon A \to R$ is a counit, and $\Delta\colon A \to A \otimes A$ is called comultiplication. These are required to satisfy relations, basically asserting that multiplication and unit make a ring, comultiplication and counit make a coring (just turn all the previous arrows around), comultiplication and counit are ring maps, and the antipode satisfies the identity in the previous paragraph.
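For reference, here is one way to write those relations out. I’ll use $m$ for multiplication, $\eta$ for the unit, $\Delta$ for comultiplication, $\epsilon$ for the counit, $S$ for the antipode, and $\tau$ for the flip $a \otimes b \mapsto b \otimes a$, suppressing the identifications $R \otimes A \cong A \cong A \otimes R$:

$$\begin{aligned}
m \circ (m \otimes \mathrm{id}) &= m \circ (\mathrm{id} \otimes m), \qquad m \circ (\eta \otimes \mathrm{id}) = \mathrm{id} = m \circ (\mathrm{id} \otimes \eta) && \text{(algebra)} \\
(\Delta \otimes \mathrm{id}) \circ \Delta &= (\mathrm{id} \otimes \Delta) \circ \Delta, \qquad (\epsilon \otimes \mathrm{id}) \circ \Delta = \mathrm{id} = (\mathrm{id} \otimes \epsilon) \circ \Delta && \text{(coalgebra)} \\
\Delta \circ m &= (m \otimes m) \circ (\mathrm{id} \otimes \tau \otimes \mathrm{id}) \circ (\Delta \otimes \Delta) && \text{(compatibility)} \\
m \circ (S \otimes \mathrm{id}) \circ \Delta &= \eta \circ \epsilon = m \circ (\mathrm{id} \otimes S) \circ \Delta && \text{(antipode)}
\end{aligned}$$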

Again, if you’re new to Hopf algebras, this large amount of structure may seem kind of unwieldy, but it is useful if you want to study representations (also known as modules, or “stuff that your thing acts on”). The counit gives you a trivial representation, the comultiplication gives you a way to combine representations, and the antipode provides duals. One then says that the category of representations has a monoidal structure with duals. There are also many examples of these objects in nature, such as group rings, cohomology of H-spaces, and differential operators on formal groups (these are universal enveloping algebras of Lie algebras in characteristic zero).

Now, suppose we want to formulate the notion of Hopf algebra, but with sets instead of $R$-modules. Our ring $R$ is replaced by a one-point set $*$ (one might say that we are working over the field with one element). We have a unit and an associative multiplication, and this gives us a monoid structure. The counit $\epsilon\colon G \to *$ is uniquely defined, since singletons are terminal in sets. The comultiplication has to satisfy the counit axiom $(\epsilon \times \mathrm{id}) \circ \Delta = \mathrm{id} = (\mathrm{id} \times \epsilon) \circ \Delta$, so it has to take any element $g$ to $(g,g)$ (this was giving me trouble, until Noah pointed out that it was an axiom). Finally, the antipode $S$ has to satisfy the identity I gave before: $m \circ (\mathrm{id} \times S) \circ \Delta = e \circ \epsilon$, i.e., it takes any element to something that multiplies to the identity, i.e., an inverse. This recovers the structure of a group.
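To spell out why the counit axioms pin down the comultiplication: write $\Delta(g) = (a(g), b(g))$ for some functions $a, b\colon G \to G$. Reading the two counit identities through the isomorphisms $* \times G \cong G \cong G \times *$ gives

$$(\epsilon \times \mathrm{id}) \circ \Delta(g) = b(g) = g, \qquad (\mathrm{id} \times \epsilon) \circ \Delta(g) = a(g) = g,$$

so $\Delta(g) = (g,g)$ with no choice involved.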

Advanced paragraph: We can get a similar uniqueness result when instead of sets, we consider schemes or topological spaces. The distinguishing feature of these categories, as opposed to -modules, is that the monoidal structure is the categorical product, and this forces the counit to map to the terminal object, making it unique. The additional monoidal structure in -modules allows the coalgebra structure to be non-unique, and this is what makes the theory of Hopf algebras very rich (or a complete mess, depending on who you ask). Extra monoidal structures appear elsewhere in math – Beilinson and Drinfeld define about five different tensor products on D-modules over a curve in their chiral algebras book, although some of these are only pseudomonoidal.

Now that we’ve gone back and forth, we can say that groups are Hopf algebras in sets, or that Hopf algebras are groups in -modules, where group in this case means something with multiplication, and a way to get a monoidal structure with duals on representations. When people say that quantum groups aren’t groups, they are just being unnecessarily restrictive in their notion of group. In fact, the word “quantum” here is just a way to emphasize some noncommutativity. This might come in useful if you’re trying to understand Borcherds’ definition of quantum field theory, which seems to be “a group acting on something.” If you want to construct the standard model from this definition, keep in mind that the group and the something are going to be rather complicated.

There is a second point of view on finite-dimensional Hopf algebras (over a field at least) that makes the axiom set look more reasonable. Namely, a Hopf algebra is a pair of dual vector spaces that are both associative algebras, together with two extra axioms that make them compatible. One axiom says that comultiplication is an algebra homomorphism, which can be written in a symmetric way so that it also says that multiplication is a coalgebra homomorphism. The other axiom is the axiom of the antipode. The antipode generalizes the inversion map in a group, and its axiom is a straight generalization of the group inverse axiom.

Another remark is that of the five structure tensors of a Hopf algebra (product, coproduct, unit, counit, and antipode), only two of them, the product and coproduct, yield any choices. The other three are unique, if they exist. This reinforces the idea that a finite-dimensional Hopf algebra is two algebra structures glued together by vector space duality, with restrictions.

To be sure, this organizing principle has to be modified in infinite dimensions. Vector space duality is no longer an involution, and coalgebras are no longer the same as algebras. (The dual of a coalgebra is an algebra, but of a restricted form.) But they are still “morally” the same in many important cases. You may have to use a completed tensor product to construct comultiplication, and the vector space may have a topology that leads to a restricted dual space (consisting only of continuous dual vectors). These topological modifications generally bring you closer to the algebra-coalgebra symmetry in finite dimensions, or maybe even all the way there in some examples. (Unfortunately I’m not prepared to construct any at the moment; maybe I just haven’t thought it through.)

Another thing that you can do is enforce symmetry from the beginning by defining a “Hopf pair” of associative algebras, connected by a bilinear form on the algebras viewed as vector spaces. There are definitely interesting examples of that.

I guess it’s true – I haven’t checked the details – that in any monoidal category where the monoidal product is the categorical product that every object is uniquely a comonoid (object). And moreover this is compatible with any monoid structure on an object. This came up recently when I was trying to explain to a category theorist what a Hopf algebra was. This makes Set (with the cartesian product) a particularly non-standard monoidal category in the sense that every monoid (ie algebra object) is automatically a bimonoid (or bialgebra).

There does seem to be this difference in terminology between category theorists and, say, quantumy folk, in that one group talks about monoids in a category and the other talks about algebra objects in a category. This is due to one lot thinking of Set as the standard kind of category, and the other lot thinking of Vect as the standard kind of category. However, what the above illustrates is that Set is not a standard kind of monoidal category and monoids in Set are not your standard kind of monoids (i.e. algebra objects).

Yes, Simon, it’s true (what you wrote in your first sentence).

People often say that although you can talk about *monoids* in an arbitrary monoidal category, you can’t talk about *groups* in an arbitrary monoidal category. The justification is that to express the axiom g.g^{-1} = 1, you seem to need the diagonal map G –> G x G.

However, this belief is false – at least if your monoidal categories are symmetric. As Scott’s posting shows, you *can* talk about groups in a symmetric monoidal category: they are what are usually called the Hopf algebras in it. The justification for calling them groups is that a Hopf algebra in a cartesian monoidal category C (i.e. one whose tensor product is categorical product) is simply a group in C.

Yes, indeed, and this is a starting observation behind what categorists call the theory of cartesian bicategories, which explains what is so special about the (bi)category of relations and its various cousins (like spans and profunctors). In this theory, relations preserve the unique comonoid structures in a lax or colax sense, and functions are precisely those relations which preserve the structure strongly.

But I really wanted to say something else. Traditionally a group object is defined relative to a category with cartesian products, and Scott has indicated that a Hopf object in such a category boils down to a group. It is natural to wonder whether Hopf objects are in fact group objects in the traditional sense, by viewing the Hopf object as an object in some other category with cartesian products.

Indeed you can, if the underlying comonoid is cocommutative. For the category of cocommutative comonoids has cartesian products given by the tensor of the symmetric monoidal category where they live. Thus cocommutative Hopf objects become group objects (in the traditional sense) in the cartesian category of cocommutative comonoids.

Therefore I would say that the notion of cocommutative Hopf object is coextensive with the traditional notion of group object, but more general Hopf objects are an honest-to-god generalization of the notion of group, which nevertheless gives the “right” notion in categories where the tensor is cartesian product. Interesting.

Simon: this is exactly the breakdown that results in the “no-cloning” theorem in quantum computation. More generally, in the category of ordered linear spaces, only the “classical” cones have a diagonal that behaves as we’d like.

Greg,

Here’s an example of infinite duality. Given a finite group, there are two standard ways to put a unital associative algebra structure on the space of functions from the group to your ground ring, and they happen to be dual in the way you describe. One is by pointwise multiplication, and the other is by convolution (producing the group ring). When we extend to infinite groups, the convolution product requires us to restrict to functions with finite support, unless our ground ring has some notion of admissible infinite sums. The same restriction doesn’t work for the pointwise product, since we lose the unit, but we can do the standard augmentation to get functions that are constant almost everywhere. I’m pretty sure this also produces a Hopf algebra, with no topology necessary. I think you can do something similar with functions/volume forms on an algebraic group.
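The finite case is easy to see in code. Here is a minimal sketch of my own (not from the comment): functions on the cyclic group Z/4 over the integers, represented as length-4 lists, carrying both the pointwise product and the convolution product.

```python
# Functions on the group Z/n, over the integers, as length-n lists.
n = 4

def pointwise(f, g):
    """Pointwise product; its unit is the constant function 1."""
    return [f[x] * g[x] for x in range(n)]

def convolve(f, g):
    """Convolution (f*g)(x) = sum_y f(y) g(x - y); this is the group ring product."""
    return [sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)]

def delta(a):
    """Indicator function of the group element a."""
    return [1 if x == a else 0 for x in range(n)]

one_pointwise = [1] * n      # unit for the pointwise product
one_convolution = delta(0)   # unit for convolution: delta at the identity

# Convolution of delta functions mirrors the group law: delta_a * delta_b = delta_{a+b}
assert convolve(delta(1), delta(2)) == delta(3)
# Each unit really is a unit for its own product
f = [2, 0, 5, 1]
assert pointwise(one_pointwise, f) == f
assert convolve(one_convolution, f) == f
```

Note that the convolution unit has finite support while the pointwise unit is constant, which is exactly why the two products behave differently under the finite-support restriction for infinite groups.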

I was under the impression that most of the problems with topologizing duals of (naturally occurring in algebra land) infinite dimensional vector spaces can be avoided by pushing your objects into a sufficiently cleverly chosen tannakian category, such that they separate into indecomposables with finite multiplicity. However, I have had some problems coming up with anything resembling a proof. Maybe this is unsurprising, since I don’t have a precise statement. Does anyone have a concise counterexample?

Scott: In this attempt to dualize the Hopf algebra of an infinite group, I don’t see how you can write down the convolutional coproduct of any element, even those with finite support, unless you let comultiplication take values in some topological tensor product that allows infinite-rank tensors. The coproduct of your thrown-in unit looks like a strange tensor too, but actually I see that it isn’t, it’s just 1 tensor 1 as it should be.

Re your suggestion about Tannakian categories, I do not see what is wrong with topologizing vector spaces anyway. Even if you do not want analytic topologies such as Banach spaces, you may still have to accept adic topologies that arise from inverse limits. Not that there is anything wrong with Tannakian categories either. I would remark that sometimes category theory is an alternative to adding a topology that is not especially simpler, in some cases really just equivalent.

Tom, I was a little surprised when my category theorist friend didn’t know that a Hopf algebra was just a groupy thing in Vect. However, what you say makes that make sense. I guess there’s no monadic description of Hopf algebras, is there? There’s certainly no operadic description: isn’t this where PROPs become useful – in situations where you have things like the coproduct with more than one “output”.

I agree with your guess, Simon, that there’s no monad describing Hopf algebras. I suspect that the forgetful functor from Hopf algebras to Vect doesn’t even have a left adjoint (let alone satisfying the conditions in the Monadicity Theorem).

The idea – as I’m sure you know – is that monads describe algebraic theories, and an algebraic theory consists of operations that take several inputs and produce one output. Dually, comonads describe coalgebraic theories, and a coalgebraic theory consists of operations (or “cooperations”?) that take one input and produce several outputs. The theory of Hopf algebras has both algebraic and coalgebraic aspects, so I would expect it to be described neither by a monad nor by a comonad.

Greg: I agree that the convolution coproduct cannot be described by a conventional tensor product, but the coproduct of any element is still described by a finite amount of data – namely a function that is constant away from finitely many “antidiagonals”, and constant along them. I don’t think a topology is the right way to describe this, because constant almost everywhere functions are not the class of continuous functions for any topology. Unfortunately, I don’t know a good way to formalize this extension of the tensor product.

I don’t have any particular beef with topologies on vector spaces, but I think they are often used when unnecessary, and they sometimes cloud the nature of the structure. One example where an adic topology is necessary is when studying the formal group of an elliptic curve over a finite field. Over a Q-algebra, such a formal group is isomorphic to the additive one, and doesn’t need power series, but otherwise, you get an infinite expression for the coproduct.

I think the terminal Hopf algebra over the field C is just C, so that the forgetful functor Hopf –> Vect doesn’t preserve the terminal object, and that of course would be enough.

It turns out that to get a (generalized) notion of Hopf monad you have to use the concept of bimonad (unless you can see another way). This is shown in the paper “Hopf Monads” by A. Bruguieres and A. Virelizier, available in the arxiv as math/0604180

Charlie, we are discussing whether or not Hopf algebras arise as algebras (aka modules) over some monad. What Bruguieres and Virelizier do is describe when the modules (aka algebras) over a monad look like the modules over a Hopf algebra, namely when the modules form a monoidal category with duals. As far as I’m aware these are not related issues.

My string/surface diagrammatic interpretation of Bruguieres and Virelizier’s paper should be hitting the arXiv sometime in the very near future – in the meantime, if you’re interested, you can look at my Fields Institute talk: http://www.math.ucr.edu/home/baez/fields/willerton/

Simon, I agree with what you and Tom Leinster say which is why I wrote “generalized” notion of Hopf monad. Hmm, I don’t yet see how these issues are related but what happens, e.g., if we consider the case with bimodules?

I look forward to your paper on the arXiv.

For anyone who might be interested, I just found the paper “Bimonads and Hopf monads on categories” by B. Mesablishvili and R. Wisbauer on the arXiv as 0710.1163

Also, if you google the phrase “Hopf operad” then you will find various papers on the subject but I have not yet looked at any of these papers.

I still think it would be interesting to find out what happens with bimodules within the context that Simon Willerton and I referred to.

Maybe it is the same set of questions, but consider the category of diagrams with objects being non-negative integers, and with Ys, Xs, Lambdas, etc. as generating morphisms, together with generators for strings that end (epsilon and eta). Consider diagrams modulo the axioms of a Hopf algebra, with X natural with respect to all morphisms. This is the free-est category that looks like a Hopf algebra.

Is this category any more broad than Hopf Algebras and/or modules over Hopf algebras?

At the intro level (say one of Majid’s books), everything that you want to prove about Hopf algebras is contained in these diagrams. Moreover, it is these diagrams that allow things like Greg’s invariants to be defined.

Hmmm, that’s interesting. Doesn’t that mean that there is some sort of internal language working both in Vect and Sets that allows us to reason about Hopf algebras as about groups?

Hello everybody,

I just discovered this fine blog! Miguel: Yes there is: that’s the notion of ‘PROP’. There is a ‘PROP’ called GROUP; GROUP-Set (ie the category of GROUP-algebras in Set) is the category of groups, and GROUP-Vect is the category of Hopf algebras. A PROP is very much like an operad, except that in an operad you have only n-1 operations (ie operations with n inputs and 1 output); in a PROP you allow for n-p operations (n inputs and p outputs).

Props are more complicated than operads. Around 2001 I proved the existence of free PROPs, only to find out that Pirashvili had just published the same result… The point is you can describe a PROP by generators and relations; the generators are the structural morphisms, and the relations are the axioms! The PROP GROUP can be described in terms of the PROP ASS of ‘associative algebras’. You take some sort of product of ASS with its opposite prop (with a compatibility relation), you add a generator for the antipode, and quotient by the antipode axiom. If anyone’s interested I wrote a draft on that, but it’s no more than a draft. The best course would be to read Pirashvili’s paper in any case. Since 2001 I’ve been more involved in Hopf monads à la Montpelliéraine… (a less competitive province so far!). By the way, the bimonads considered by Wisbauer are monad + comonad rather than monad + comonoidal (or opmonoidal).

I’d like to remark that Hopf algebras also make sense in a braided category; and the reason why Alexis and I introduced Hopf monads is that Hopf algebras don’t make sense in a non-braided setting, but we need something similar to a Hopf algebra in order to understand the construction of the Turaev-Viro invariant (which thrives in a spherical, non-braided category). On my homepage there’s a text about topological applications (Categorical Centers and Reshetikhin-Turaev Invariants).

Also Charlie mentioned bimodules; I’m not sure I understand what is meant by bimodules here. We did study Hopf modules over a Hopf monad, and it turns out that just as in the classical theory the category of Hopf modules is equivalent to the ambient category (under mild assumptions). That’s in our initial paper. But I don’t see how that might relate to the initial question.

Hi Alain,

What is the difficulty in showing that a free PROP exists? For example, the monadic definition of a PROP involves the free-PROP functor, and doesn’t that just give you a free PROP right away?

I assume by free you mean satisfying the probably obvious universal property: so is it not just abstract nonsense that the free PROP functor applied to an S-bimodule gives a free PROP?

(here by S-bimodule I mean a direct sum over n,m of (S_n x S_m)-modules)

Hi Travis,

I don’t know about this ‘monadic definition of a PROP’. I agree that if you have a monadic definition of a PROP, then you have a free PROP functor: having a free PROP functor or the monad is one and the same thing.

I’m not sure we’re talking about the same free PROP, though. For me a PROP is a strict monoidal symmetric category whose monoid of objects is the set of non-negative integers N. Operations are morphisms of the PROP.

There’s a functor U : PROP -> Ens/N^2 which sends a PROP to the set of its operations ‘with arities’. The ‘free PROP functor’ is the left adjoint to U. The idea of the proof for constructing it is rather natural, but it took me about 11 pages to write down a rigorous proof.

Now if there’s a shortcut, I’m interested! – Alain

Hmm… Well, the way I understand it is as follows: a PROP is the same thing as a collection of (S_m,S_n)-bimodules M(m,n), together with tensor products

M(m_1, n_1) \otimes M(m_2, n_2) -> M(m_1+m_2, n_1+n_2)

respecting symmetric group structures in the obvious way; and compositions

M(m,n) \otimes M(n,p) -> M(m,p).

Now, I (actually other people) consider the forgetful functor

PROPs -> collections of S_m,S_n-bimodules for all m,n \geq 0.

Then, the adjoint to this forgetful functor should be the “free PROP” functor, which just takes a collection of (S_m,S_n)-bimodules M, and constructs:

colim_{isomorphism classes of graphs G} M(G),

where M(G) is defined to be the tensor product over all vertices v of M(in(v), out(v)), where v has in(v) edges coming in and out(v) edges coming out. The colim involves using the symmetric group structure of the M(m,n) because edges get permuted under isomorphisms.

The above thing is actually a PROP, using disjoint union of graphs and grafting two graphs together. Call the above functor G, the “free PROP” functor. Let F be the forgetful functor from PROPs to just collections of (S_n,S_m)-bimodules.

I thought it was clear that F and G are adjoint and hence that G is really the “free PROP” functor. Is this not obvious? Giving a morphism of a collection of (S_n, S_m)-bimodules M to a PROP P is the same thing as giving a PROP morphism from G(M) to P.

In any event, it definitely seems clear that F \circ G forms the functor of a monad such that PROPs are just the same as algebras over this monad.

Now, I do see that in your writing above you consider the free PROP functor from the set of operations rather than the (S_n,S_m)-bimodule of operations: so you also aren’t assuming that the PROP is enriched over vector spaces or is even additive. Does this make things any harder? I imagine one could still take the colimit I wrote above and that everything should just follow from this … If you use sets rather than (S_n,S_m)-sets then probably you should first induce from a set with an action by the trivial group to a set with an action by S_n x S_m.

If what I wrote is all correct, then where does the difficulty lie?

(if not, what did I say wrong?)

Thanks!

Hi Travis,

As I haven’t been working on PROPs since 2001, I’m not as fluent as I used to be… but what you’re saying seems correct. I never said that proving the existence of free PROPs was difficult. I just said that the idea of the proof was straightforward (and indeed it involves graphs just as you said) but actually writing it down took me 11 pages and quite some time… Now if you’re willing to take it for granted that what looks obvious is true you can certainly do without it !

However I’ll be interested if you have a general argument which allows one to do without those 11 pages of uninspiring toil.