One more warmup post before I get to actual Subfactors. If you were asked to rank finite groups in order of how complicated they were, what measurement would you use?

There are three candidates that come to mind quickly, and a fourth which is a little subtler.

First you could use #G. This has some nice properties; for example, there are only finitely many groups of a given size, so you could hope to enumerate all groups of size, say, less than 100. Representation-theoretically we can define #G as the sum of the squares of the dimensions of the irreps. In Subfactor land we will be calling this measurement of complicatedness the “global index.”
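As a quick sanity check of the representation-theoretic description, here is a minimal Python snippet; the dimension lists are the standard character-table data for S_3 and S_4 (the snippet and its names are mine, not from the post):

```python
# #G equals the sum of the squares of the dimensions of the irreps,
# since the regular representation contains each irrep V with
# multiplicity dim(V). Checked against two standard examples.
irrep_dims = {
    "S_3": [1, 1, 2],        # trivial, sign, standard
    "S_4": [1, 1, 2, 3, 3],  # trivial, sign, 2-dim, standard, std*sign
}
orders = {"S_3": 6, "S_4": 24}

for name, dims in irrep_dims.items():
    global_index = sum(d * d for d in dims)
    assert global_index == orders[name]
    print(name, "global index:", global_index)
```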

However, as a representation theorist I find this description has some problems. You could have a small group that nonetheless has many representations. So let’s define the rank of a group to be the number of irreps. This also gives a measure of complicatedness of a group. It is a fun exercise (due originally to Landau) to prove that there are only finitely many groups of a given rank (hint: 1 is the sum of #[g]/#G, where #[g] is the size of the conjugacy class [g]). For example:

- Rank 1: The trivial group
- Rank 2: Z/2
- Rank 3: Z/3 and S_3

In Subfactor land, we will still call the analogous notion the rank.
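Landau’s argument can be made concrete: the class equation writes 1 as a sum of k unit fractions when the rank is k, and that equation has only finitely many solutions, with #G bounded by the largest denominator. A minimal Python sketch (the function name and organization are mine):

```python
from fractions import Fraction
from math import ceil

def unit_fraction_sums(k, target=Fraction(1), min_denom=1):
    """All nondecreasing tuples (n_1, ..., n_k) with sum(1/n_i) == target.

    Landau's argument: the class equation 1 = sum of #[g]/#G over
    conjugacy classes writes 1 as a sum of k unit fractions when the
    rank is k, and #G is the largest denominator (the identity class),
    so finitely many solutions means finitely many possible orders.
    """
    if k == 1:
        if target.numerator == 1 and target.denominator >= min_denom:
            yield (target.denominator,)
        return
    n = max(min_denom, ceil(1 / target))      # need 1/n <= target
    while Fraction(k, n) >= target:           # need k/n >= target
        rest = target - Fraction(1, n)
        if rest > 0:
            for tail in unit_fraction_sums(k - 1, rest, n):
                yield (n,) + tail
        n += 1

# Rank 3: (3,3,3) is realized by Z/3 and (2,3,6) by S_3 (classes of
# sizes 3, 2, 1 in a group of order 6); (2,4,4) is a solution of the
# equation that no group realizes.
print(sorted(unit_fraction_sums(3)))  # [(2, 3, 6), (2, 4, 4), (3, 3, 3)]
```

As the last line illustrates, not every solution of the unit-fraction equation comes from a group, which is why this only bounds the list of rank-3 groups rather than producing it.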

Finally, suppose you actually wanted to do computations in a group. You’ll quickly discover that it’s much easier to do computations in the symmetric group S_100 than in the Monster group, even though the former has much larger size and much larger rank. The reason is that S_100 has a 99-dimensional faithful representation where you can do calculations, whereas for the Monster you’re stuck with a 196883-dimensional irreducible representation. So we could instead measure complicatedness by the dimension of the smallest faithful irrep.

Now the simplest groups are the cyclic groups, the next are the dihedral groups together with the three platonic solid groups, etc. Notice that we don’t have finitely many examples at each level anymore.

In Subfactor land things will be slightly different: the analogy with groups is that a Subfactor is like a group with a fixed choice of faithful representation. The dimension of this irrep is (the square root of) the index. This is the usual measure of complicatedness for subfactors, which was a source of confusion for me for a long time, since I was thinking internally about size (or global index).

One final comment about using the size of a faithful irrep as the definition of complicatedness: it works just as well for infinite groups so long as they have faithful finite dimensional representations.

Finally there is a measure for complicatedness in subfactors which is a bit less intuitive than the above three. It is a standard exercise in group representation theory that given a faithful representation, every other irrep appears in a sufficiently high tensor power. Exactly how high a tensor power do you need? That is to say, how hard is it to describe the other irreps in terms of the irrep you already have? In Subfactor land this tensor exponent is called the depth.

For example, if we take S_4 with the standard representation, it has size 24, rank 5, its smallest faithful irrep has dimension 3, and it has depth 3 (since you need to go to the tensor cube of the standard representation in order to find the sign rep).
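The S_4 example can be checked character-theoretically: take inner products of tensor-power characters with each irreducible character. A small Python sketch (written by me; the class sizes and character values are the standard character table of S_4, on the classes e, (12), (123), (1234), (12)(34)):

```python
# Decompose tensor powers of the 3-dimensional standard representation
# of S_4 and record the first power in which each irrep appears.
sizes = [1, 6, 8, 6, 3]
order = sum(sizes)  # 24

irreps = {
    "trivial":  [1, 1, 1, 1, 1],
    "sign":     [1, -1, 1, -1, 1],
    "2-dim":    [2, 0, -1, 0, 2],
    "standard": [3, 1, 0, -1, -1],
    "std*sign": [3, -1, 0, 1, -1],
}

def multiplicity(chi, psi):
    """<chi, psi> = (1/#G) * sum_g chi(g) psi(g) (real characters)."""
    return sum(s * a * b for s, a, b in zip(sizes, chi, psi)) // order

first_power, power, k = {}, [1] * 5, 0
while len(first_power) < len(irreps):
    k += 1
    power = [p * s for p, s in zip(power, irreps["standard"])]
    for name, chi in irreps.items():
        if name not in first_power and multiplicity(power, chi) > 0:
            first_power[name] = k

print(first_power)  # the sign rep first appears at k = 3, the depth
```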

These four are the main measurements used in subfactor theory, but for groups I can think of at least one more. Let’s say that a group is least complicated when the group ring is closest to a matrix ring. That is to say, it’s simple when the largest irrep is really large (hence that matrix factor takes up most of the group ring). Perhaps surprisingly, it’s possible to say some nice things about which groups are “simple” with respect to this measurement (shameless plug).

I actually found myself wondering about this question (although not able to give a particularly good answer) a couple days ago, when I was thinking about how one might define a reasonable notion of a “random group” chosen from all the groups of order n (for some positive integer n). Sure, you can just pick uniformly at random… but that’s boring!

And no, I don’t know why anyone would want to pick a group uniformly at random… but why not?

For p-groups there’s an interesting result due to Charles Sims (which I heard about from Lenstra and which is cited in a recent paper of Poonen), which I think says that, picking isomorphism classes uniformly at random, “most p-groups are 3-step nilpotent.” The reference is:

Charles C. Sims, Enumerating p-groups, Proc. London Math. Soc. (3) 15 (1965), 151–166. MR0169921

I’d love if there were some good way to prove theorems like “a group chosen in some suitably random fashion tends to have the dimensions of its irreps distributed in the following way.” However, you have to be careful or else this just turns into a question about 3-step nilpotent 2-groups.

Noah,

is there reason to believe that there should be such theorems? I don’t know enough representation theory to comment intelligibly on that. If by a random group you mean that you randomly choose a group presentation (in some suitable sense) and look at what you get, then there is a wonderful theorem of Gromov which says that the group is almost certainly hyperbolic. One might also check out his delightful paper “Random walks on random groups”.

Of course, these are usually infinite groups, but so it goes.

Isabel,

I think Hardy and Ramanujan had some results about the asymptotic distribution of sizes of largest prime factors of integers. The case of irreducible representations is analogous in the sense that you are looking at the largest irreducible submodule of the group ring under the regular action. One big difference is that in number theory, primes add very poorly and multiply well, while representations add well and multiply nontrivially.

I don’t think that results on the largest prime factor of integers are due to Hardy and Ramanujan (results on this seem to go back to Dickman); but they certainly found the average _number_ of prime factors of integers (it’s about $\log\log n$).

Interestingly, for certain models of random (typically infinite) groups (with as many relations as generators; I’m not sure about those of Gromov) one can show that the abelianization tends to be finite with very large probability, and that its size behaves then more or less like a “random” integer in terms of number of prime factors.

For representations, it seems that there hasn’t been much detailed work on the maximal degree of an irreducible representation, even for non-random groups. I once tried to find this in the literature for groups like GL(n,F_q), where n is fixed, or for symplectic groups, but without success. The answer is pretty intuitive, however: for GL(n), there is a large family (the “principal series”) of representations, containing a positive proportion of all representations, of the same maximal degree, namely the prime-to-p part of the order of GL(n,F_q) divided by (q-1)^n; this is about q^{(n^2-n)/2}. For GSp(2g), the group of symplectic similitudes, the maximal degree is given by the same formula with (q-1)^n replaced by (q-1)^{g+1}. (These results hold for q large enough in terms of n or g.)
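The orders involved are easy to compute, so the formula in the comment can be evaluated numerically; a small Python sketch (function names are mine), using #GL(n, F_q) = q^{n(n-1)/2} · prod_{i=1..n} (q^i − 1):

```python
def gl_order(n, q):
    """#GL(n, F_q) = q^{n(n-1)/2} * prod_{i=1}^{n} (q^i - 1)."""
    prod = 1
    for i in range(1, n + 1):
        prod *= q**i - 1
    return q**(n * (n - 1) // 2) * prod

def principal_series_degree(n, q):
    """Prime-to-p part of #GL(n, F_q) divided by (q - 1)^n, the
    generic principal-series degree from the comment above."""
    prod = 1
    for i in range(1, n + 1):
        prod *= q**i - 1
    return prod // (q - 1)**n

n, q = 3, 5
print(gl_order(n, q))                 # 1488000
print(principal_series_degree(n, q))  # 186
print(q**((n * n - n) // 2))          # 125, the q^{(n^2-n)/2} heuristic
```

For n = 3, q = 5 the exact degree (186) and the heuristic (125) agree up to a bounded factor, as the comment suggests.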

This post raises issues that have been investigated by algorithmic group theorists such as Laszlo Babai. Because of a paper that I coauthored that was sent to his journal, Babai pressed me to make the following definition: say that a sequence of groups is “explicit” if each group is defined by a group law on the integers from 1 to N, together with an algorithm to compute the group law in time polynomial in log N. Say that two explicit forms of a group (actually of a sequence of groups) are equivalent if there is a bijection between them that can be computed in time polynomial in log N in both directions. This is a reasonable computational abstraction of representing permutation groups by permutations and matrix groups by matrices.

Babai remarked that explicit groups are probably not unique up to equivalence. For example, the cyclic group on p-1 elements has a second explicit form given by multiplication in F_p. If the discrete logarithm problem is hard, then one direction of the bijection is hard to compute.
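A toy version of this example, with p small enough that the brute-force discrete log is instant (the names and the choices p = 101, g = 2 are mine; 2 is a primitive root mod 101):

```python
# Two "explicit forms" of the cyclic group of order p - 1:
# (a) addition mod p - 1, and (b) multiplication of nonzero residues
# mod p. A primitive root g gives the bijection k -> g^k; the forward
# direction is fast (square-and-multiply via pow), while the inverse
# is the discrete logarithm, done here by brute force.
p = 101          # small prime, so brute force is fine
g = 2            # a primitive root mod 101

def additive_law(a, b):
    return (a + b) % (p - 1)

def multiplicative_law(x, y):
    return (x * y) % p

def to_multiplicative(k):
    return pow(g, k, p)            # fast: O(log k) multiplications

def to_additive(x):
    """Discrete log base g mod p -- brute force, exponential in log p."""
    for k in range(p - 1):
        if pow(g, k, p) == x:
            return k
    raise ValueError("not a unit mod p")

# The bijection intertwines the two group laws:
a, b = 17, 55
assert to_multiplicative(additive_law(a, b)) == \
       multiplicative_law(to_multiplicative(a), to_multiplicative(b))
```

At cryptographic sizes of p the forward map stays cheap while no efficient inverse is known, which is exactly the asymmetry Babai’s remark points to.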

He also emphasized that there is no known explicit form for all finite groups. It is plausible that an explicit form exists, but the problem is open even for solvable groups. (I’m not sure about nilpotent groups; I would guess the same is true for those.) For solvable groups there is a quasipolynomial-time explicit form, and this is a recent and highly non-trivial result. What makes the problem difficult is the length of the composition series. If the composition series has bounded length, then standard algebra gives you an explicit form, but the exponent of the computation time goes up with the length.

Based on this experience, you could imagine a complexity for finite groups that begins with the length of the composition series, and then looks at the complexity of each term. Or you could look at the length of a chain of non-normal subgroups and also somehow use the complexity of the consecutive pairs. Both of these are reasonably well-motivated by algorithmic group theory.

There is some relatively recent work by R. A. Wilson and others about explicit computations in the Monster, where you can get answers without resorting to multiplying 5 GB matrices. There should be some notion of complexity of the Monster such that this work lowers an upper bound, but it doesn’t quite fall into the regime of Greg’s comment, unless you consider something like the sequence of n-fold products of the Monster and do finer analysis of constants.

I just realized that the last sentence of my last comment doesn’t make much sense, so I’ll try to salvage something from it. Positive integers have unique multiplicative decompositions into primes, but this behaves poorly under addition. Essentially all structure is local, and there are some well-known theorems and conjectures making this precise. Representations have unique “additive” decomposition into irreducibles, and this behaves somewhat better under tensor product, because the distributive law is working in the right direction.

Explicit algorithmic computations with the monster group, and analogously with the Lie algebra or Lie group E8, are a weird topic in computational algebra because (a) they are hard, but (b) they are not part of an infinite sequence. It’s a widespread convention to ignore constant factors in defining time or space complexity, but in these two examples, the constant factors are the whole problem!

It does not help much to pass to a direct product of copies of the monster group.

Still, you could ask about the smallest Boolean circuit to multiply two group elements in the monster. Or you could throw in gates to express arithmetic in Z/p for various small p. It would be a messy problem, but one that might lead to some kind of publishable result.

In Kolmogorov complexity theory, a randomly chosen bit string almost always has high complexity (in fact, bit strings are called “random” if they have no compact description). This fact is proven using a simple counting argument, but I’m wondering whether it extends to groups using any of the definitions you give.
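The counting argument is short enough to spell out numerically (the parameters n and k are arbitrary choices of mine):

```python
# At most 2**(n - k) - 1 binary strings have a description (program)
# shorter than n - k bits, since there are only that many shorter
# bit strings in total; so the fraction of n-bit strings compressible
# by k or more bits is below 2**(-k).
n, k = 64, 10
strings = 2**n
short_descriptions = 2**(n - k) - 1  # programs of length 0 .. n-k-1
# Exact integer comparison: the compressible fraction is below 2**(-k).
print(short_descriptions * 2**k < strings)  # True
```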

I think this is a great post! Discussing “how complicated groups are,” or what the appropriate measures of complexity in group theory are, seems like a great topic for a blog discussion, and a place where a blog discussion can really have an edge over traditional forms of mathematical communication.