SF&PA: One more example

Sorry for the delay, Scott’s been in town and so I’ve been too busy doing actual research to get much blogging done. This post was also a little delayed because I didn’t understand this example as well as I’d hoped. I still don’t fully grok it so maybe you all can help me out. If you don’t follow this example, don’t worry, we’ll be moving on to pictures in the next post.

Take a finite group G. Let C[G] denote the group algebra and let F(G) denote the algebra of functions on G with pointwise product (that is, F(G) has a basis of elements of the form $\delta_g$, and $\delta_g \delta_h$ is 0 unless $h=g$, in which case it is $\delta_g$). Recall that for both C[G] and F(G) there is a notion of tensor product of modules (for the former, g acts on a tensor product via $g \otimes g$, whereas for the latter, $\delta_g$ acts on a tensor product via $\sum_{xy=g} \delta_x \otimes \delta_y$).
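To make the two products concrete, here's a minimal Python sketch (my own notation, not from any standard package) of C[G] with convolution versus F(G) with the pointwise product, for G = Z/3 written additively:

```python
# A small toy model: functions G -> C as dicts, for G = Z/3 = {0, 1, 2}.
G = [0, 1, 2]

def delta(g):
    """Basis element delta_g."""
    return {h: (1 if h == g else 0) for h in G}

def convolve(a, b):
    """Product in the group algebra C[G]: delta_g * delta_h = delta_{g+h}."""
    out = {g: 0 for g in G}
    for x in G:
        for y in G:
            out[(x + y) % 3] += a[x] * b[y]
    return out

def pointwise(a, b):
    """Product in F(G): delta_g * delta_h = 0 unless g = h."""
    return {g: a[g] * b[g] for g in G}

# In C[G]: delta_1 * delta_2 = delta_0.  In F(G): delta_1 * delta_2 = 0.
assert convolve(delta(1), delta(2)) == delta(0)
assert pointwise(delta(1), delta(2)) == {g: 0 for g in G}
assert pointwise(delta(1), delta(1)) == delta(1)
```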

We define a bimodule category called the “group subfactor” as follows:

• The A-A objects are C[G]-modules
• The A-B objects are vector spaces (thought of as representations of the trivial group)
• The B-A objects are vector spaces (thought of as representations of the trivial group)
• The B-B objects are F(G)-modules

Now we need to define the four flavors of tensor products. All of these are of the following form: first push things around (using restriction/induction) until they live where the tensor product is supposed to live, then take the tensor product there.

For example if we take an A-B object (i.e. vector space) V and a B-A object (i.e. vector space) W, their tensor product should be a C[G]-module. To do this we first induce V and W up to G getting two C[G]-modules, then we take their tensor product as C[G]-modules. In particular, if V and W are 1-dimensional, then their tensor product is the regular representation. For another example, suppose we want to take the tensor product of an A-A object (that is, a representation V) and an A-B object (that is, a representation of the trivial group). The answer is supposed to be an A-B object, so we first turn V into a vector space by restriction (forgetting the C[G]-module structure) and then take the tensor product. One last example: we want to take the tensor product of two vector spaces and get an F(G)-module. So first we take their tensor product as vector spaces, and then we push that up to an F(G)-module by tensoring with the regular representation.

This whole tensor product process is a bit more confusing than it ought to be. Can anyone figure out what’s really going on here?

In order to make this a subfactor category, I also need to fix a simple A-B object called X which tensor-generates. That’s easy here since there’s only one such simple: the one-dimensional vector space. It is easy to see that this tensor generates, as $X \otimes X^*$ and $X^* \otimes X$ are just the regular representations of the corresponding rings and thus contain all the simples.

What is the dimension of X? (I’m going to leave the definition of dimension vague for the moment.) The A-A part of the category is just the representation theory of G, so we know the dimensions of objects there. In particular, the regular representation $X \otimes X^*$ has dimension $\#G$. Hence X has dimension $\sqrt{\#G}$.
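As a sanity check on that dimension count, here is a small hedged sketch for G = S_3 (my choice of example), using the standard fact that the regular representation decomposes with each irreducible appearing with multiplicity equal to its dimension:

```python
import math

# For G = S_3 the irreducible dimensions are 1, 1, 2 (trivial, sign, standard).
# The regular representation X (x) X* then has dimension sum d_i^2 = #G = 6,
# so X itself should have dimension sqrt(6).
irrep_dims = [1, 1, 2]
order_G = 6  # #S_3

assert sum(d * d for d in irrep_dims) == order_G  # regular rep has dim #G
dim_X = math.sqrt(order_G)
print(dim_X)  # sqrt(6), roughly 2.449
```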

Since I haven’t told you much about how actual subfactor people think about subfactors, let me sketch the construction in this case. Let N be the von Neumann completion of the countable tensor product of 2×2 matrix rings; N is a hyperfinite $II_1$ factor. Pick your favorite faithful action of G on a set, and use that action to give an outer action of G on N by permuting the tensor factors in N. Now consider the fixed points $M = N^G$. Because the action is outer you can prove that M is also a factor, and since G is finite the inclusion $M \subset N$ has finite index. This is the group subfactor.
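Here is a hedged finite-dimensional toy of the fixed-point step (mine, not the actual von Neumann construction; in particular, in finite dimensions every automorphism of a matrix algebra is inner, so this only illustrates the counting, not outerness): Z/2 acts on M_2 (x) M_2 by swapping the two tensor factors, and we compute the dimension of the fixed-point subalgebra.

```python
import numpy as np

# Swap unitary S on C^2 (x) C^2: S(e_i (x) e_j) = e_j (x) e_i, with basis
# ordering e_i (x) e_j <-> index 2*i + j.  The flip acts on X in M_4 by
# X -> S X S (note S is its own inverse).
S = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        S[2 * j + i, 2 * i + j] = 1

# Averaging projection onto the fixed points, as a linear map on the
# 16-dimensional space M_4.
P = np.zeros((16, 16))
for k in range(16):
    X = np.zeros((4, 4))
    X.flat[k] = 1
    P[:, k] = ((X + S @ X @ S) / 2).flatten()

fixed_dim = round(np.trace(P))  # rank of an idempotent equals its trace
print(fixed_dim)  # 10 = 4*5/2, the flip-symmetric part of M_2 (x) M_2
```

So the fixed points cut the 16-dimensional algebra down to a 10-dimensional subalgebra; in the honest infinite tensor product, the analogous fixed points $N^G$ give the subfactor M.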

5 thoughts on “SF&PA: One more example”

1. Reid Barton says:

“For example if we take an A-B object (i.e. vector space) V and a B-A object (i.e. vector space) W, their tensor product should be a C[G]-module. To do this we first induce V and W up to G getting two C[G]-modules, then we take their tensor product as C[G]-modules.”

Just to clarify, you mean to take the tensor product over C[G] here? Otherwise you won’t get “In particular, if V and W are 1-dimensional, then their tensor product is the regular representation.” (Equivalently, you could do something like what you did for F(G).)

Whereas for two A-A objects you take the tensor product over C, as you described at the beginning? (So in particular, “dimension over C” behaves as expected.)

2. Arg, no, I *meant* tensor over C, but I clearly made a mistake somewhere. You must want to take the tensor product of the vector spaces before you induce. I’m trying to figure out the right way to make this statement from a source that only says how to do it for simple objects.

3. Hi Noah; just a quick question to help me understand the relationship between planar algebras and fusion categories. On your website (by the way, the atlas of unitary fusion categories sounds fascinating) you say

“A planar algebra is a combinatorial model for a pivotal fusion category…The relationship between planar algebras and fusion categories is that the planar algebra describes the Hom spaces between tensor powers of a chosen fundamental object in the fusion category.”

I can see how that works, and it’s great, but I’m confused about the associators… how do I see them in the planar algebra picture? I’d like to know if the planar algebra framework gives an alternative framework to understand things like the 6j symbols. Is there a precise statement somewhere about the relationship between planar algebras and pivotal fusion categories?

4. I’ve been meaning to get back to this series of posts, but I was very busy teaching at Mathcamp, and now I’m trying to finish papers since I’m going on the job market.

Here’s the rapid explanation. Planar algebras come from a pivotal category together with a choice of (tensor-generating) object. A single strand represents the tensor generator, while its tensor powers are given by several parallel strands. Other simple objects only appear here in the guise of projections (in other words, you have to take the Karoubi envelope to recover the whole category).

So explicitly there are no associators in the theory, because the only objects are V^(x)k, and for tensor products of those the associator is trivial. However, suppose you want to understand tensor products of projections. Then you have some 3j (aka Clebsch-Gordan) elements of the planar algebra, which give explicit maps A (x) B -> C (where A, B, and C are projections in the planar algebra). Using these you get 6j symbols from tetrahedra.

As for references, it’s not all together in one place yet (Vaughan Jones is writing a book this year, though), but good places to start looking are Scott Morrison’s thesis or Kuperberg’s spiders paper. Another good introduction (which goes the other way, from the planar algebra to the tensor category) is Scott, Emily, and my recent preprint on D_2n.

More later, when I’ve finished writing some papers!

5. Ok, thanks a lot. Just from perusing the preprint on D_2n I can see this framework certainly brings a whole lot of new ideas to the table and a new perspective on things. I think it will take me a long while to get it all straight in my head. It seems to me that the planar algebra paradigm -does- bring new insight into the role of the associators in a fusion category. Shooting from the hip, it looks as if the two pictures excel in different areas: it’s easiest to see the fusion rules in the fusion category perspective, but other things (such as, I suspect, the associators) might be clearest in the planar algebra framework.
