Let $\mathbb{F}_q$ denote the finite field with $q$ elements. Let $\mathcal{F}$ denote the set of complete flags in $\mathbb{F}_q^n$. Such a flag is a chain $V_0 \subset V_1 \subset \cdots \subset V_n$ of subspaces of $\mathbb{F}_q^n$, with $\dim V_i = i$, $V_0 = 0$ and $V_n = \mathbb{F}_q^n$. For example, if $n = 3$, an element of $\mathcal{F}$ is a quadruple $(V_0, V_1, V_2, V_3)$, where $V_2$ is a plane through the origin and $V_1$ is a line through the origin, with $V_1$ contained in $V_2$. Of course, including $V_0$ and $V_3$ is redundant, as $V_0$ is always zero and $V_3$ is always $\mathbb{F}_q^3$, but we will see that it is convenient to have them around. Now consider the following operation: forget $V_i$ and replace it by one of the other possible $i$-planes lying between $V_{i-1}$ and $V_{i+1}$. There are $q$ such choices, and we pick uniformly at random. Note that we are **forbidden** to just put back $V_i$ where it was. We will denote this operation by $p_i$.
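As a concrete sanity check (a sketch of my own, not from the post), one can enumerate the complete flags over the field with two elements when $n = 3$, and confirm both the flag count and the fact that each $p_i$ really has exactly $q$ choices:

```python
from itertools import combinations

# Sanity check over F_2 with n = 3.  My own encoding: vectors in F_2^3 are
# the integers 0..7 (bitmasks), and vector addition is XOR.

def span(vectors):
    """Closure of a set of F_2-vectors under addition (XOR)."""
    s = {0}
    for v in vectors:
        s |= {v ^ w for w in s}
    return frozenset(s)

# All lines (1-dimensional subspaces) and planes (2-dimensional subspaces).
lines = {span([v]) for v in range(1, 8)}
planes = {span([v, w]) for v, w in combinations(range(1, 8), 2) if w not in span([v])}

# Dropping the redundant V_0 and V_3, a complete flag is a pair (line, plane).
flags = [(L, P) for L in lines for P in planes if L <= P]
print(len(lines), len(planes), len(flags))  # 7 lines, 7 planes, 21 flags

# Each flag has exactly q = 2 ways to replace its line by a *different*
# line inside the same plane, as claimed.
assert all(sum(1 for L2 in lines if L2 <= P and L2 != L) == 2 for (L, P) in flags)
```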

Let’s do an example: starting with a given flag, in the case $n = 3$, what is the effect of $p_1 p_2 p_1$? Note that we draw our pictures projectively, so we visualize a flag as a pair (point, line through the point) in the projective plane. Let’s call this pair $(p, \ell)$. The first step moves the point along the line to a new point $p'$. The second step pivots the line about the point $p'$ to a new line $\ell'$. And the final step slides the point along the pivoted line to its final resting place at $p''$.

(Click the image for a larger version.) Now, if we are given $(p, \ell)$ and $(p'', \ell')$, what is the probability that $p_1 p_2 p_1$ takes us from $(p, \ell)$ to $(p'', \ell')$?

**Exercise:** This probability is zero if $\ell = \ell'$, or if any two of the points $p$, $\ell \cap \ell'$ and $p''$ coincide. If none of these unfortunate events occur, then the probability is $1/q^3$. (Hint: note that we must take $p' = \ell \cap \ell'$ to have any chance of success.)

Now, here is the interesting thing. Run through the same analysis for $p_2 p_1 p_2$. The intermediate stages of the process will look completely different, but the end effect is precisely the same: probability zero in the degenerate cases mentioned above, and probability $1/q^3$ the rest of the time.

So we can write the equation $p_1 p_2 p_1 = p_2 p_1 p_2$ and, more generally, $p_i p_{i+1} p_i = p_{i+1} p_i p_{i+1}$. If you are uncomfortable talking about an equality of “processes”, you may interpret $p_i$ as a linear operator on the space of probability distributions on $\mathcal{F}$. There are also two other relations between the $p_i$. The first, which is completely obvious, is that $p_i p_j = p_j p_i$ when $|i - j| \geq 2$. The second, which takes only a moment’s more thought, says that $p_i^2 = \frac{1}{q} + \frac{q-1}{q} p_i$. In other words, if we do $p_i$ twice, there is a $1/q$ chance that we will get back the $i$-plane we started with and a $(q-1)/q$ chance that we won’t. Noah Snyder and John Baez can talk for hours about how this shows up in the tensor category of representations of a quantum group, so I won’t. Instead, I’ll explain some of the consequences of this for combinatorialists. At the end, I’ll bring up a very natural question that Jim Propp asked me, to which I don’t have a good answer yet. If no one writes in with a good solution, then I’ll pull together a sequel explaining my failed attempts.
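These relations can be checked by brute force. The sketch below (my own, taking $q = 2$ and $n = 3$; the bitmask encoding of $\mathbb{F}_2^3$ is an implementation choice) builds the transition matrices of $p_1$ and $p_2$ on the 21 flags and verifies the braid relation and the quadratic relation exactly:

```python
from fractions import Fraction
from itertools import combinations

def span(vectors):
    """Closure of a set of F_2-vectors (encoded as bitmasks) under XOR."""
    s = {0}
    for v in vectors:
        s |= {v ^ w for w in s}
    return frozenset(s)

lines = {span([v]) for v in range(1, 8)}
planes = {span([v, w]) for v, w in combinations(range(1, 8), 2) if w not in span([v])}
flags = [(L, P) for L in lines for P in planes if L <= P]
index = {f: k for k, f in enumerate(flags)}
q, N = 2, len(flags)

def p(i):
    """Column-stochastic transition matrix of p_i on the 21 flags."""
    M = [[Fraction(0)] * N for _ in range(N)]
    for (L, P) in flags:
        if i == 1:  # replace the line by one of the q other lines in P
            targets = [(L2, P) for L2 in lines if L2 <= P and L2 != L]
        else:       # replace the plane by one of the q other planes containing L
            targets = [(L, P2) for P2 in planes if L <= P2 and P2 != P]
        for t in targets:
            M[index[t]][index[(L, P)]] += Fraction(1, q)
    return M

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(N)) for c in range(N)] for r in range(N)]

p1, p2 = p(1), p(2)
assert mul(p1, mul(p2, p1)) == mul(p2, mul(p1, p2))  # p_1 p_2 p_1 = p_2 p_1 p_2
expected = [[Fraction(1, q) * (r == c) + Fraction(q - 1, q) * p1[r][c] for c in range(N)]
            for r in range(N)]
assert mul(p1, p1) == expected  # p_i^2 = 1/q + ((q-1)/q) p_i
print("relations hold")
```

(For $n = 3$ there are only two generators, so the commuting relation for $|i - j| \geq 2$ has nothing to check.)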

All right, first of all, I have the words Hecke Algebra in the title and I haven’t defined it yet. Let’s fix that right now: The Hecke algebra is the algebra of operators on probability distributions on $\mathcal{F}$ generated by the $T_i := q p_i$. That factor of $q$ is just there to get rid of the denominators. (Actually, $T_i$ is also more natural than $p_i$ from the general Hecke perspective that John Baez and Jim Dolan are presenting over at the $n$-Category Café. But $p_i$ is both a little easier to think about, and a lot more natural for Jim’s question.) We can also describe the Hecke algebra as the algebra with generators $T_1$, $T_2$, …, $T_{n-1}$ and the relations: $T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}$, $T_i T_j = T_j T_i$ when $|i - j| \geq 2$ and $T_i^2 = q + (q-1) T_i$.

A priori, the Hecke algebra could be huge! One could imagine that it has dimension $|\mathcal{F}|^2$. In fact, it only has dimension $n!$. (Also huge, but not as huge.) To see why this might be the case, let’s imagine plugging in $q = 1$. This doesn’t make sense if we use the definition in terms of operators on $\mathcal{F}$, but the definition in terms of generators and relations works just fine. Our relations now read $T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}$, $T_i T_j = T_j T_i$ when $|i - j| \geq 2$ and $T_i^2 = 1$. Let $s_i$ be the element of the permutation group $S_n$ which swaps $i$ and $i+1$, leaving $1$, $2$, …, $i-1$, $i+2$, …, $n$ where they are. Then the $s_i$ satisfy the same relations. It is a fundamental result in Coxeter theory that the group generated by $s_1$, $s_2$, …, $s_{n-1}$ subject to these relations is $S_n$. So, when $q = 1$, the Hecke algebra is isomorphic to the group algebra of $S_n$. In other words, if the relation $s_{i_1} s_{i_2} \cdots s_{i_k} = s_{j_1} s_{j_2} \cdots s_{j_l}$ holds in $S_n$, then the corresponding relation $T_{i_1} T_{i_2} \cdots T_{i_k} = T_{j_1} T_{j_2} \cdots T_{j_l}$ holds in the Hecke algebra (at $q = 1$).

When $q$ is not 1, life is trickier. However, we have the following amazing result: Suppose that the words $s_{i_1} s_{i_2} \cdots s_{i_k}$ and $s_{j_1} s_{j_2} \cdots s_{j_k}$ are both *reduced*. This means that $s_{i_1} s_{i_2} \cdots s_{i_k} = s_{j_1} s_{j_2} \cdots s_{j_k}$ and there is no shorter way to write this permutation as a product of $s_i$‘s. Then the relation $T_{i_1} T_{i_2} \cdots T_{i_k} = T_{j_1} T_{j_2} \cdots T_{j_k}$ is still true, for arbitrary $q$. This gives us a way, for any element $w$ of $S_n$, to define a corresponding element $T_w$ in the Hecke algebra: simply take the product $T_{i_1} T_{i_2} \cdots T_{i_k}$ where $s_{i_1} s_{i_2} \cdots s_{i_k}$ is a reduced word for $w$. Furthermore, it turns out that the $T_w$ are a basis for the Hecke algebra.
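Here is a small model of this (my own sketch, for $S_3$, with generators 0-indexed as a coding convention). Multiplication in the $T_w$ basis uses the standard rule that follows from the quadratic relation: $T_i T_w = T_{s_i w}$ when the length goes up, and $T_i T_w = q T_{s_i w} + (q-1) T_w$ when it goes down.

```python
from fractions import Fraction

q = Fraction(5)          # an arbitrary test value; the identities hold for any q
n = 3
e = tuple(range(n))      # identity permutation, one-line notation

def s(i, w):
    """Left-multiply w by the simple transposition swapping the values i, i+1."""
    return tuple(i + 1 if x == i else i if x == i + 1 else x for x in w)

def length(w):
    """Number of inversions = Coxeter length."""
    return sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])

def T_mul(i, elt):
    """Left-multiply a Hecke element {w: coeff} by T_i, using
    T_i T_w = T_{s_i w} if the length goes up, else q T_{s_i w} + (q-1) T_w."""
    out = {}
    for w, c in elt.items():
        sw = s(i, w)
        if length(sw) > length(w):
            out[sw] = out.get(sw, 0) + c
        else:
            out[sw] = out.get(sw, 0) + q * c
            out[w] = out.get(w, 0) + (q - 1) * c
    return {w: c for w, c in out.items() if c != 0}

def word(letters):
    """The product T_{i_1} T_{i_2} ... T_{i_k}, expanded in the T_w basis."""
    elt = {e: Fraction(1)}
    for i in reversed(letters):
        elt = T_mul(i, elt)
    return elt

# Two reduced words for the longest element give the same element T_{w_0} ...
assert word([0, 1, 0]) == word([1, 0, 1])
# ... while a nonreduced word expands in the T_w basis via the quadratic relation.
assert word([0, 0]) == {e: q, s(0, e): q - 1}
```

The second assertion is exactly $T_i^2 = q + (q-1) T_i$ written out in the $T_w$ basis; expressing longer nonreduced products this way is the research problem mentioned below.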

Expressing nonreduced products of the $T_i$‘s in the basis of $T_w$‘s is an area of active combinatorial research. In the probabilistic model at the start of this section, we can describe the problem this way: Suppose we do a Markov walk on $\mathcal{F}$, performing one $p_i$ after another. Can we give an explicit description of how this walk mixes? (Of course there are many variants of this problem, because I haven’t told you how to select the sequence of $p_i$‘s which we use.)

This brings us to Jim’s question. In the Markov description above, there is no interpretation of sending $q$ to $1$. Jim asked whether there was an alternate Markov process we could come up with, which modeled the Hecke algebra, but in which $q$ was a continuous parameter. (Or, rather, this was my interpretation of a somewhat less specific question.) Presumably, as $q$ went to $1$, this process would become the random walk on the Cayley graph of $S_n$. I can do this in a stupid way, by defining a certain orientation on said Cayley graph and defining a walk which is random except that, when the walk would take an edge which is directed against it, it stays put instead with probability $(q-1)/q$. But what I think would be interesting would be to define a Markov process on $\mathcal{F}$ which, as $q$ goes to $1$, becomes the Cayley walk on the permutation flags.
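To make the “stupid way” concrete, here is a sketch (my reconstruction; in particular, the stay-probability $(q-1)/q$ on a backwards edge is my reading, chosen to match $p_i^2 = \frac{1}{q} + \frac{q-1}{q} p_i$). It builds the exact transition matrix of this walk on $S_3$ and checks that its stationary distribution is proportional to $q^{\ell(w)}$:

```python
from fractions import Fraction
from itertools import permutations

# Walk on the Cayley graph of S_3: pick a simple transposition uniformly;
# always cross an edge directed with the walk (length goes up), but cross a
# backwards edge only with probability 1/q, staying put with probability (q-1)/q.

q, n = Fraction(3), 3
perms = list(permutations(range(n)))
idx = {w: k for k, w in enumerate(perms)}
N, gens = len(perms), n - 1

def length(w):
    return sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])

def s(i, w):  # swap the values i and i+1 in w
    return tuple(i + 1 if x == i else i if x == i + 1 else x for x in w)

M = [[Fraction(0)] * N for _ in range(N)]  # column-stochastic: M[to][from]
for w in perms:
    for i in range(gens):
        sw = s(i, w)
        if length(sw) > length(w):                      # edge directed with the walk
            M[idx[sw]][idx[w]] += Fraction(1, gens)
        else:                                           # edge directed against it
            M[idx[sw]][idx[w]] += Fraction(1, gens) / q
            M[idx[w]][idx[w]] += Fraction(1, gens) * (q - 1) / q

# Stationary distribution proportional to q^length(w): each extra inversion
# makes a permutation exactly q times more likely.
Z = sum(q ** length(w) for w in perms)
pi = [q ** length(w) / Z for w in perms]
assert all(sum(M[r][c] * pi[c] for c in range(N)) == pi[r] for r in range(N))
print("stationary distribution is proportional to q^length(w)")
```

At $q = 1$ the stay-probability vanishes and this degenerates to the uniform Cayley walk, as desired.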

Any ideas?

Hi David,

Your question is related to something that Greg Kuperberg asked me once.

I don’t know a solution but here is an idea which I’m sure you have already thought of. Also, I don’t really know what a Markov process means, but I bet what I will describe is close.

Fix $n$, and let $F$ be the variety of complete flags in $\mathbb{C}^n$.

Consider the space D of constructible functions on F.

Consider the locus $Y_i \subset F \times F$ which consists of pairs of flags which agree at every spot other than the $i$th spot and disagree there. So $Y_i$ has two projections to $F$.

Then define an operator $P_i : D \to D$ by $P_i(f) = (\pi_2)_*(\pi_1^* f)$, where $\pi_1, \pi_2 : Y_i \to F$ are the two projections.

Here the pushforward is “integration of constructible functions”, which just means that you look at the Euler characteristics of fibres times the values of the function upstairs (i.e., first you break the domain into regions where the function is constant, then look at the Euler characteristics of these regions intersected with the fibres).

I think that these $P_i$ will satisfy the braid relations. Of course, we don’t have a “q” in the picture, so it is hard to say whether they satisfy the relation involving $q$. In a sense the $q$ would/could come from looking at the Poincaré polynomial rather than the Euler characteristic. In good cases, the Poincaré polynomial specializes to the number of points over the finite field when you plug in $q$. However, some of the fibres might be bad, so this won’t work.

Is it possible that some kind of motivic picture could get around these difficulties?

Hey Joel.

First a quick note — you never define $Y_i$. I’m going to assume that it is the constructible set where we insist that the $i$-plane changes. That matches my conventions above, and gives the prettiest formulas.

These $P_i$ will indeed satisfy the braid relations. They also satisfy $P_i^2 = 1$.

That’s because, when we do $P_i$ twice, the space of ways to get back where we started is $\mathbb{C}$, with Euler characteristic one, while the space of ways to get to a given other point is $\mathbb{C}^*$, with Euler characteristic zero. My understanding is that the theory of motives exactly fixes this; that the motivic integral of $\mathbb{C}$ is not 1 but a formal parameter $\Lambda$.

I don’t really like this solution, though, because I don’t get to think of actual points smearing around in the flag manifold. (I tried to present something like this to Jim, but I did a really bad job of it, talking about pushing forward sheaves instead of just saying “integrate with respect to Euler characteristic”.) What I’ve been hoping to do is take some actual measure on $F \times F$, concentrated on $Y_i$, and replace Euler characteristic integration with ordinary integrals. Then, as $q$ went to one, we could let our measure approach a delta function on Allen’s non-complex Weyl group action.

Oops. I did forget to define $Y_i$. I’ve edited my original comment — now it says that the flags agree everywhere except at the $i$-plane.

David-

What’s so bad about random walks on Coxeter groups? One can think of a longer permutation as having “lower energy” and of the edges going down in the Bruhat order as measuring how big the energy gap is. That is, one could say something like: let the energy of a permutation $\pi$ be $-\ell(\pi)$. Then $p_i$ tries to switch $\pi(i)$ and $\pi(i+1)$ by applying an amount of energy between $[0, q/(q-1)]$ (and then stopping once it reaches an equilibrium point). If $\pi(i) < \pi(i+1)$, then this pushes in an unstable direction, and the numbers will switch no matter how much energy you apply. If $\pi(i) > \pi(i+1)$, then the amount of energy needs to be at least 1, or you roll back to your starting point.

Perhaps this isn’t physically realistic, but it could still be explained to combinatorialists without making them tear their hair out (as opposed to bringing up mixed Hodge structures).
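For what it’s worth, the arithmetic behind the energy window $[0, q/(q-1)]$ checks out: if the applied energy is drawn uniformly from that interval (uniformity is my reading), then the chance of clearing an energy gap of 1 is exactly $1/q$. A quick verification of my own:

```python
from fractions import Fraction

# If the applied energy is uniform on [0, q/(q-1)], then the chance it exceeds
# the gap of 1 (enough to swap a descending pair) is 1/q -- matching the
# q-deformed walk's probability of moving down in Bruhat order.
for q in (Fraction(2), Fraction(3), Fraction(7), Fraction(100)):
    top = q / (q - 1)
    assert (top - 1) / top == 1 / q
print("P(energy >= 1) = 1/q")
```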

Joel-

Of course, the categorified version of this picture works great. Instead of functions, consider the derived category of perverse sheaves. Then this pull push is going to give you the action of the shuffling (or twisting, depending on which side of Koszul duality you like) functors on derived category O (the braid group action of interest to those who want to construct HOMFLY homology). However, here the q is coming from mixed Hodge structures. I don’t think those behave very well with respect to decategorification (which is why everything works better in characteristic p).

I wrote things in terms of constructible functions in order to make it sound more like probability. You start with a constructible function, which you can think of as a kind of probability distribution (or at least a weighted count of points on the flag variety). Then you hit it with $P_i$ and get another “probability distribution”.

Yes, I agree with Ben that mixed is the key word for getting the “q” out of the complex geometry.

Hi Dave:

You’re probably aware of the (unique) paper of Diaconis-Ram. If what they are studying specializes to what you’re doing, I vaguely recall that things should conjecturally mix in $n^2$ steps.