Rhombus tilings and an over-constrained recurrence

I recently visited Robin Pemantle and his student Peter Du at UPenn. We talked about tilings of planar regions, generating functions and asymptotics. Towards the end, we talked a bit about a very classical example, which is what I want to tell you about today.

In most planar tiling problems, the goal is an asymptotic analysis for tilings of large regions, because there isn’t enough structure to do better. This is the approach taken in the beautiful work of Kenyon, Okounkov, and collaborators [1, 2, 3]. Sometimes, there is enough structure to give exact solutions with explicit generating functions. This is the situation with Aztec diamonds, fortresses, and several other examples [4, 5]. The central name here is Jim Propp [6, 7], who has developed this theory together with many undergraduate and graduate students (including me).

And then there is one case: rhombus tilings of a hexagon. These have almost too much structure; more structure than one would expect to be compatibly possible. In this post, I want to talk about this example. In particular, I want to ask you a question which I thought about a bit on the train ride back, and see whether any of you have thoughts.

A QUICK INTRODUCTION TO RHOMBUS TILINGS

Tile the plane with equilateral triangles, and cut out a hexagonal region whose opposite sides are parallel and of the same length. We’ll call the lengths of the sides r, s, t, r, s and t. This hexagon contains equally many up and down triangles, so we can hope to tile it with unit rhombi, each of which covers two adjacent triangles. We write M(r,s,t) for the number of such tilings.

Hexagon with (r,s,t)=(3,2,4)

If t=0, then the hexagon degenerates to a parallelogram, and there is only one tiling. If t=1, then the horizontal edge has length 1. In that case, the two types of horizontal rhombi form a path from the bottom of the hexagon to the top (colored blue). It is easy to see that there is one matching for each possible path, so M(r,s,1) = \binom{r+s}{r}.

One of the 6!/3!3! tilings of this hexagon

If t=2, this turns into the problem of counting noncrossing pairs of paths; by a happy coincidence, this topic was recently discussed on Math Overflow. More generally, you need to count t-tuples of noncrossing paths, and you wind up computing t \times t determinants. I’m going to skip over these and go straight to a wonderful formula due to MacMahon.

M(r,s,t) = \prod_{i=1}^r \prod_{j=1}^s \prod_{k=1}^t \frac{i+j+k-1}{i+j+k-2}. \quad (*)

Of course, there is a lot of cancellation here, so this can be rewritten in many ways.
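
For concreteness, here is a quick sanity check of (*) in Python, with exact rational arithmetic; the function name M below is just a label, and the last line confirms the t = 1 binomial count from above.

    from fractions import Fraction
    from itertools import product
    from math import comb

    def M(r, s, t):
        # MacMahon's product formula (*): number of rhombus tilings of the (r,s,t) hexagon
        val = Fraction(1)
        for i, j, k in product(range(1, r + 1), range(1, s + 1), range(1, t + 1)):
            val *= Fraction(i + j + k - 1, i + j + k - 2)
        return int(val)

    print(M(3, 2, 4))               # the hexagon pictured above
    print(M(3, 3, 1), comb(6, 3))   # t = 1 recovers binom(r+s, r) = 6!/(3!3!) = 20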

One can also give a bilinear recurrence for M(r,s,t), to which we’ll be returning throughout this post.

M(r,s,t) M(r-1,s-1,t) = M(r,s-1,t) M(r-1,s,t) +
           M(r,s,t-1) M(r-1,s-1,t+1) \quad (**).

Starting with the initial conditions M(r,s,0) = M(r,0,t) = M(0,s,t) = 1, we can recursively use (**) to compute M(1,1,t) for all t, then M(1,2,t) and M(2,1,t), then M(2,2,t), and so forth.
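
For what it’s worth, here is a sketch of that scheme in code: run (**) from the boundary values (memoized) and compare against the product formula (*). The helper names M_rec and M_prod are mine.

    from fractions import Fraction
    from functools import lru_cache
    from itertools import product

    def M_prod(r, s, t):
        # MacMahon's product formula (*)
        val = Fraction(1)
        for i, j, k in product(range(1, r + 1), range(1, s + 1), range(1, t + 1)):
            val *= Fraction(i + j + k - 1, i + j + k - 2)
        return val

    @lru_cache(maxsize=None)
    def M_rec(r, s, t):
        # boundary values M(r,s,0) = M(r,0,t) = M(0,s,t) = 1, then (**) solved for M(r,s,t)
        if r == 0 or s == 0 or t == 0:
            return Fraction(1)
        return (M_rec(r, s - 1, t) * M_rec(r - 1, s, t)
                + M_rec(r, s, t - 1) * M_rec(r - 1, s - 1, t + 1)) / M_rec(r - 1, s - 1, t)

    assert all(M_rec(r, s, t) == M_prod(r, s, t) for r, s, t in product(range(5), repeat=3))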

I believe it was Eric Kuo who first realized that (**) has a direct combinatorial meaning.

Kuo's interpretation of (**)

Each term in this “equation” involves two hexagons, plus some extra rhombi. The meaning is to take all the tilings of the red hexagon, and all the tilings of the blue hexagon, and superimpose them in all possible ways, together with the excess black rhombi. Then every superposition of rhombi appears with the same multiplicity on the left hand side and on the right.

Why is this important? Suppose you wanted to know the expectation of some statistic of a random tiling, for example the probability that a particular tile would be used. Then you can just take expectations of both sides of Kuo’s relation. Of course, in order to do this, you need to know the relative sizes of the two right-hand terms. This is easy, using the explicit formula (*): a fraction (r+s-1)/(r+s+t-1) of the superpositions come from the first term, and the remaining t/(r+s+t-1) come from the second.
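
Since that bit of bookkeeping is easy to get wrong, here is a quick check of the split against the product formula (*); the asserted fraction is the one just stated, and the helper M is the same product as before.

    from fractions import Fraction
    from itertools import product

    def M(r, s, t):
        # MacMahon's product formula (*)
        val = Fraction(1)
        for i, j, k in product(range(1, r + 1), range(1, s + 1), range(1, t + 1)):
            val *= Fraction(i + j + k - 1, i + j + k - 2)
        return val

    for r, s, t in product(range(1, 5), repeat=3):
        first = M(r, s - 1, t) * M(r - 1, s, t) / (M(r, s, t) * M(r - 1, s - 1, t))
        assert first == Fraction(r + s - 1, r + s + t - 1)  # so the second term contributes t/(r+s+t-1)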

This gives you a linear recurrence, a reasonably nice generating function and, we hope, a target to which we can apply Robin’s machinery. But that’s not what I want to talk about, because Peter and Robin are already thinking about it. Instead, I want to explain what I meant when I said that this example had more structure than you would think possible.

MUSINGS ABOUT THE RECURRENCE

In most tiling problems, you can’t find a simple recurrence like (**) at all. In the nice examples I talked about earlier, like Aztec diamonds, most of the above ideas work: you can reinterpret matchings as noncrossing paths, get a determinant, get some sort of product formula, and get a recurrence. It’s hard to recommend just one paper on this subject, but I like Greg Kuperberg’s exposition. (See Section 7 for the case considered here.)

But, in our case, (**) overdetermines the number of tilings! More precisely, the entire situation is symmetric in r, s and t. So, as well as (**), we have the variants of (**) obtained by permuting the coordinates. If we restrict r, s and t to lie between 0 and N, this gives N^3 variables M(r,s,t), subject to \approx 3 N^3 relations. If you were handed this set of quadratic equations, without being told where it came from, you would expect there to be no solutions! Similarly, running the trick I described above, we get more linear equations for the probability of using a given tile than we have variables.

For easy reference, I’ll write out the recurrence and the variants obtained by permuting the variables, now writing F for an arbitrary solution rather than M:

F(r,s,t) F(r-1,s-1,t) = F(r,s-1,t) F(r-1,s,t)
          + F(r,s,t-1) F(r-1,s-1,t+1)

F(r,s,t) F(r-1,s,t-1) = F(r,s,t-1) F(r-1,s,t)
          + F(r,s-1,t) F(r-1,s+1,t-1)

F(r,s,t) F(r,s-1,t-1) = F(r,s,t-1) F(r,s-1,t)
          + F(r-1,s,t) F(r+1,s-1,t-1)

So, here is my question: What are the other solutions F(r,s,t) to this trio of recurrences?

I’ve noticed a few easy things. You can rescale F(r,s,t) by w x^r y^s z^t for any (w,x,y,z). You can translate any solution through three-space to give another solution. Also, I found one more peculiar solution: F(r,s,t) = 2^{\binom{r+s+t}{2}}.
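
For what it’s worth, here is a small sketch that checks all three relations at once, applied to MacMahon’s M, to a rescaled copy of it, and to the peculiar 2-power solution; the helper names and the particular constants (w,x,y,z) = (5,2,3,7) are arbitrary choices of mine.

    from fractions import Fraction
    from itertools import product

    def M(r, s, t):
        # MacMahon's product formula (*)
        val = Fraction(1)
        for i, j, k in product(range(1, r + 1), range(1, s + 1), range(1, t + 1)):
            val *= Fraction(i + j + k - 1, i + j + k - 2)
        return val

    def satisfies_trio(F, hi=4):
        # check all three permuted recurrences for 1 <= r, s, t < hi
        for r, s, t in product(range(1, hi), repeat=3):
            if F(r,s,t)*F(r-1,s-1,t) != F(r,s-1,t)*F(r-1,s,t) + F(r,s,t-1)*F(r-1,s-1,t+1):
                return False
            if F(r,s,t)*F(r-1,s,t-1) != F(r,s,t-1)*F(r-1,s,t) + F(r,s-1,t)*F(r-1,s+1,t-1):
                return False
            if F(r,s,t)*F(r,s-1,t-1) != F(r,s,t-1)*F(r,s-1,t) + F(r-1,s,t)*F(r+1,s-1,t-1):
                return False
        return True

    peculiar = lambda r, s, t: Fraction(2) ** ((r + s + t) * (r + s + t - 1) // 2)
    rescaled = lambda r, s, t: Fraction(5) * 2**r * 3**s * 7**t * M(r, s, t)   # w x^r y^s z^t times M
    print(satisfies_trio(M), satisfies_trio(rescaled), satisfies_trio(peculiar))  # True True True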

I currently have no idea how much freedom I have in choosing a solution. Here is a way to make that question precise: Let X_N be the set of nonzero* solutions to these equations, restricted to the range 0 \leq r,s,t \leq N. How does the dimension of X_N grow with N?
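
One cheap way to probe this (not a proof of anything) is to linearize at the MacMahon point: the dimension of the tangent space there bounds the local dimension of X_N from above, with equality if that point happens to be smooth. The sketch below assumes the imposed relations are exactly those instances of the three recurrences whose six indices all stay inside the box, and it uses SymPy only for an exact rank; the helper names are mine.

    from fractions import Fraction
    from itertools import product
    import sympy as sp

    def M(r, s, t):
        # MacMahon's product formula (*)
        val = Fraction(1)
        for i, j, k in product(range(1, r + 1), range(1, s + 1), range(1, t + 1)):
            val *= Fraction(i + j + k - 1, i + j + k - 2)
        return int(val)

    # index patterns (A,B,C,D,E,G) for the three relations, each asserting F_A F_B = F_C F_D + F_E F_G
    PATTERNS = [
        lambda r, s, t: [(r,s,t), (r-1,s-1,t), (r,s-1,t), (r-1,s,t), (r,s,t-1), (r-1,s-1,t+1)],
        lambda r, s, t: [(r,s,t), (r-1,s,t-1), (r,s,t-1), (r-1,s,t), (r,s-1,t), (r-1,s+1,t-1)],
        lambda r, s, t: [(r,s,t), (r,s-1,t-1), (r,s,t-1), (r,s-1,t), (r-1,s,t), (r+1,s-1,t-1)],
    ]

    def tangent_dim(N):
        box = list(product(range(N + 1), repeat=3))
        pos = {p: n for n, p in enumerate(box)}
        val = [M(*p) for p in box]
        rows = []
        for (r, s, t), pat in product(box, PATTERNS):
            cells = pat(r, s, t)
            if not all(c in pos for c in cells):
                continue
            A, B, C, D, E, G = cells
            row = [0] * len(box)   # gradient of F_A F_B - F_C F_D - F_E F_G at the MacMahon point
            row[pos[A]] += val[pos[B]];  row[pos[B]] += val[pos[A]]
            row[pos[C]] -= val[pos[D]];  row[pos[D]] -= val[pos[C]]
            row[pos[E]] -= val[pos[G]];  row[pos[G]] -= val[pos[E]]
            rows.append(row)
        return len(box) - sp.Matrix(rows).rank()

    for N in (2, 3):
        print(N, tangent_dim(N))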

So, any ideas?

*I look at nonzero solutions for technical reasons: the variety of all solutions to the equations above has many trivial components, where we take enough of the variables to be zero in order to make all the relations trivially true. By restricting to nonzero solutions, I make sure we are studying the main component.

13 thoughts on “Rhombus tilings and an over-constrained recurrence”

  1. I’m too lazy to do any computations myself, so just thinking aloud: if we write G(x, y, z) = F((x + y - z)/2, (x - y - z)/2, z) for x + y + z even, what does the symmetry (r, s, t) -> (s, t, r) become in the (x, y, z) coordinates?

  2. Yes, I like my exposition too. :-) Actually, that paper addresses a pet peeve. For accidental historical reasons, the literature on the Gessel-Viennot method is fairly separate from the literature on the Kasteleyn method. But the methods are really part of the same theory of counting matchings or lattice paths with determinants and Pfaffians. A matching is a special case of non-intersecting lattice paths, and non-intersecting lattice paths can be converted to matchings. Each of the methods can be derived from the other one. I personally was better schooled in the Kasteleyn method, but really both methods should be done together.

    Anyway, to address the question of the post, you’re not entirely clear on the intended range of the indices r, s, and t. Some of these recurrences can be run backwards so that r, s, and t can be positive or negative integers. However, in the motivating solution, r, s, and t are all non-negative. I’m wondering at the moment if all solutions, or some types of solutions, in the first octant can be extended to all of \Z^3. No good ideas on that.

    However, suppose that r, s, and t are non-negative, and suppose further that they have been shifted downward from values with strictly positive lower bounds. So in effect, you can choose minimum positive values r_0, s_0, and t_0. If you do this, then I think that Eric Kuo’s relation holds if you have any weighted edges that you like inside of the r_0 \times s_0 \times t_0 hexagon. I think that you get lots of first-orthant solutions this way.

  3. “I think that Eric Kuo’s relation holds if you have any weighted edges that you like inside of the r_0 \times s_0 \times t_0 hexagon.”

    I don’t think that this works. It seems to me that you need some sort of translation invariance of your weighting. If different translates of the (r,s,t) hexagon have different weighted enumerations, then I don’t see how the whole theory fits together.

    Once you impose translation invariance, there are only three weights available, and in fact there is only one, because every tiling uses rs rhombi of one kind, rt of the second, and st of the third.

    I was being deliberately vague about the range, because I also wasn’t sure what to say. Certainly, the nicest situation would be to have solutions that extend to all of Z^3, but I wanted to leave this open to see what people come up with. (Although my question about dimensions is perfectly well stated.)

  4. “It seems to me that you need some sort of translation invariance of your weighting.”

    Am I perhaps misled by the diagrams? The hexagons all share the same left corner in the diagrams, so I don’t see the strict need for translation invariance.

  5. But, if you want the rotations to work as well, then the left corner moves to the upper right and the lower right. So, at the first stage of the recursion, you need F(r,s,t) to count tilings of hexagons in three different places. I think that, as you keep running the recursion, you will get translation invariance. But I haven’t checked the details!

  6. Let F be of the hyperfactorial form F(r,s,t) = H(r) H(s) H(t) H(r+s+t) / (H(r+s) H(s+t) H(r+t)), but now with H an arbitrary function. Then I get that the first equation turns into:

    H(t)^2 H(r+s+t-2) H(r+s+t) / (H(r+s) H(r+s-2))
        = H(t)^2 H(r+s+t-1)^2 / H(r+s-1)^2
          + H(t-1) H(t+1) H(r+s+t-1)^2 / (H(r+s) H(r+s-2))

    Setting L(n) = H(n-2) H(n)/H(n-1)^2, a=r+s and b=t, we have

    L(a+b) = L(a) + L(b+1).

    Aha! The solutions to this are precisely L(x) = k(x-1) for some k. (And now I know that I should have defined L by H(n-1) H(n+1)/H(n)^2. But I’ll leave the misstep in place, since I didn’t actually fall on my face.)

    I’m too tired right now to undo all the steps. I have a suspicion that I’ve only rediscovered the solutions I already knew.

  7. You can shift r, s, and t by non-integer constants, and then interpret the hyperfactorial formula using the Barnes G-function. (This formula, by the way, is a rephrasing of MacMahon’s formula due to Jim Propp.) This shows that there are non-trivial solutions on all of \Z^3, and that the variety of solutions has at least 3 non-trivial dimensions. It looks like the only zeroes of the G-function are at the negative integers.

    According to MathWorld, the G-function is entire, not just meromorphic, although that doesn’t change much for us since it has zeroes and appears in the denominator.

  8. We can be a little more general.

    M(r,s,t) = \frac{G(r+s+t+a) G(r+b) G(s+c) G(t+d)}{G(r+s+e) G(r+t+f) G(s+t+g)}

    is a solution to the first of the three recursions iff

    (r+s+t+a-1) = (r+s+e-1) + (t+d)

    or, in other words, if a=d+e. Similar computations give a=c+f and a=b+g.

    This gives 4 parameters which are, I think, independent of the other 4 we already know. Earlier, we saw the solution 2^{\binom{r+s+t}{2}}. I believe this is the limit, as N \to \infty, of the (a,b,c,d,e,f,g) = (2N,N,N,N,N,N,N) solutions.

    We may have found all the solutions, although I have no idea how to prove it. Huh. I had hoped I would get more insight by getting to this point.
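
    A rough numerical check of this family, for whatever it’s worth: below I read the G here as the hyperfactorial, so I model it as the Barnes G-function shifted by one (with the unshifted Barnes G the three constraints each pick up a shift of 1), and I plug in arbitrary real parameters satisfying a = d+e = c+f = b+g, using mpmath for the Barnes G.

        from mpmath import mp, barnesg

        mp.dps = 40
        H = lambda x: barnesg(x + 1)       # "hyperfactorial" reading of G

        b, c, d, e, f, g = 0.3, 0.7, 0.2, 0.5, 0.0, 0.4
        a = 0.7                            # = d + e = c + f = b + g

        def F(r, s, t):
            return (H(r + s + t + a) * H(r + b) * H(s + c) * H(t + d)
                    / (H(r + s + e) * H(r + t + f) * H(s + t + g)))

        r, s, t = 2.3, 1.6, 3.1
        checks = [
            (F(r,s,t)*F(r-1,s-1,t), F(r,s-1,t)*F(r-1,s,t) + F(r,s,t-1)*F(r-1,s-1,t+1)),
            (F(r,s,t)*F(r-1,s,t-1), F(r,s,t-1)*F(r-1,s,t) + F(r,s-1,t)*F(r-1,s+1,t-1)),
            (F(r,s,t)*F(r,s-1,t-1), F(r,s,t-1)*F(r,s-1,t) + F(r-1,s,t)*F(r+1,s-1,t-1)),
        ]
        print([abs(x - y) / abs(x) < mp.mpf("1e-25") for x, y in checks])   # expect [True, True, True]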

  9. “We may have found all the solutions, although I have no idea how to prove it.”

    You could perhaps get started by taking a bounded range for r, s, and t, such as a cube or a simplex, and then finding a Gröbner basis for the equations in some computer algebra package.

    Note also that any one of the three equation families is the famous octahedral recurrence which is better understood. Indeed, is it a cluster algebra?

    Jim Propp once pointed out that this octahedral recurrence closes a circle in this topic. The octahedral recurrence is equivalent to Dodgson condensation, and that is what led to the definition of alternating-sign matrices.

  10. “Indeed, is it a cluster algebra?”

    Indeed, it is. Or, to be more precise, there is a cluster algebra such that the octahedron recurrence computes a certain subset of its cluster variables. Take the quiver which is an infinite square grid, with all the squares oriented cyclically (half clockwise and half counterclockwise).

    Recently, quiver theorists have started using some of the methods from tiling theory. See http://arxiv.org/abs/0709.3079 for a nice combinatorial example, and http://front.math.ucdavis.edu/0809.0117 for an overview of the sort of problems they are dealing with. I think that there is a lot left to do in terms of making the languages talk to each other.

  11. Speyer’s formula for M(r,s,t) in (7) is very reminiscent of the DOZZ formula for the normalization of 3-point functions in Liouville theory.

    See for example hep-th/0104158, equation (9). An open question in Liouville theory is whether there are any other functions satisfying the same set of defining recurrence relations.
