How to write down the representations of GL_n

A few years ago, I gave a talk at NCSU on some work I had done on Littlewood-Richardson numbers, cluster algebras and such things. For the first half hour or so, I outlined the basic results I would be using about the representation theory of the group GL_n. Afterwards, I had a number of grad students thank me for this. So I’m going to try to turn that into a blog post (and enlarge it a little). The goal here is not to give you any proofs; rather, I want to get to the main results, show you how they connect and, above all, how to actually write down the representations of GL_n.

GL_n is, of course, the group of n \times n complex matrices with nonzero determinant. We want to classify the linear representations of GL_n, meaning we want to find group homomorphisms GL_n \to GL_N. Some examples: We have the trivial representation, where N=1 and every matrix in GL_n is mapped to the identity. We have the determinant representation, where N=1 again, and g is mapped to (\det g). We have the standard representation, where N=n and g is mapped to itself. We have the dual of the standard representation, which is given in coordinates by g \mapsto (g^T)^{-1}. We have the symmetric representations, where GL_n acts on the symmetric powers of the standard representation, and the anti-symmetric or exterior representations, where GL_n acts on the anti-symmetric or wedge powers of the standard representation. If you have studied almost any field of math, I think it is safe to say that you have frequently encountered these examples; hopefully, that will suggest to you that classifying all representations of GL_n is a worthwhile problem.
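
If you like to check such things by machine, here is a quick sanity check, in Python with sympy, that the dual of the standard representation really is a group homomorphism. (The helper name rho_dual is mine, purely for illustration.)

```python
# Check that g -> (g^T)^{-1} satisfies rho(g h) = rho(g) rho(h).
# This is exact: sympy inverts integer matrices over the rationals.
from sympy import Matrix

def rho_dual(g):
    """Dual of the standard representation: g -> (g^T)^{-1}."""
    return g.T.inv()

g = Matrix([[1, 2], [0, 1]])
h = Matrix([[3, 0], [1, 1]])

assert rho_dual(g * h) == rho_dual(g) * rho_dual(h)
```

The identity holds because ((gh)^T)^{-1} = (h^T g^T)^{-1} = (g^T)^{-1} (h^T)^{-1}.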

Now, there is a technical point we have to get out of the way. If all we ask for is a group homomorphism, there are far too many. For example, we can use complex conjugation to get maps like g \mapsto \overline{g}. More confusingly, we can use other field automorphisms of \mathbb{C} to get highly discontinuous maps. Another way things can get odd is that, as a group, \mathbb{C}^* has a lot of automorphisms (at least, if you believe in the axiom of choice), and we could compose \det with any of these. Moreover, we could take any of these weird examples and tensor them with a normal example to get more weird ones. So we will want to come up with a rule that excludes these examples and limits us to the more algebraic examples of the preceding paragraph.

I’m an algebraic geometer, so my preferred fix is to require that the map \rho : GL_n \to GL_N be an algebraic map. This means that every entry of \rho(g) is given by a polynomial in the entries of g and \det(g)^{-1}. (In most of the examples I gave of representations, the entries of \rho(g) are polynomials in the entries of g, but in the dual representation we need \det^{-1} as well.) One of the ways that I would defend my preferred choice is to point out that many seemingly different choices give the same result. You could also require that \rho be holomorphic, and you would get the exact same set of maps. You could (this is the physicists’ choice) study the unitary group, U(n), and require your maps to be continuous (or, alternately, smooth); then every representation you found would extend uniquely to an algebraic representation of GL_n. You could take the definition that I originally gave, using algebraic maps, and run it over any field of characteristic zero, and the description I will give in this post will still be correct. For that reason, I am trying to choose my notation to avoid mentioning the complex numbers whenever possible, although being perfectly consistent about this would be more of a pain than I think it is worth.
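
Concretely, Cramer’s rule writes (g^T)^{-1} = \mathrm{adj}(g)^T / \det(g), which exhibits each entry of the dual representation as a polynomial in the entries of g times \det(g)^{-1}. A short sympy verification on a generic 2 \times 2 matrix (a sketch, not part of the argument):

```python
# Verify (g^T)^{-1} = adjugate(g)^T / det(g) for a generic 2x2 matrix.
from sympy import Matrix, MatrixSymbol, simplify, zeros

g = Matrix(MatrixSymbol('g', 2, 2))   # 2x2 matrix of generic symbols
lhs = g.T.inv()
rhs = g.adjugate().T / g.det()

assert (lhs - rhs).applyfunc(simplify) == zeros(2, 2)
```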

The first thing you need to know about GL_n is that it is what is called a reductive group. That means that any finite dimensional representation of GL_n splits as a direct sum of irreducible representations. (An irreducible representation is a representation which contains no subrepresentations other than \{0\} and itself.) This splitting is unique in the appropriate sense, which takes a little effort to state correctly. For an example of a group that is not reductive, consider the group of complex numbers under addition; the representation a \mapsto \left( \begin{smallmatrix} 1 & a \\ 0 & 1 \end{smallmatrix} \right) cannot be split into irreducible representations. So, we will be done if we can understand the irreducible representations of GL_n.
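
Here is a quick sympy illustration of why that example cannot split: the image of a=1 has a single eigenvalue with a one-dimensional eigenspace, so the invariant line spanned by (1,0)^T has no invariant complement.

```python
# The representation a -> [[1, a], [0, 1]] of (C, +), evaluated at a = 1:
# one eigenvalue (1, with multiplicity 2) but only one eigenvector line.
from sympy import Matrix

rho_one = Matrix([[1, 1], [0, 1]])
print(rho_one.eigenvects())
# [(1, 2, [Matrix([[1], [0]])])]
```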

It isn’t just finite dimensional representations that are tamed by reductiveness. If X is any algebraic variety (of finite type over \mathbb{C}) and GL_n \times X \to X is an algebraic action of GL_n on X, then it is easy to show that the coordinate ring, \mathbb{C}[X], of X has an ascending filtration by finite dimensional GL_n representations. Reductiveness lets us split this filtration, so we get that \mathbb{C}[X] is an infinite direct sum of finite dimensional irreducible representations. In particular, we can consider the action of GL_n on itself. Better, we can consider the action of GL_n \times GL_n on itself, with one copy of GL_n acting from the left and the other on the right. Explicitly, the action is (g,h) : x \mapsto g^{-1} x h.

(Those inverses are not where you expect them because the correspondence between X and \mathbb{C}[X] is contravariant. I’d advise you not to worry too hard about this.)

The Peter-Weyl theorem: the coordinate ring \mathbb{C}[GL_n] of GL_n, as a GL_n \times GL_n representation, is \bigoplus_{\lambda} V_{\lambda} \boxtimes V_{\lambda}^*.

The sum runs over the isomorphism classes of irreducible representations of GL_n. I prefer to rewrite the summand as \mathrm{End}(V_{\lambda}). (Note that the Wikipedia link, at least today, states this result in the analytic setting rather than the algebraic one. This is just another example of how which category you work in doesn’t matter very much for reductive groups.) If you have seen some representation theory, then you should be familiar with the Peter-Weyl theorem in the setting of finite groups, where it states that the regular representation decomposes in this manner.

Let’s take an easy example. If n=1, then \mathbb{C}[GL_1] is \mathbb{C}[t, t^{-1}] = \bigoplus_{j=-\infty}^{\infty} \mathbb{C} t^{j}. The irreducible representations of GL_1 are indexed by integers j; the j^{\mathrm{th}} representation is g \mapsto g^j.

There are two good ways to use the Peter-Weyl theorem to describe the representations of GL_n.

The first, analogous to the use of characters in the representation theory of finite groups, is to look at those functions on GL_n which are invariant under the action of the diagonal. (I just realized that the word “diagonal” is ambiguous. I am talking about functions which are invariant under the subgroup \{ (g,g) \} of GL_n \times GL_n.)

On the one hand, these are the functions f such that f(x)=f(g x g^{-1}). In other words, functions which depend only on the conjugacy class of their input. Now, the diagonalizable matrices are dense in GL_n, so any function on GL_n is determined by its values on the diagonalizable matrices. Furthermore, if f is to be a conjugacy invariant function, then the value of f on diagonalizable matrices is determined by its value on diagonal matrices. So we can describe such an f by giving its value on \mathrm{diag}(t_1, \ldots, t_n). Finally, note that \left( \begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right) is conjugate to \left( \begin{smallmatrix} b & 0 \\ 0 & a \end{smallmatrix} \right); more generally, conjugation can reorder the entries of a diagonal matrix arbitrarily. So f must be a symmetric function of the t_i. Moreover, since we are working with algebraic maps and algebraic functions throughout, f must be a symmetric Laurent polynomial in the t_i. Conversely, any symmetric Laurent polynomial gives a conjugacy invariant polynomial function on GL_n.
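
For instance, the coefficients of the characteristic polynomial are conjugacy invariant functions on GL_n, and on diagonal matrices they are visibly symmetric in the t_i. A quick sympy check (the particular h below is an arbitrary choice of mine):

```python
# Characteristic polynomial coefficients are conjugation invariant,
# and on diag(t1, t2) they are symmetric polynomials in t1, t2.
from sympy import Matrix, symbols, expand

t1, t2, x = symbols('t1 t2 x')
g = Matrix([[t1, 0], [0, t2]])
h = Matrix([[1, 1], [0, 1]])          # an arbitrary invertible matrix

p_conj = (h * g * h.inv()).charpoly(x).as_expr()
p_orig = g.charpoly(x).as_expr()

assert expand(p_conj - p_orig) == 0
print(p_orig)   # x**2 - (t1 + t2)*x + t1*t2, up to how sympy orders terms
```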

On the other hand, from the presentation \mathbb{C}[GL_n] = \bigoplus_{\lambda} \mathrm{End}(V_{\lambda}), we see that each irreducible representation V_{\lambda} contributes a single basis element s_{\lambda} to the diagonal invariants in \mathbb{C}[GL_n]. (Namely, the identity map from V_{\lambda} to itself.) Explicitly, s_{\lambda}(g) is the trace of g acting on V_{\lambda}. For example, the standard representation gives us \sum t_i and the dual of the standard representation gives us \sum t_i^{-1}. The symmetric functions s_{\lambda} are called Schur functions; by the discussion of the previous paragraph, the Schur functions are a \mathbb{C}-basis for the symmetric Laurent polynomials.
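
If you want to compute a few of these characters concretely: for g = \mathrm{diag}(t_1, \ldots, t_n), the trace on the k-th symmetric power is the complete homogeneous symmetric polynomial h_k, and on the k-th exterior power it is the elementary symmetric polynomial e_k. A sympy sketch for n=3:

```python
# Characters of Sym^k and Wedge^k of the standard representation of GL_3,
# evaluated on diag(t1, t2, t3): one monomial per basis vector.
from itertools import combinations, combinations_with_replacement
from sympy import symbols, Add, Mul

t = symbols('t1:4')   # (t1, t2, t3)

def sym_character(k):
    """Trace on Sym^k: basis e_{i1}...e_{ik} with i1 <= ... <= ik."""
    return Add(*[Mul(*c) for c in combinations_with_replacement(t, k)])

def ext_character(k):
    """Trace on Wedge^k: basis e_{i1} ^ ... ^ e_{ik} with i1 < ... < ik."""
    return Add(*[Mul(*c) for c in combinations(t, k)])

print(sym_character(2))   # t1**2 + t1*t2 + t1*t3 + t2**2 + t2*t3 + t3**2
print(ext_character(2))   # t1*t2 + t1*t3 + t2*t3
```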

Now, there is an obvious basis for the symmetric Laurent polynomials, namely, the monomial symmetric functions. There is one of these for each weakly decreasing n-tuple of integers, \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n. We just take the sum \sum t^{\lambda_1}_{\sigma(1)} \cdots  t^{\lambda_n}_{\sigma(n)}, where \sigma ranges over the permutations of \{ 1, 2, \ldots, n \} and each distinct monomial is taken only once (this caveat matters when some of the \lambda_i coincide).
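
In code, the monomial symmetric Laurent polynomials look like this (a minimal sketch; the helper name monomial_symmetric is mine, and negative exponents are fine):

```python
# Monomial symmetric Laurent polynomial for a weakly decreasing
# integer tuple lam: sum the distinct monomials obtained by
# distributing the exponents over all permutations of the variables.
from itertools import permutations
from sympy import symbols, Add, Mul

def monomial_symmetric(lam, t):
    monos = {Mul(*[v**e for v, e in zip(p, lam)]) for p in permutations(t)}
    return Add(*monos)          # the set removes duplicate monomials

t = symbols('t1:4')
print(monomial_symmetric((2, 1, 0), t))
print(monomial_symmetric((1, 0, -1), t))   # a genuinely Laurent example
```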

So, on some intuitive level, we expect that there is one irreducible representation for each such n-tuple \lambda. The idea that makes this precise is that of high weight vectors. If V is a representation of GL_n, then v \in V is called a high weight vector if v is fixed by every upper triangular matrix with 1's on the diagonal.

Theorem 1: In every irreducible representation, up to scaling, there is a unique high weight vector.

For example, in the standard representation, (1,0,\ldots,0)^T is the high weight vector. It is also easy to check that the diagonal matrices send high weight vectors to scalar multiples of themselves. So, if V is an irreducible representation and v its high weight vector, then \mathrm{diag}(t_1, t_2, \ldots, t_n)v=t_1^{\lambda_1} \cdots t_n^{\lambda_n} v for some \lambda.
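
Both facts are easy to confirm by machine for the standard representation; a sympy sketch:

```python
# e1 is fixed by a generic upper triangular matrix with 1's on the
# diagonal, and diag(t1, t2, t3) scales it by t1: its weight is (1,0,0).
from sympy import Matrix, symbols, diag

x, y, z, t1, t2, t3 = symbols('x y z t1 t2 t3')
u = Matrix([[1, x, y], [0, 1, z], [0, 0, 1]])
e1 = Matrix([1, 0, 0])

assert u * e1 == e1
assert diag(t1, t2, t3) * e1 == t1 * e1
```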

Theorem 2: In the above setting, we always have \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n. Conversely, for every weakly decreasing n-tuple of integers, there is a unique irreducible representation whose high weight vector transforms in this manner.

Now you know a set which is in bijection with the irreducible representations. But I promised I’d tell you how to write them down. The way to do this is the second good way to use the Peter-Weyl theorem. We introduce the notation N for the group of upper triangular matrices with 1's on the diagonal. So Theorem 1 tells us that, if we take N \times \{ 1 \} invariants in \mathbb{C}[GL_n], we get \bigoplus \mathbb{C} \boxtimes V_{\lambda} = \bigoplus V_{\lambda}.

Now, N \times \{ 1 \} invariant elements of \mathbb{C}[GL_n] correspond to functions f such that f(ng)=f(g) whenever n \in N. Some obvious functions with this property are the coordinate functions on the bottom row of our matrix: namely, g_{n1}, \ldots, g_{nn} on GL_n. More subtly, any bottom-justified minor is invariant under left multiplication by N. For example, if n=3 and we write coordinates on GL_3 as

\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix},

then the determinant \left\| \begin{smallmatrix} d & f \\ g & i \end{smallmatrix} \right\| is left N-invariant.
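
If you would rather see this by direct computation than by expanding determinants by hand, here is a sympy sketch checking every bottom-justified minor of a generic 3 \times 3 matrix (bottom_minor is an ad hoc helper of mine):

```python
# Bottom-justified minors are unchanged by left multiplication by N:
# for u unitriangular, each row of u*g is that row of g plus multiples
# of lower rows, which does not change these determinants.
from sympy import Matrix, MatrixSymbol, symbols, expand

x, y, z = symbols('x y z')
u = Matrix([[1, x, y], [0, 1, z], [0, 0, 1]])   # generic element of N
g = Matrix(MatrixSymbol('g', 3, 3))

def bottom_minor(m, cols):
    """Minor on the bottom len(cols) rows and the given columns."""
    k = len(cols)
    return m.extract(list(range(3 - k, 3)), list(cols)).det()

for cols in ([0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]):
    assert expand(bottom_minor(u * g, cols) - bottom_minor(g, cols)) == 0
```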

In fact, these determinants, along with \det(g)^{-1}, generate the ring of left N-invariants in \mathbb{C}[GL_n]. There are many proofs of this; my favorite is Theorem 14.11 of Miller-Sturmfels.

So, the ring generated by these determinants is \bigoplus V_{\lambda}. How do you write down an individual V_{\lambda}? You can extract this from everything I’ve said, but I’ll just tell you the answer. V_{\lambda} is the vector space spanned by products which use \lambda_1 - \lambda_2 determinants of size 1, \lambda_2 - \lambda_3 determinants of size 2 and so forth, up to \lambda_n determinants of size n. (Note that \lambda_n may be negative; a negative number of n \times n determinants just means that many factors of \det^{-1}.)

(The statement that this recipe works is essentially the Borel-Weil theorem in the algebraic category. More specifically, let B be the group of upper triangular matrices. We showed that the space of functions on N \backslash GL_n which transform in a certain way under the diagonal matrices is the vector space V_{\lambda}. Borel-Weil says that the sections of a certain line bundle on B \backslash GL_n form the same vector space. The equivalence between these two viewpoints, compared to what came before, is not bad.)

Let’s wrap up with an example: we’ll take n=3 and (\lambda_1, \lambda_2, \lambda_3)=(2,1,0). We must look at products which use one determinant of size one and one determinant of size two. Using the coordinates on GL_3 above, we need to look at the vector space spanned by

g  \left\| \begin{smallmatrix} d & e \\ g & h \end{smallmatrix} \right\|, g  \left\| \begin{smallmatrix} d & f \\ g & i \end{smallmatrix} \right\|, g  \left\| \begin{smallmatrix} e & f \\ h & i \end{smallmatrix} \right\|, h  \left\| \begin{smallmatrix} d & e \\ g & h \end{smallmatrix} \right\|, h  \left\| \begin{smallmatrix} d & f \\ g & i \end{smallmatrix} \right\|, h  \left\| \begin{smallmatrix} e & f \\ h & i \end{smallmatrix} \right\|, i  \left\| \begin{smallmatrix} d & e \\ g & h \end{smallmatrix} \right\|, i  \left\| \begin{smallmatrix} d & f \\ g & i \end{smallmatrix} \right\| and i \left\| \begin{smallmatrix} e & f \\ h & i \end{smallmatrix} \right\|.

There are 9 products here, but they span a vector space of dimension 8 because

g  \left\| \begin{smallmatrix} e & f \\ h & i \end{smallmatrix}\right\| -  h  \left\| \begin{smallmatrix} d & f \\ g & i \end{smallmatrix} \right\| +  i  \left\| \begin{smallmatrix} d & e \\ g & h \end{smallmatrix} \right\| =0.
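
If you would like the computer to confirm this dimension count, one way (a sympy sketch, not part of the argument) is to expand the nine products as polynomials in the matrix entries and compute the rank of their coefficient matrix:

```python
# Expand the nine products x * (2x2 bottom-justified minor), collect
# coefficients of monomials in the entries a..i, and compute the rank.
from sympy import symbols, Matrix, Poly, expand

a, b, c, d, e, f, g, h, i = symbols('a b c d e f g h i')
minors = [d*h - e*g, d*i - f*g, e*i - f*h]   # |d e; g h|, |d f; g i|, |e f; h i|
products = [x * m for x in (g, h, i) for m in minors]

entries = (a, b, c, d, e, f, g, h, i)
dicts = [Poly(expand(p), *entries).as_dict() for p in products]
monomials = sorted({mono for dic in dicts for mono in dic})
coeffs = Matrix([[dic.get(mono, 0) for mono in monomials] for dic in dicts])
print(coeffs.rank())   # 8
```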

Choose 8 of these products to get a basis, and it is no trouble at all to write down V_{2,1,0}. In particular, it is easy to compute the Schur function s_{2,1,0}; it is

t_1^2 t_2 + t_1^2 t_3 + t_2^2 t_1 + t_2^2 t_3 + t_3^2 t_1 + t_3^2 t_2 + 2 t_1 t_2 t_3.
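
As a cross-check, you can also get s_{2,1,0} from the bialternant form of the Weyl character formula, s_{\lambda}(t_1, \ldots, t_n) = \det(t_i^{\lambda_j + n - j}) / \det(t_i^{n - j}); a sympy sketch:

```python
# Bialternant formula: ratio of two alternants (denominator = Vandermonde).
from sympy import symbols, Matrix, cancel, expand

t = symbols('t1:4')
lam = (2, 1, 0)
n = len(t)

num = Matrix(n, n, lambda r, c: t[r]**(lam[c] + n - 1 - c))
den = Matrix(n, n, lambda r, c: t[r]**(n - 1 - c))

print(expand(cancel(num.det() / den.det())))
# t1**2*t2 + t1**2*t3 + t1*t2**2 + 2*t1*t2*t3 + t1*t3**2 + t2**2*t3 + t2*t3**2
```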

As a final remark, there is no completely natural way to choose 8 of the 9 products above, and this problem only becomes worse as \lambda grows. There are some nice ways though, and one of the simplest is described in Corollary 14.9 of Miller-Sturmfels. An alternate way is the subject of my recent note with Kyle Petersen and Pavlo Pylyavskyy.

23 thoughts on “How to write down the representations of GL_n”

  1. Hey, I’m guessing you didn’t mean what you wrote in the first line of the second paragraph …

  2. On other typos, it’s probably a good idea to get your advisor’s name right.

    Incidentally in the U(n) setting it’s enough to ask that the representation be measurable; then it will automatically be continuous, smooth, real algebraic…

  3. Thanks for this post, David. It’s quite telling that the grad students at NCSU thanked you for talking about the reps of GL(n). Everyone should know about them (at least the finite-dimensional algebraic ones), and the results aren’t all that hard to state, but somehow they’re not taught in the required grad courses, most places. I think blogs are a great place to explain chunks of important mathematics like this, emphasizing intuition but not giving full proofs. I guess the Japanese Encyclopedia of Mathematics and Wikipedia are two of the few other places one can find such explanations.

  4. Thanks for this post.
    Is it true that if we take \lambda_1 = p, and the others 0, in your construction, we get the p-th symmetric power of V,
    and for \lambda_1 = \ldots = \lambda_p = 1 and the others 0 we get the p-th exterior power,
    so that both of these representations are irreducible?

  5. Sorry for a naive question (I’m quite new to this level of algebra) but could you elaborate on “the coordinate ring, \mathbb{C}[X], of X has an ascending filtration by finite dimensional GL_n representations”? My guess is that the filtration means the existence of rings \emptyset \subset R_1 \subset R_2 \subset \ldots \subset \mathbb{C}[X] with \bigoplus_i R_i/R_{i-1} \cong \mathbb{C}[X], but I don’t see what those rings are. Is it something obvious?

  6. They’re not subrings, just sub-vector spaces with R_i R_j \subseteq R_{i+j}. And \bigoplus R_i/R_{i-1} may not be isomorphic to \mathbb{C}[X].

    For a baby example of the second point, consider the ring \mathbb{C}[x,y]/(xy-1). This has an ascending filtration where R_i is spanned by 1, x, y, x^2, y^2, \ldots, x^i, y^i. If we write X and Y for the images of x and y in R_1/R_0, then XY=0 (because 1 is in R_0), so \bigoplus R_i/R_{i-1} is isomorphic to \mathbb{C}[X,Y]/(XY), which is not isomorphic to \mathbb{C}[x,y]/(xy-1).

    As for where this filtration comes from: It is always possible to embed X into a finite dimensional GL_n representation V in a GL_n equivariant way. Let x_1, x_2, \ldots, x_N be the coordinates on V; the filtration is obtained by taking R_d to be the polynomials in the x_i's of degree less than or equal to d. (Of course, there are probably relations between these polynomials.) I’m tired, so I’ll leave it to you to prove that the embedding X \to V exists.

  7. Thank you for the post. I’d love to understand how 14.11 in Miller-Sturmfels implies that all of the N \times \{1\}-invariant polynomials are in the algebra generated by the bottom-justified minors. The theorem in the edition I have says that the collection of these determinants forms a SAGBI basis for the algebra they generate. Why does that imply that all the N-invariants are in that Plucker algebra?

    Just in case, I posted a question about this on math.stackexchange as well (http://math.stackexchange.com/questions/75617)…

    Thanks again!

  8. Or, David, did you perhaps mean Corollary 14.9 from Miller-Sturmfels (semistandard monomials form a vector-space basis for the Plucker algebra)? Then if you already know the dimension of each irreducible representation of GL_n, and if you convince yourself that the irreducible representations as found in the coordinate algebra will actually appear in homogeneous components, then you can argue that the Plucker algebra equals the N-invariants algebra by dimension…

  9. Still me. Scratch my previous comment. You couldn’t have meant Corollary 14.9 because you referenced it at the end of your post in a way that’s meaningful to me. So this isn’t a question of typos or different editions of the book.

  10. Is it supposed to be clear at the end how the representations you’ve constructed are related to the known symmetric power and exterior power representations? What does your example rep look like in terms of tensor reps? Do we learn about representations that are not symmetric/antisymmetric powers of the standard and dual rep?

  11. You say we should exclude pathological cases by considering algebraic representations only, but that these will already include all the continuous representations.

    But doesn’t that exclude apparently useful representations like the pseudotensor rep |det g|? In fact, why isn’t this a counterexample to the assertion that the algebraic reps include the continuous ones? It appears to be continuous but not algebraic.

    For that matter, what about the example you give, the complex conjugation of the standard representation? Surely complex conjugation is continuous. I mean, it’s continuous in the standard topology. I guess complex conjugation is not continuous in the Zariski topology, but of course the statement “the algebraic maps include the continuous maps” is tautological in the Zariski topology.

  12. As I wrote:

    You could … study the unitary group, U(n), and require your maps to be continuous (or, alternately, smooth); then every representation you found would extend uniquely to an algebraic representation of GL_n.

    Every continuous representation of U(n) extends uniquely to an algebraic representation of GL_n. For example, |\det| on U(n) extends to the trivial representation of GL_n.

  13. Hi David-

    Thank you for the reply and the clarification. I do see now that there is no contradiction. Taking the other example, I think complex conjugation on U(n) extends to the dual representation of GL_n.

    But these continuous but non-algebraic representations of GL_n, can they be classified?
