Consider two finite dimensional vector spaces $V$ and $W$ and a linear map $f: V \to W$ between them. Then we can decompose $V$ as $K \oplus C$ where $K$ is the kernel of $f$ and $C$ is any subspace transverse to $K$. Similarly, we can write $W$ as $I \oplus D$ where $I$ is the image of $f$. So we can write $f$ as the direct sum of $K \to 0$, the identity map from $C$ to $I$, and $0 \to D$. At the cost of making some very arbitrary choices, we may simplify even more and say that we can express $f$ as the direct sum of copies of three types of maps: $k \to 0$, the identity map $k \to k$ and $0 \to k$ (where $k$ is our ground field).
Now, suppose that we have two maps, $f$ and $g$, from $V$ to $W$. We'll start with the case that $V$ and $W$ have the same dimension. If $f$ is bijective, then we can choose bases for $V$ and $W$ so that $f$ is the identity. Once we have done that, we still have some freedom to change bases further. Assuming that $k$ is algebraically closed, we can use this freedom to put $g$ into Jordan normal form. In other words, we can choose bases such that $(f, g)$ is a direct sum of pairs like
$$\left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \right).$$
(Here several different values of $\lambda$ may occur in the various summands, and of course, the matrices can be sizes other than $2 \times 2$.) If we don't assume that $f$ is bijective (and if we want to allow $V$ and $W$ to have different dimensions) we get a few more cases. But the basic picture is not much worse: in addition to the summands above, we also need to consider the pairs
$$\left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \right)$$
(for various sizes $(n+1) \times n$, not just $3 \times 2$) and the transposes of these. These three possibilities, and their direct sums, describe all pairs $(f, g)$ up to isomorphism.
Now, consider the case of three maps $f$, $g$ and $h$ from $V$ to $W$. As the dimensions of $V$ and $W$ grow, so does the number of parameters necessary to describe the possible cases. Moreover, almost all cases can not be decomposed as direct sums. More precisely, as long as $\dim W / \dim V$ is between $(3 - \sqrt{5})/2$ and $(3 + \sqrt{5})/2$, the maps which can be expressed as direct sums of simpler maps have measure zero in the space $\mathrm{Hom}(V, W)^3$. (Where did that number come from? Stay tuned!) In the opinion of experts, there will probably never be any good classification of all triples of maps $(f, g, h)$.
The subject of quivers was invented to systematize this sort of analysis. It's become a very large subject, so I can't hope to summarize it in one blog post. But I think it is fair to say that anyone who wants to think about quivers needs to start by learning the connection to root systems. So that's what I'll discuss here.
A quiver is simply another name for a directed graph. This means that we have a finite set of dots, called $Q_0$, and a finite bunch of arrows, called $Q_1$, with each arrow pointing from one element of $Q_0$ to another. (Some sources will let an arrow point from a dot to itself, but I'm going to forbid that.) The quiver as a whole is denoted $Q$. A representation $V$ of $Q$ consists of (1) for each dot $x \in Q_0$, a finite dimensional vector space $V_x$ and (2) for each arrow $e: x \to y$, a map $f_e: V_x \to V_y$. Note that there are no relations imposed between these maps. The dimension vector of the representation is the vector in $\mathbb{Z}_{\geq 0}^{Q_0}$ whose $x$-th component is $\dim V_x$. It is common in quiver theory to fix a dimension vector $d$ and try to study all representations of that dimension. If we have two representations $V$ and $W$ of the same quiver, their direct sum, $V \oplus W$, is defined in the obvious way: $(V \oplus W)_x = V_x \oplus W_x$, and similarly for the maps. A representation is called indecomposable if it can not be written as a direct sum of smaller representations. A vector $d$ is called a positive root if it is the dimension vector of an indecomposable representation.
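If you like to follow along on a computer, here is a minimal sketch of these definitions in Python (using numpy; the class and function names are my own inventions for this post, not any standard library). A representation is a dimension vector plus one matrix per arrow, and direct sum just builds block-diagonal matrices:

```python
import numpy as np

class QuiverRep:
    """A representation of a quiver: a vector space dimension for each
    vertex and a (dim V_y) x (dim V_x) matrix for each arrow x -> y."""
    def __init__(self, vertices, arrows, dims, maps):
        self.vertices = list(vertices)             # Q_0
        self.arrows = list(arrows)                 # Q_1, as (x, y) pairs
        self.dims = dict(dims)                     # x -> dim V_x
        self.maps = [np.asarray(m) for m in maps]  # maps[i] goes with arrows[i]
        for (x, y), m in zip(self.arrows, self.maps):
            assert m.shape == (self.dims[y], self.dims[x])

    def dimension_vector(self):
        return tuple(self.dims[x] for x in self.vertices)

def direct_sum(V, W):
    """Spaces add vertex by vertex; each map becomes block-diagonal."""
    dims = {x: V.dims[x] + W.dims[x] for x in V.vertices}
    maps = [np.block([[a, np.zeros((a.shape[0], b.shape[1]))],
                      [np.zeros((b.shape[0], a.shape[1])), b]])
            for a, b in zip(V.maps, W.maps)]
    return QuiverRep(V.vertices, V.arrows, dims, maps)

# Example: the one-arrow quiver x -> y with the identity representation k -> k.
V = QuiverRep(["x", "y"], [("x", "y")], {"x": 1, "y": 1}, [np.eye(1)])
print(direct_sum(V, V).dimension_vector())   # (2, 2)
```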
Our three examples above corresponded to the cases where $Q_0$ had two elements and there were one, two or three arrows (respectively) connecting them, all in the same direction. The positive roots were $(1,0)$, $(0,1)$ and $(1,1)$ in the first case and $(n, n)$ (for positive $n$), together with $(n+1, n)$ and $(n, n+1)$ (for all $n \geq 0$), in the second.
We can now state a very surprising result, due to Kac:
Theorem 1: The set of positive roots is unchanged by reversing the edges of $Q$.
Kac went much further than that, and gave an explicit description of the set of roots. In the rest of this post, I will explain that result. This post draws heavily on Kac’s paper Infinite root systems, representations of graphs and invariant theory.
The quadratic form
Let's fix a dimension vector $d$ and try to figure out how many nonisomorphic quiver representations of dimension $d$ we expect. Let $\mathrm{Rep}(d)$ be $\bigoplus_{e: x \to y} \mathrm{Mat}_{d_y \times d_x}$, where $\mathrm{Mat}_{m \times n}$ is the space of $m \times n$ matrices. So every point in $\mathrm{Rep}(d)$ gives a representation of $Q$ of dimension $d$. (Just view a matrix in $\mathrm{Mat}_{d_y \times d_x}$ as a map from $k^{d_x}$ to $k^{d_y}$.) Two points in $\mathrm{Rep}(d)$ give isomorphic representations if we can obtain one from the other by changing bases.
It will be worthwhile to write out what "changing bases" means in a very formal way. Let $GL(d) = \prod_{x \in Q_0} GL_{d_x}$. Then $GL(d)$ acts on $\mathrm{Rep}(d)$: if $(g_x)_{x \in Q_0}$ is an element of $GL(d)$ and $(f_e)_{e \in Q_1}$ is an element of $\mathrm{Rep}(d)$, then $(g \cdot f)_e = g_y f_e g_x^{-1}$ for $e: x \to y$. Isomorphism classes of $d$-dimensional representations of $Q$ correspond to orbits of $GL(d)$ on $\mathrm{Rep}(d)$.
Let's do a dimension count. $\mathrm{Rep}(d)$ is a vector space of dimension $\sum_{e: x \to y} d_x d_y$. The group $GL(d)$ has dimension $\sum_{x \in Q_0} d_x^2$. We set $q(d) = \sum_{x \in Q_0} d_x^2 - \sum_{e: x \to y} d_x d_y$. Notice that $q$ is unchanged by reversing edges of $Q$, consistent with Theorem 1. So, if $q(d)$ is very negative, we expect to have a lot of nonisomorphic representations of dimension $d$, because $\mathrm{Rep}(d)$ will be much larger than $GL(d)$. Since not that many representations can be expressed as direct sums, when $q(d)$ is negative, we expect $d$ to be a positive root.
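If you want to play with this dimension count, here is a small Python sketch (my own throwaway functions, transcribing the formulas above):

```python
def dim_rep(arrows, d):
    """Dimension of Rep(d): sum over arrows x -> y of d_x * d_y."""
    return sum(d[x] * d[y] for (x, y) in arrows)

def dim_gl(d):
    """Dimension of GL(d): sum over vertices of d_x^2."""
    return sum(n * n for n in d.values())

def q(arrows, d):
    """The quadratic form q(d) = dim GL(d) - dim Rep(d)."""
    return dim_gl(d) - dim_rep(arrows, d)

# The three two-vertex quivers from the introduction: 1, 2 or 3 arrows x -> y.
for k in (1, 2, 3):
    arrows = [("x", "y")] * k
    print(k, "arrow(s): q(2, 3) =", q(arrows, {"x": 2, "y": 3}))
# prints q = 7, 1, -5: already negative for three arrows.
```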
In the other direction, if $q(d)$ is very positive, we expect representations of dimension $d$ to have large automorphism groups. Now, every point in $\mathrm{Rep}(d)$ has at least a one dimensional stabilizer: just multiply each vector space by the same scalar. Let $V$ be a representation of $Q$ whose stabilizer is larger than this. Then we should "expect" $V$ to be decomposable. I'll give a more precise statement when our ground field is $\mathbb{C}$; the generalization to fields other than $\mathbb{C}$ requires the vocabulary of algebraic groups, so I won't get into it. Let $G_V$ be the stabilizer of $V$ in $GL(d)$. Then $V$ is decomposable if and only if $G_V$ contains a copy of $\mathbb{C}^*$ other than the trivial one mentioned above. The proof is simple: if $V = V' \oplus V''$, we can get a nontrivial stabilizer by rescaling $V'$ and leaving $V''$ alone. Conversely, suppose we have a nontrivial $\mathbb{C}^*$ in the stabilizer, which we'll write as $t \mapsto \rho(t)$. Then we have $V = \bigoplus_a V^{(a)}$ where $V^{(a)}_x = \{ v \in V_x : \rho(t) v = t^a v \}$. So, if $q(d)$ is positive, we should expect representations of dimension $d$ to decompose as direct sums. In other words, when $q(d)$ is positive, we should expect $d$ not to be a positive root.
Kac's result, special case
The quadratic form $q$ is called positive semi-definite if $q(d) \geq 0$ for all $d$. It is called positive definite if it is positive semi-definite and $q(d) = 0$ only when $d = 0$. Finally, $q$ is called hyperbolic or better if, for every $x \in Q_0$, the restriction of $q$ to the hyperplane $d_x = 0$ is positive semidefinite. "Hyperbolic or better" is my own terminology. The definition of hyperbolic, which is a more standard term, is "hyperbolic or better, but not positive semi-definite".
The cases where $q$ is positive definite correspond to the simply-laced Dynkin diagrams: that is, $A_n$, $D_n$ and $E_6$, $E_7$, $E_8$. The correspondence is simply to draw the graph and ignore the directions of the arrows. The positive semi-definite and hyperbolic cases have also been completely classified. (I'm too lazy to draw all those Dynkin diagrams right now. The positive definite types are the diagrams $A_n$, $D_n$, $E_6$, $E_7$ and $E_8$ here; the positive semi-definite quivers, besides the positive definite ones, are the extended types $\widetilde{A}_n$, $\widetilde{D}_n$, $\widetilde{E}_6$, $\widetilde{E}_7$ and $\widetilde{E}_8$ here.)
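These conditions can be tested mechanically. We can write $q(d) = \frac{1}{2} d^T G d$, where $G$ is the symmetric matrix with $2$'s on the diagonal and, off the diagonal, minus the number of edges between each pair of vertices; definiteness is then a question about eigenvalues. Here is a quick numerical sketch (numpy, with ad hoc tolerances); "hyperbolic or better" could be checked the same way, by applying the test to the matrix with one row and column deleted:

```python
import numpy as np

def gram_matrix(n_vertices, arrows):
    """Symmetric matrix G with q(d) = (1/2) d^T G d."""
    G = 2.0 * np.eye(n_vertices)
    for x, y in arrows:
        G[x, y] -= 1
        G[y, x] -= 1
    return G

def classify(n_vertices, arrows, tol=1e-9):
    evals = np.linalg.eigvalsh(gram_matrix(n_vertices, arrows))
    if evals.min() > tol:
        return "positive definite"
    if evals.min() > -tol:
        return "positive semi-definite"
    return "indefinite"

# Two vertices, k arrows: definite (k=1), semi-definite (k=2), indefinite (k=3).
for k in (1, 2, 3):
    print(k, classify(2, [(0, 1)] * k))
```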
Theorem (Kac): Suppose that $q$ is hyperbolic or better, and let $d$ be a nonzero dimension vector. Then:
If $q(d) > 1$, then every representation of $Q$ of dimension $d$ is decomposable.
If $q(d) = 1$, then there is (up to isomorphism) a unique indecomposable representation of dimension $d$.
If $q(d) \leq 0$ and $k$ is infinite, then there are infinitely many nonisomorphic indecomposable representations of dimension $d$.
In short, with the hypotheses on $q$ in place, $d$ is a positive root if and only if $q(d) \leq 1$.
Let's look at our starting examples: if $Q$ is the quiver corresponding to a single map from one vector space to another, then $q(a, b) = a^2 + b^2 - ab$. This is positive definite. The only times that it is as small as $1$ are $(1,0)$, $(0,1)$ and $(1,1)$. Indeed, these correspond to the three summands that turned up in that case. If $Q$ corresponds to having two maps, then $q(a, b) = a^2 + b^2 - 2ab = (a-b)^2$. This is evidently positive semi-definite. We have $q(a, b) = 1$ when $(a, b)$ is of the form $(n+1, n)$ or $(n, n+1)$, and we found a unique indecomposable representation in that case. We have $q(a, b) = 0$ exactly when $(a, b) = (n, n)$ for some $n$; in that case our indecomposable representation depended on a parameter $\lambda$. Finally, when there were three maps, $q(a, b) = a^2 + b^2 - 3ab$. We can now understand why it was important to have $\dim W / \dim V$ between $(3 - \sqrt{5})/2$ and $(3 + \sqrt{5})/2$: that's exactly when $q(\dim V, \dim W) \leq 0$. (Exercise: For which integers $(a, b)$ do we have $a^2 + b^2 - 3ab = 1$?)
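Here is a tiny brute-force companion to these computations (a scan of small dimension vectors, not a proof; it also gives a head start on the exercise):

```python
def scan(k, bound=5):
    """All small (a, b) != (0, 0) with a^2 + b^2 - k*a*b <= 1,
    i.e. the positive roots of the k-arrow quiver within the box."""
    return [(a, b) for a in range(bound + 1) for b in range(bound + 1)
            if (a, b) != (0, 0) and a * a + b * b - k * a * b <= 1]

print(scan(1))  # (0,1), (1,0), (1,1): exactly the three roots above
print(scan(2))  # all (a, b) with |a - b| <= 1: the affine pattern
print(scan(3))  # everything between the lines of slope (3 +/- sqrt(5))/2
```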
Our last example will demonstrate that the heuristics in the previous section were too simple. Consider the quiver with three vertices, $x$, $y$ and $z$, and three arrows $x \to y$, $y \to z$ and $x \to z$. (This is NOT cyclically symmetric.) So $q(a, b, c) = a^2 + b^2 + c^2 - ab - bc - ac$, which is $\frac{1}{2}\left( (a-b)^2 + (b-c)^2 + (a-c)^2 \right)$ and hence positive semi-definite. We'll consider the dimension vector $(1, 2, 1)$, for which $q = 1$. Kac's theorem tells us that there should be a single (up to isomorphism) indecomposable representation of this dimension and, indeed, there is: take $V_x = V_z = k$ and $V_y = k^2$, with maps $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 0 & 1 \end{pmatrix}$ and $1$. In this example, $\mathrm{Rep}(d)$ has dimension $5$ and $GL(d)$ has dimension $6$. The heuristics of the previous section suggest that the indecomposable representation, above, should lie in a dense orbit, and have only the trivial stabilizer. In fact, the orbit in question is four-dimensional, and the stabilizer is $\mathbb{C}^* \times \mathbb{C}$. (Exercise!) But this group contains no nontrivial copy of $\mathbb{C}^*$, so the representation is indecomposable as desired.
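If you don't feel like doing the stabilizer exercise by hand, here is a numerical sketch (numpy; the matrices are my explicit choices from above and the dimensions are hard-coded to this example). It computes the Lie algebra of the stabilizer, namely the triples $(A_x, A_y, A_z)$ of square matrices with $A_y f = f A_x$, $A_z g = g A_y$ and $A_z h = h A_x$, as the null space of a linear system:

```python
import numpy as np

f = np.array([[1.0], [0.0]])      # V_x -> V_y
g = np.array([[0.0, 1.0]])        # V_y -> V_z
h = np.array([[1.0]])             # V_x -> V_z

def condition_matrix():
    """Matrix of the linear conditions on the 6 unknowns
    (A_x is 1x1, A_y is 2x2, A_z is 1x1, flattened in that order)."""
    cols = []
    for i in range(6):
        u = np.zeros(6); u[i] = 1.0
        A_x = u[0:1].reshape(1, 1)
        A_y = u[1:5].reshape(2, 2)
        A_z = u[5:6].reshape(1, 1)
        cols.append(np.concatenate([(A_y @ f - f @ A_x).ravel(),
                                    (A_z @ g - g @ A_y).ravel(),
                                    (A_z @ h - h @ A_x).ravel()]))
    return np.array(cols).T

rank = np.linalg.matrix_rank(condition_matrix())
print("stabilizer dimension:", 6 - rank)    # 2, matching C* x C
print("orbit dimension:", rank)             # 6 - 2 = 4, inside 5-dim Rep(d)
```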
The reflection functors, discovered by Bernstein, Gelfand and Ponomarev, are an important way to make new quiver representations from old. Let $x$ be a source of $Q$; meaning that $x$ is a vertex of $Q$ and every edge bordering $x$ is directed away from $x$. Let $Q'$ be the quiver obtained by reversing all edges of $Q$ which border $x$ (and leaving the other edges alone). Let $V$ be a representation of $Q$. Define $s_x^+ V$ to be the following representation of $Q'$: For every vertex $y$ other than $x$, we have $(s_x^+ V)_y = V_y$. We define $(s_x^+ V)_x$ to be the cokernel of $V_x \to \bigoplus_{e: x \to y} V_y$, where the map is the direct sum of all the individual maps $f_e$. For each edge $e: x \to y$ of $Q$ (with corresponding edge $e': y \to x$ in $Q'$) we define the map $(s_x^+ V)_y \to (s_x^+ V)_x$ by the composition $V_y \hookrightarrow \bigoplus_{e: x \to y'} V_{y'} \twoheadrightarrow (s_x^+ V)_x$, where the first map is the inclusion of the summand indexed by $e$. (For edges of $Q$ which do not border $x$, we just use the same map in $s_x^+ V$ as we had in $V$.)
At this point, an excellent exercise for the reader is to take the quiver with two dots, $x$ and $y$, and two edges pointing from $x$ to $y$, and the representation with $V_x = 0$ and $V_y = k$, and apply $s_x^+$ and $s_y^+$ alternately a bunch of times (a computational sketch appears below). A harder exercise is to do the same with three arrows.
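Here is a numerical sketch of this exercise in Python (numpy over the reals; `left_null_basis` and `reflect_at_source` are my own helper names, and realizing the cokernel as an orthogonal complement is just a convenient concrete choice):

```python
import numpy as np

def left_null_basis(M, tol=1e-9):
    """Rows form an orthonormal basis of the left null space of M; this
    realizes the cokernel of M concretely."""
    if M.shape[1] == 0:
        return np.eye(M.shape[0])
    U, S, _ = np.linalg.svd(M)
    rank = int((S > tol).sum())
    return U[:, rank:].T

def reflect_at_source(A, B):
    """s_x^+ for a two-vertex quiver with two parallel arrows out of the
    source.  A and B map the source space to the target space.  The new
    space at the source is coker(V_src -> V_tgt + V_tgt), and each new map
    is 'include V_tgt as one summand, then project to the cokernel'.  The
    result lives over the reversed quiver, whose source is the old target,
    so the function can simply be applied again."""
    d_tgt = A.shape[0]
    P = left_null_basis(np.vstack([A, B]))
    return P[:, :d_tgt], P[:, d_tgt:]

# Start with the simple representation: 0 at the source, k at the target.
A = np.zeros((1, 0)); B = np.zeros((1, 0))
for step in range(5):
    print("dim at (current source, current target):", (A.shape[1], A.shape[0]))
    A, B = reflect_at_source(A, B)
# Prints (0,1), (1,2), (2,3), (3,4), (4,5).  Since source and target swap
# each time, these are the dimension vectors (0,1), (2,1), (2,3), (4,3), ...
# on the original pair of vertices.
```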
There is also a functor $s_x^-$, which turns $Q'$-representations into $Q$-representations. In this case, $(s_x^- W)_x$ is the kernel of $\bigoplus_{e: y \to x} W_y \to W_x$.
Let $E_x$ and $E'_x$ denote the representations of $Q$ and $Q'$ (respectively) where $V_x$ is one-dimensional and $V_y$ is zero for $y \neq x$. Note that $s_x^+$ and $s_x^-$ annihilate $E_x$ and $E'_x$. Write $\mathrm{Rep}^x(Q)$ for the category of representations of $Q$ which do not contain $E_x$ as a summand. (In other words, where the map $V_x \to \bigoplus_{e: x \to y} V_y$ is injective.) Define $\mathrm{Rep}^x(Q')$ similarly.
Key Fact: The functors $s_x^+$ and $s_x^-$ provide an equivalence of categories between $\mathrm{Rep}^x(Q)$ and $\mathrm{Rep}^x(Q')$. This equivalence commutes with direct sum.
That's some sophisticated language, but the consequences are very down to earth. These functors give a bijection between representations in $\mathrm{Rep}^x(Q)$ and $\mathrm{Rep}^x(Q')$, and between indecomposable representations in the same. So, if we have an indecomposable representation in $\mathrm{Rep}^x(Q)$, we can use these functors to get another one in $\mathrm{Rep}^x(Q')$. If you did the exercise concerning the quiver with two vertices and two edges above, you saw how repeatedly using this trick can give lots of representations.
It is important to understand how $s_x^+$ affects the dimension vector. Again, we'll have to restrict our attention to the subcategory $\mathrm{Rep}^x(Q)$ to get a nice statement. Being in $\mathrm{Rep}^x(Q)$ exactly says that the map $V_x \to \bigoplus_{e: x \to y} V_y$ is injective, so the dimension of $(s_x^+ V)_x$ is $\sum_{e: x \to y} \dim V_y - \dim V_x$. There is a nicer way to write this formula. Define the symmetric bilinear form $\langle d, d' \rangle = \sum_{x \in Q_0} d_x d'_x - \frac{1}{2} \sum_{e: x \to y} (d_x d'_y + d'_x d_y)$. (So $\langle d, d \rangle = q(d)$. Some people like to omit the $1/2$ here, but then it turns up somewhere else.) Let $e_x$ be the basis vector of $\mathbb{Z}^{Q_0}$ corresponding to $x$. Then, if $d$ and $d'$ are the dimension vectors of $V$ and $s_x^+ V$, we have
$$d' = d - 2 \langle d, e_x \rangle e_x.$$
This is exactly the formula for the reflection in the hyperplane orthogonal to $e_x$, if we think of $\langle \cdot, \cdot \rangle$ as the ordinary dot product. (Which is why $s_x^+$ and $s_x^-$ are called "reflection functors".)
The same formula relates the dimension vectors of $W$ and $s_x^- W$.
For any $x \in Q_0$ and any $d \in \mathbb{Z}^{Q_0}$, let's write $s_x(d) = d - 2 \langle d, e_x \rangle e_x$. So we have just shown:
Proposition: Let $x$ be a source of $Q$. If $d$ is a positive root of $Q$, not equal to $e_x$, then $s_x(d)$ is a positive root of $Q'$.
But now, recall Theorem 1. By reversing arrows, we can make any vertex a source. So we deduce:
Corollary: Let $x$ be any vertex of $Q$. If $d$ is a positive root of $Q$, not equal to $e_x$, then $s_x(d)$ is a positive root of $Q$.
What happens if $d = e_x$? Well, $s_x(e_x) = -e_x$, so we obviously can't interpret it as a dimension vector. (Actually, in the derived category, we can. But that's way beyond the scope of this post.)
But, if we formally allow negatives of dimension vectors, we get:
Corollary: Let $\Delta$ be the set of points $d$ in $\mathbb{Z}^{Q_0}$ so that either $d$ or $-d$ is a positive root. Then $\Delta$ is taken to itself by $s_x$ for every vertex $x$ of $Q$.
The group generated by the reflections $s_x$, for $x \in Q_0$, is called the Coxeter group of $Q$. The elements of $\Delta$ are called roots. When our quiver is positive definite, these are the root systems which John Baez is talking about in his excellent course.
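If you want to experiment with these reflections on a computer, here is a minimal sketch in Python (my own encoding: vertices are $0, \ldots, n-1$ and arrows are (source, target) pairs):

```python
def two_pairing(arrows, d, dp):
    """Twice the symmetric bilinear form <d, d'> (always an integer)."""
    val = 2 * sum(a * b for a, b in zip(d, dp))
    for x, y in arrows:
        val -= d[x] * dp[y] + dp[x] * d[y]
    return val

def reflect(arrows, d, x):
    """s_x(d) = d - 2 <d, e_x> e_x."""
    e = tuple(1 if i == x else 0 for i in range(len(d)))
    c = two_pairing(arrows, d, e)
    return tuple(di - c * ei for di, ei in zip(d, e))

# The two-arrow quiver again, starting from the root (0, 1):
arrows = [(0, 1), (0, 1)]
d = (0, 1)
for x in (0, 1, 0, 1):
    d = reflect(arrows, d, x)
    print(d)   # (2,1), (2,3), (4,3), (4,5): matching the functor computation
```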
Stop for a moment. There are a lot of ideas here. I strongly recommend that you pause and work them out for some of your favorite quivers. I recommend the quiver with two vertices and one arrow (type $A_2$) for a starter. For more complicated examples, take one central vertex with either three or four other vertices, each of which has a single arrow pointing towards that central vertex. (These are types $D_4$ and $\widetilde{D}_4$, respectively.) Start with a simple representation and start applying reflections until you figure out what all the roots look like and what the corresponding representations are.
Finally, I want to explain Kac's description of the set of roots $\Delta$. He gives two descriptions. One of them is that $\Delta$ is precisely the set of roots of the corresponding Kac-Moody Lie algebra. Explaining that description would be a whole separate post.
However, he also gives a combinatorial, recursive description which can fit here, and is quite suitable for hand computation.
Let $\Delta$ be, as above, the set of vectors $d$ such that $d$ or $-d$ is a positive root. Then $\Delta$ is the subset of $\mathbb{Z}^{Q_0}$ which is built by the following procedure. First, $\Delta$ contains $e_x$ and $-e_x$ for each $x \in Q_0$. Secondly, let $d$ be in $\Delta$ and let $x$ be a vertex with $d \neq \pm e_x$; set $m = \pm 2 \langle d, e_x \rangle$, where we choose the sign so that $m$ is nonnegative. Then the vectors $d \mp e_x$, $d \mp 2 e_x$, …, $d \mp m e_x = s_x(d)$ (stepping from $d$ toward $s_x(d)$, with the sign opposite the one we just chose) are also in $\Delta$.
Let's run a tiny example: the quiver with two vertices and a single arrow. We know that $\Delta$ must contain $\pm(1,0)$ and $\pm(0,1)$. Taking $d = (0,1)$ and $x$ the first vertex, we have $m = -2\langle (0,1), e_1 \rangle = 1$. So this tells us that $(1,1)$ is in $\Delta$. Taking $x$ again, and $d = (0,-1)$, we get that $(-1,-1)$ is in $\Delta$. Keeping going this way, we get $(1,0)$, $(0,1)$, $(1,1)$, $(-1,0)$, $(0,-1)$ and $(-1,-1)$. And then the process stops: taking any $d$ from this set, and either vertex $x$, just gives us vectors already in the set. I encourage the reader to try some larger examples. When your quiver is positive definite, you should see the process terminate. When it is positive semi-definite, but not positive definite, the process will not terminate, but it will quickly settle into a recognizable pattern. When you get outside the positive semi-definite case, the process will explode very quickly.
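Here is a sketch of this procedure in Python (my own encoding again; the cap on the number of additions is there because outside the positive definite case the set is infinite):

```python
# As in the earlier sketch: twice the bilinear form <d, d'> (an integer).
def two_pairing(arrows, d, dp):
    val = 2 * sum(a * b for a, b in zip(d, dp))
    for x, y in arrows:
        val -= d[x] * dp[y] + dp[x] * d[y]
    return val

def generate_roots(n, arrows, max_adds=60):
    """Build Delta by the string rule: start with +/- e_x; for d in Delta
    (d != +/- e_x) and each vertex x, add every vector on the chain from
    d to s_x(d)."""
    deltas, frontier = set(), []
    for x in range(n):
        for sign in (1, -1):
            v = tuple(sign if i == x else 0 for i in range(n))
            deltas.add(v); frontier.append(v)
    adds = 0
    while frontier and adds < max_adds:
        d = frontier.pop()
        for x in range(n):
            if all(d[i] == 0 for i in range(n) if i != x):
                continue                    # the rule skips d = +/- e_x
            e = tuple(1 if i == x else 0 for i in range(n))
            c = two_pairing(arrows, d, e)
            step = -1 if c > 0 else 1       # move from d toward s_x(d)
            for k in range(1, abs(c) + 1):
                new = tuple(di + step * k * ei for di, ei in zip(d, e))
                if new not in deltas:
                    deltas.add(new); frontier.append(new)
                    adds += 1
    return deltas

# One arrow between two vertices: exactly the six vectors found above.
print(sorted(generate_roots(2, [(0, 1)])))
# Two arrows: the process never stops; raising max_adds shows the affine
# pattern (n, n), (n, n+1), (n+1, n) (and its negatives) filling in.
```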
One interesting thing you will see here is that, for every $d$ in $\Delta$, either the coordinates $d_x$ are all nonnegative or all nonpositive. From the description of $\Delta$ as the union of the positive roots and the negatives of the positive roots, this is obvious. I think that it is very hard to see from the combinatorial description.
As I said, the discussion of roots has to be at the beginning of any discussion of quivers. After that, there are a lot of different directions to go in. What do people want to read about?