Consider two finite dimensional vector spaces $latex V$ and $latex W$ and a linear map $latex f : V \to W$ between them. Then we can decompose $latex V$ as $latex \ker(f) \oplus C$, where $latex \ker(f)$ is the kernel of $latex f$ and $latex C$ is any subspace transverse to $latex \ker(f)$. Similarly, we can write $latex W$ as $latex \mathrm{Im}(f) \oplus D$, where $latex \mathrm{Im}(f)$ is the image of $latex f$. So we can write $latex f$ as the direct sum of $latex \ker(f) \to 0$, the identity map $latex C \to \mathrm{Im}(f)$ and $latex 0 \to D$. At the cost of making some very arbitrary choices, we may simplify even more and say that we can express $latex f$ as the sum of three types of maps: $latex k \to 0$, the identity map $latex k \to k$ and $latex 0 \to k$ (where $latex k$ is our ground field.)

Now, suppose that we have **two** maps, $latex f$ and $latex g$, from $latex V$ to $latex W$. We’ll start with the case that $latex V$ and $latex W$ have the same dimension. If $latex f$ is bijective, then we can choose bases for $latex V$ and $latex W$ so that $latex f$ is the identity. Once we have done that, we still have some freedom to change bases further. Assuming that $latex k$ is algebraically closed, we can use this freedom to put $latex g$ into Jordan normal form. In other words, we can choose bases such that $latex (f, g)$ is a direct sum of pairs like

$latex \displaystyle{ \left( \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} \right) }$.

(Here several different values of $latex \lambda$ may occur in the various summands and, of course, the matrices can be sizes other than $latex 3 \times 3$.) If we don’t assume that $latex f$ is bijective (and if we want to allow $latex V$ and $latex W$ to have different dimensions) we get a few more cases. But the basic picture is not much worse: in addition to the summands above, we also need to consider the maps

$latex \displaystyle{ \left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \right) }$

(for various sizes $latex (n+1) \times n$, not just $latex 3 \times 2$) and the transposes of these. These three possibilities, and their direct sums, describe all pairs $latex (f, g)$ up to isomorphism.

Now, consider the case of **three** maps. As the dimensions of $latex V$ and $latex W$ grow, so does the number of parameters necessary to describe the possible cases. Moreover, almost all cases cannot be decomposed as direct sums. More precisely, as long as $latex \dim W / \dim V$ is between $latex (3 - \sqrt{5})/2$ and $latex (3 + \sqrt{5})/2$, the maps which can be expressed as direct sums of simpler maps have measure zero in the space of all triples $latex (f, g, h)$. (Where did that number come from? Stay tuned!) In the opinion of experts, there will probably never be any good classification of all triples of maps.

The subject of quivers was invented to systematize this sort of analysis. It’s become a very large subject, so I can’t hope to summarize it in one blog post. But I think it is fair to say that anyone who wants to think about quivers needs to start by learning the connection to root systems. So that’s what I’ll discuss here.

A quiver is simply another name for a directed graph. This means that we have a finite set of dots, called $latex Q_0$, and a finite bunch of arrows, called $latex Q_1$, with each arrow pointing from one element of $latex Q_0$ to another. (Some sources will let an arrow point from a dot to itself, but I’m going to forbid that.) The quiver as a whole is denoted $latex Q$. A *representation* of $latex Q$ consists of (1) for each dot $latex x$, a finite dimensional vector space $latex V_x$ and (2) for each arrow $latex a$ from $latex x$ to $latex y$, a map $latex f_a : V_x \to V_y$. Note that there are no relations imposed between these maps. The *dimension vector* of the representation is the vector in $latex \mathbb{Z}_{\geq 0}^{Q_0}$ whose $latex x$-th component is $latex \dim V_x$. It is common in quiver theory to fix a dimension vector $latex d$ and try to study all representations of that dimension. If we have two representations $latex V$ and $latex W$ of the same quiver, their direct sum, $latex V \oplus W$, is defined in the obvious way: $latex (V \oplus W)_x = V_x \oplus W_x$, and similarly for the maps. A representation is called **indecomposable** if it cannot be written as a direct sum of smaller representations. A vector in $latex \mathbb{Z}_{\geq 0}^{Q_0}$ is called a **positive root** if it is the dimension vector of an indecomposable representation.

Our three examples above corresponded to the case where $latex Q_0$ had two elements and there were one, two or three arrows (respectively) connecting them, all in the same direction. The positive roots were $latex (1,0)$, $latex (0,1)$ and $latex (1,1)$ in the first case and $latex (n, n+1)$, $latex (n+1, n)$ (for all nonnegative $latex n$) and $latex (n,n)$ (for all positive $latex n$) in the second.

We can now state a very surprising result, due to Kac:

**Theorem 1: **The set of positive roots is unchanged by reversing edges of $latex Q$.

Kac went much further than that, and gave an explicit description of the set of roots. In the rest of this post, I will explain that result. This post draws heavily on Kac’s paper Infinite root systems, representations of graphs and invariant theory.

## The quadratic form

Let’s fix a dimension vector $latex d$ and try to figure out how many nonisomorphic quiver representations of dimension $latex d$ we expect. Let $latex \mathrm{Rep}(d)$ be $latex \bigoplus_{a : x \to y} \mathrm{Mat}_{d_y \times d_x}$, where $latex \mathrm{Mat}_{d_y \times d_x}$ is the space of $latex d_y \times d_x$ matrices. So every point in $latex \mathrm{Rep}(d)$ gives a representation of $latex Q$ of dimension $latex d$. (Just view a matrix in $latex \mathrm{Mat}_{d_y \times d_x}$ as a map from $latex k^{d_x}$ to $latex k^{d_y}$.) Two points in $latex \mathrm{Rep}(d)$ give isomorphic representations if we can obtain one from the other by changing bases.

It will be worthwhile to write out what “changing bases” means in a very formal way. Let $latex GL(d) = \prod_{x \in Q_0} GL_{d_x}$. Then $latex GL(d)$ acts on $latex \mathrm{Rep}(d)$: if $latex (g_x)_{x \in Q_0}$ is an element of $latex GL(d)$ and $latex (M_a)_{a \in Q_1}$ is an element of $latex \mathrm{Rep}(d)$ then $latex (g_x) \cdot (M_a) = (g_y M_a g_x^{-1})_{a : x \to y}$. Isomorphism classes of $latex d$-dimensional representations of $latex Q$ correspond to orbits of $latex GL(d)$ on $latex \mathrm{Rep}(d)$.

Let’s do a dimension count. $latex \mathrm{Rep}(d)$ is a vector space of dimension $latex \sum_{a : x \to y} d_x d_y$. The group $latex GL(d)$ has dimension $latex \sum_{x \in Q_0} d_x^2$. We set $latex E(d) = \sum_{x \in Q_0} d_x^2 - \sum_{a : x \to y} d_x d_y$. Notice that $latex E$ is unchanged by reversing edges of $latex Q$, consistent with Theorem 1. So, if $latex E(d)$ is very negative, we expect to have a lot of nonisomorphic representations of dimension $latex d$, because $latex \mathrm{Rep}(d)$ will be much larger than $latex GL(d)$. Since not that many representations can be expressed as direct sums, when $latex E(d)$ is negative, we expect $latex d$ to be a positive root.
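As a sanity check on this count, here is a small script (my own sketch; the helper names are invented, not from any quiver library) computing the dimensions of $latex \mathrm{Rep}(d)$ and $latex GL(d)$, and the quadratic form $latex E(d)$, for a quiver given by its list of arrows.

```python
# Sketch of the dimension count above. A quiver is a number of vertices
# plus a list of arrows (x, y); d is a tuple of vertex dimensions.

def dim_rep(d, arrows):
    """dim Rep(d) = sum over arrows x -> y of d_x * d_y."""
    return sum(d[x] * d[y] for (x, y) in arrows)

def dim_gl(d):
    """dim GL(d) = sum over vertices of d_x^2."""
    return sum(dx * dx for dx in d)

def tits_form(d, arrows):
    """E(d) = dim GL(d) - dim Rep(d)."""
    return dim_gl(d) - dim_rep(d, arrows)

# Two vertices joined by three arrows, all in the same direction:
three_arrows = [(0, 1)] * 3
print(tits_form((1, 1), three_arrows))  # 1 + 1 - 3 = -1
# E only sees the underlying graph, so reversing arrows changes nothing:
print(tits_form((2, 3), three_arrows) == tits_form((2, 3), [(1, 0)] * 3))  # True
```

For the three-arrow quiver, $latex E(1,1) = -1$ is negative, matching the expectation that $latex (1,1)$ is a positive root there.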

In the other direction, if $latex E(d)$ is very positive, we expect representations of dimension $latex d$ to have large automorphism groups. Now, every point in $latex \mathrm{Rep}(d)$ has at least a one dimensional stabilizer: just multiply each vector space by the same scalar. Let $latex M$ be a representation of $latex Q$ whose stabilizer is larger than this. Then we should “expect” $latex M$ to be decomposable. I’ll give a more precise statement when our ground field is $latex \mathbb{C}$; the generalization to fields other than $latex \mathbb{C}$ requires the vocabulary of algebraic groups, so I won’t get into it. Let $latex \mathrm{Stab}(M)$ be the stabilizer of $latex M$ in $latex GL(d)$. Then $latex M$ is decomposable if and only if $latex \mathrm{Stab}(M)$ contains a copy of $latex \mathbb{C}^*$ other than the trivial one mentioned above. The proof is simple: if $latex M = M' \oplus M''$, we can get a nontrivial stabilizer by rescaling $latex M'$ and leaving $latex M''$ alone. Conversely, suppose we have a nontrivial $latex \mathbb{C}^*$ in the stabilizer, which we’ll write as $latex t \mapsto (g_x(t))_{x \in Q_0}$. Then we have $latex M = \bigoplus_{r \in \mathbb{Z}} M^{(r)}$, where $latex M^{(r)}_x = \{ v \in M_x : g_x(t) v = t^r v \}$. So, if $latex E(d)$ is positive, we should expect representations of dimension $latex d$ to decompose as direct sums. In other words, when $latex E(d)$ is positive, we should expect $latex d$ not to be a positive root.

## Kac’s result, special case

The quadratic form $latex E$ is called **positive semi-definite** if $latex E(x) \geq 0$ for all $latex x$. It is called **positive definite** if it is positive semi-definite and $latex E(x) = 0$ only when $latex x = 0$. Finally, $latex E$ is called **hyperbolic or better** if, for every $latex v \in Q_0$, the restriction of $latex E$ to the hyperplane where the $latex v$-th coordinate is zero is positive semidefinite. “Hyperbolic or better” is my own terminology. The definition of **hyperbolic**, which is a more standard term, is “hyperbolic or better, but not positive semi-definite”.

The cases where $latex E$ is positive definite correspond to the simply-laced Dynkin diagrams: that is, $latex A_n$, $latex D_n$, and $latex E_6$, $latex E_7$, $latex E_8$. The correspondence is simply to draw the graph and ignore the directions of the arrows. The positive semi-definite and hyperbolic cases have also been completely classified. (I’m too lazy to draw all those Dynkin diagrams right now. The positive definite types are the diagrams $latex A_n$, $latex D_n$, $latex E_6$, $latex E_7$ and $latex E_8$ here; the positive semi-definite quivers, besides the positive definite ones, are the types $latex \tilde{A}_n$, $latex \tilde{D}_n$, $latex \tilde{E}_6$, $latex \tilde{E}_7$ and $latex \tilde{E}_8$ here.)

**Theorem (Kac): **Suppose that $latex E$ is hyperbolic or better, and let $latex d$ be a dimension vector. Then:

If $latex E(d) > 1$, then every representation of $latex Q$ of dimension $latex d$ is decomposable.

If $latex E(d) = 1$, then there is (up to isomorphism) a unique indecomposable representation of dimension $latex d$.

If $latex E(d) \leq 0$ and $latex k$ is infinite, then there are infinitely many nonisomorphic indecomposable representations of dimension $latex d$.

In short, with the hypotheses on $latex E$ in place, $latex d$ is a positive root if and only if $latex E(d) \leq 1$.

Let’s look at our starting examples: if $latex Q$ is the quiver corresponding to a single map from one vector space to another, then $latex E(x,y) = x^2 + y^2 - xy$. This is positive definite. The only times that it is as small as $latex 1$ are $latex (1,0)$, $latex (0,1)$ and $latex (1,1)$. Indeed, these correspond to the three summands that turned up in that case. If $latex Q$ corresponds to having two maps, then $latex E(x,y) = x^2 + y^2 - 2xy = (x-y)^2$. This is evidently positive semi-definite. We have $latex E(x,y) = 1$ when $latex (x,y)$ is of the form $latex (n, n+1)$ or $latex (n+1, n)$, and we found a unique indecomposable representation in that case. We have $latex E(x,y) = 0$ exactly when $latex (x,y) = (n,n)$ for some $latex n$; in that case our indecomposable representation depended on a parameter $latex \lambda$. Finally, when there were three maps, $latex E(x,y) = x^2 + y^2 - 3xy$. We can now understand why it was important to have $latex y/x$ between $latex (3 - \sqrt{5})/2$ and $latex (3 + \sqrt{5})/2$: that’s exactly when $latex E(x,y) \leq 0$. (Exercise: For which integers $latex (x,y)$ do we have $latex x^2 - 3xy + y^2 = 1$?)

Our last example will demonstrate that the heuristics in the previous section were too simple. Consider the quiver with three vertices, $latex u$, $latex v$ and $latex w$, and three maps $latex u \to v$, $latex v \to w$ and $latex u \to w$. (This is NOT cyclically symmetric.) So $latex E(x,y,z) = x^2 + y^2 + z^2 - xy - yz - xz$, which is $latex \frac{1}{2} \left( (x-y)^2 + (y-z)^2 + (x-z)^2 \right)$ and hence positive semi-definite. We’ll consider the dimension vector $latex (1,2,1)$, for which $latex E(1,2,1) = 1$. Kac’s theorem tells us that there should be a single (up to isomorphism) indecomposable representation of this dimension and, indeed, there is: take $latex V_u = k$, $latex V_v = k^2$ and $latex V_w = k$, with maps $latex \begin{pmatrix} 1 \\ 0 \end{pmatrix} : V_u \to V_v$, $latex \begin{pmatrix} 0 & 1 \end{pmatrix} : V_v \to V_w$ and $latex 1 : V_u \to V_w$. In this example, $latex GL(d)$ has dimension $latex 6$ and $latex \mathrm{Rep}(d)$ has dimension $latex 5$. The heuristics of the previous section suggest that the indecomposable representation, above, should lie in a dense orbit, and have only the trivial stabilizer. In fact, the orbit in question is four-dimensional, and the stabilizer is isomorphic to $latex k^* \times k$, with the second factor additive. (Exercise!) But this group contains no nontrivial copy of $latex k^*$, so the representation is indecomposable as desired.
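We can verify the claimed dimensions by linear algebra: the stabilizer has the same dimension as the endomorphism space $latex \{ (A_x) : A_y M_a = M_a A_x \text{ for each arrow } a : x \to y \}$, which is cut out by linear equations. Here is a quick numerical check (my own code, with made-up variable names, using the representation above):

```python
import numpy as np

# Vertices u, v, w with d = (1, 2, 1); arrows u->v, v->w, u->w.
# End(M) = {(A_u, A_v, A_w) : A_target @ M_a == M_a @ A_source for each arrow}.
d = {'u': 1, 'v': 2, 'w': 1}
arrows = [('u', 'v'), ('v', 'w'), ('u', 'w')]
maps = {('u', 'v'): np.array([[1.0], [0.0]]),
        ('v', 'w'): np.array([[0.0, 1.0]]),
        ('u', 'w'): np.array([[1.0]])}

# Unknowns: the entries of one square matrix A_x per vertex.
offsets, total = {}, 0
for x in d:
    offsets[x] = total
    total += d[x] * d[x]

rows = []
for (s, t) in arrows:
    M = maps[(s, t)]
    # Condition A_t M - M A_s = 0: one scalar equation per matrix entry.
    for i in range(d[t]):
        for j in range(d[s]):
            row = np.zeros(total)
            for p in range(d[t]):   # (A_t M)_{ij} = sum_p (A_t)_{ip} M_{pj}
                row[offsets[t] + i * d[t] + p] += M[p, j]
            for q in range(d[s]):   # (M A_s)_{ij} = sum_q M_{iq} (A_s)_{qj}
                row[offsets[s] + q * d[s] + j] -= M[i, q]
            rows.append(row)

dim_end = total - np.linalg.matrix_rank(np.array(rows))
dim_gl = sum(n * n for n in d.values())          # = 6
dim_rep = sum(d[s] * d[t] for (s, t) in arrows)  # = 5
print(dim_end, dim_gl - dim_end)  # 2 4
```

The endomorphism space is 2-dimensional (the scalars plus one nilpotent direction), so the orbit has dimension $latex 6 - 2 = 4$, as claimed.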

## Reflection Functors

The reflection functors, discovered by Bernstein, Gelfand and Ponomarev, are an important way to make new quiver representations from old. Let $latex x$ be a source of $latex Q$; meaning that $latex x$ is a vertex of $latex Q$ and every edge bordering $latex x$ is directed away from $latex x$. Let $latex s_x Q$ be the quiver obtained by reversing all edges of $latex Q$ which border $latex x$ (and leaving the other edges alone). Let $latex V$ be a representation of $latex Q$. Define $latex s_x^- V$ to be the following representation of $latex s_x Q$: For every vertex $latex y$ other than $latex x$, we have $latex (s_x^- V)_y = V_y$. We define $latex (s_x^- V)_x$ to be the cokernel of $latex V_x \to \bigoplus_{a : x \to y} V_y$, where the map is the direct sum of all the individual maps $latex V_x \to V_y$. For each edge $latex y \to x$ of $latex s_x Q$ (with corresponding edge $latex x \to y$ in $latex Q$) we define the map $latex V_y \to (s_x^- V)_x$ by the composition $latex V_y \hookrightarrow \bigoplus_{a : x \to z} V_z \twoheadrightarrow (s_x^- V)_x$. (For edges of $latex s_x Q$ which do not border $latex x$, we just use the same map in $latex s_x^- V$ as we had in $latex V$.)

At this point, an excellent exercise for the reader is to take the quiver with two dots, $latex u$ and $latex v$, and two edges pointing from $latex u$ to $latex v$, and the representation with $latex V_u = 0$ and $latex V_v = k$, and apply $latex s_u^-$ and $latex s_v^-$ alternately a bunch of times. A harder exercise is to do the same with three arrows.

There is also a functor $latex s_x^+$, which turns $latex s_x Q$ representations into $latex Q$ representations. In this case, $latex (s_x^+ M)_x$ is the kernel of the map $latex \bigoplus_{a : y \to x} M_y \to M_x$.

Let $latex D_x$ and $latex D'_x$ denote the representations of $latex Q$ and $latex s_x Q$ (respectively) where $latex V_x$ is one-dimensional and $latex V_y$ is zero for $latex y \neq x$. Note that $latex s_x^-$ and $latex s_x^+$ annihilate $latex D_x$ and $latex D'_x$. Write $latex \mathrm{Rep}^x(Q)$ for the category of representations of $latex Q$ which do not contain $latex D_x$ as a summand. (In other words, where the map $latex V_x \to \bigoplus_{a : x \to y} V_y$ is injective.) Define $latex \mathrm{Rep}^x(s_x Q)$ similarly.

**Key Fact**: The functors $latex s_x^-$ and $latex s_x^+$ provide an equivalence of categories between $latex \mathrm{Rep}^x(Q)$ and $latex \mathrm{Rep}^x(s_x Q)$. This equivalence commutes with direct sum.

That’s some sophisticated language, but the consequences are very down to earth. These functors give a bijection between representations in $latex \mathrm{Rep}^x(Q)$ and $latex \mathrm{Rep}^x(s_x Q)$, and between indecomposable representations in the same. So, if we have an indecomposable representation in $latex \mathrm{Rep}^x(Q)$, we can use these functors to get another one in $latex \mathrm{Rep}^x(s_x Q)$. If you did the exercise concerning the quiver with two vertices and two edges above, you saw how repeatedly using this trick can give lots of representations.

It is important to understand how $latex s_x^-$ affects the dimension vector. Again, we’ll have to restrict our attention to the subcategory $latex \mathrm{Rep}^x(Q)$ to get a nice statement. Being in $latex \mathrm{Rep}^x(Q)$ exactly says that the map $latex V_x \to \bigoplus_{a : x \to y} V_y$ is injective, so the dimension of $latex (s_x^- V)_x$ is $latex \sum_{a : x \to y} \dim V_y - \dim V_x$. There is a nicer way to write this formula. Define the symmetric bilinear form $latex (d, e) = \sum_{x \in Q_0} d_x e_x - \frac{1}{2} \sum_{a : x \to y} (d_x e_y + d_y e_x)$. (So $latex (d, d) = E(d)$. Some people like to omit the $latex 1/2$ here, but then it turns up somewhere else.) Let $latex e_x$ be the basis vector of $latex \mathbb{Z}^{Q_0}$ corresponding to $latex x$. Then, if $latex d$ and $latex d'$ are the dimension vectors of $latex V$ and $latex s_x^- V$, we have

$latex \displaystyle{ d' = d - 2 (d, e_x) e_x }$.

This is exactly the formula for the reflection in the hyperplane orthogonal to $latex e_x$, if we think of $latex ( \ , \ )$ as the ordinary dot product. (Which is why $latex s_x^+$ and $latex s_x^-$ are called “reflection functors”.)

The same formula relates the dimension vectors of $latex M$ and $latex s_x^+ M$.

For any $latex x$ and $latex d$, let’s write $latex s_x(d) = d - 2(d, e_x) e_x$. So we have just shown:

**Proposition** Let $latex x$ be a source of $latex Q$. If $latex d$ is a positive root of $latex Q$, not equal to $latex e_x$, then $latex s_x(d)$ is a positive root of $latex s_x Q$.

But now, recall Theorem 1. By reversing arrows, we can make any vertex a source. So we deduce:

**Corollary** Let $latex x$ be any vertex of $latex Q$. If $latex d$ is a positive root of $latex Q$, not equal to $latex e_x$, then $latex s_x(d)$ is a positive root of $latex Q$.

What happens if $latex d = e_x$? Well, $latex s_x(e_x) = -e_x$, so we obviously can’t interpret it as a dimension vector. (Actually, in the derived category, we can. But that’s way beyond the scope of this post.)

But, if we formally allow negatives of dimension vectors, we get:

**Corollary** Let $latex \Delta$ be the set of points $latex d$ in $latex \mathbb{Z}^{Q_0}$ so that either $latex d$ or $latex -d$ is a positive root. Then $latex \Delta$ is taken to itself by $latex s_x$ for every vertex $latex x$ of $latex Q$.

The group generated by the reflections $latex s_x$, for $latex x \in Q_0$, is called the Coxeter group of $latex Q$. The elements of $latex \Delta$ are called **roots**. When our quiver is positive definite, these are the root systems which John Baez is talking about in his excellent course.
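To see the reflections in action, here is a small sketch (my own code; `bilinear` and `reflect` are invented names) that applies $latex s_x(d) = d - 2(d, e_x) e_x$ for the two-vertex quiver with two parallel arrows. Starting from $latex (0,1)$ and alternating the two reflections regenerates the real roots $latex (n, n \pm 1)$ found earlier.

```python
from fractions import Fraction

def bilinear(d, e, edges, n):
    """(d, e) = sum d_x e_x - (1/2) sum over edges {x, y} of (d_x e_y + d_y e_x)."""
    val = Fraction(sum(d[i] * e[i] for i in range(n)))
    for (x, y) in edges:
        val -= Fraction(d[x] * e[y] + d[y] * e[x], 2)
    return val

def reflect(d, x, edges, n):
    """s_x(d) = d - 2 (d, e_x) e_x."""
    ex = [1 if i == x else 0 for i in range(n)]
    c = 2 * bilinear(d, ex, edges, n)   # always an integer
    return tuple(d[i] - int(c) * ex[i] for i in range(n))

edges = [(0, 1), (0, 1)]  # two arrows between the two vertices
d = (0, 1)
orbit = [d]
for x in [0, 1, 0, 1]:    # apply s_0 and s_1 alternately
    d = reflect(d, x, edges, len(d))
    orbit.append(d)
print(orbit)  # [(0, 1), (2, 1), (2, 3), (4, 3), (4, 5)]
```

Each step produces the next dimension vector on the $latex (n, n \pm 1)$ ladder, mirroring the exercise with the reflection functors above.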

## A pause

Stop for a moment. There are a lot of ideas here. I strongly recommend that you pause and work them out for some of your favorite quivers. I recommend the two-vertex quiver with a single arrow for a starter. For more complicated examples, take one central vertex with either three or four other vertices, each of which has a single arrow pointing towards that central vertex. (These are the types $latex D_4$ and $latex \tilde{D}_4$, respectively.) Start with a simple representation and start applying reflections until you figure out what all the roots look like and what the corresponding representations are.

## Kac’s Result

Finally, I want to explain Kac’s description of the set of roots. He gives two descriptions. One of them is that $latex \Delta$ is precisely the set of roots of the corresponding Kac-Moody Lie algebra. Explaining that description would be a whole separate post.

However, he also gives a combinatorial, recursive description which can fit here, and is quite suitable for hand computation.

Let $latex ( \ , \ )$ be the symmetric bilinear form defined above. Then $latex \Delta$ is the subset of $latex \mathbb{Z}^{Q_0}$ which is built by the following procedure. First, $latex \Delta$ contains $latex e_x$ for each $latex x \in Q_0$. Secondly, let $latex d \in \Delta$ and let $latex x \in Q_0$, and set $latex r = \pm 2(d, e_x)$, where we choose the sign so that $latex r$ is nonnegative. Then $latex d \mp e_x$, $latex d \mp 2 e_x$, …, $latex d \mp r e_x$ are also in $latex \Delta$ (discarding the zero vector if it occurs).

Let’s run a tiny example: the quiver $latex \bullet \to \bullet$. We know that $latex \Delta$ must contain $latex e_1 = (1,0)$ and $latex e_2 = (0,1)$. Taking $latex d = (1,0)$ and $latex x = 2$, we have $latex r = -2(d, e_2) = 1$. So this tells us that $latex (1,1)$ is in $latex \Delta$. Taking $latex x = 2$ again, and $latex d = (0,1)$, we get that $latex (0,0)$ (which we discard) and $latex (0,-1)$ are in $latex \Delta$. Continuing this way, we get $latex (1,0)$, $latex (0,1)$, $latex (1,1)$, $latex (0,-1)$, $latex (-1,0)$ and $latex (-1,-1)$. And then the process stops. Taking any $latex d$ from this set, and $latex x = 1$ or $latex x = 2$, just gives us more vectors in this set. I encourage the reader to try some larger examples. When your quiver is positive definite, you should see the process terminate. When it is positive semi-definite, but not positive definite, the process will not terminate, but it will quickly settle into a recognizable pattern. When you get outside the positive semi-definite case, the process will explode very quickly.
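Here is a sketch of the recursive procedure in code (my own implementation of the description above, so treat the details as an assumption): start from the unit vectors and repeatedly add the vectors on the string from $latex d$ to $latex s_x(d)$, discarding zero.

```python
from fractions import Fraction
from itertools import product

def generate_roots(n, edges, max_rounds=10):
    """Close {e_1, ..., e_n} under root strings d, d -/+ e_x, ..., s_x(d)."""
    def pairing(d, e):
        v = Fraction(sum(di * ei for di, ei in zip(d, e)))
        for (x, y) in edges:
            v -= Fraction(d[x] * e[y] + d[y] * e[x], 2)
        return v

    basis = [tuple(1 if i == x else 0 for i in range(n)) for x in range(n)]
    delta = set(basis)
    for _ in range(max_rounds):
        new = set()
        for d, x in product(list(delta), range(n)):
            c = 2 * pairing(d, basis[x])
            step = -1 if c > 0 else 1   # move toward s_x(d)
            for m in range(1, abs(int(c)) + 1):
                v = tuple(di + step * m * ei for di, ei in zip(d, basis[x]))
                if any(v) and v not in delta:   # discard the zero vector
                    new.add(v)
        if not new:
            break
        delta |= new
    return delta

# One arrow between two vertices:
roots = generate_roots(2, [(0, 1)])
print(sorted(roots))
# [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, 0), (1, 1)]
```

Running it on the one-arrow quiver reproduces the six vectors found above; swapping in `[(0, 1)] * 2` for the edge list shows the non-terminating positive semi-definite behavior (cut off here by `max_rounds`).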

One interesting thing you will see here is that, for every $latex d$ in $latex \Delta$, either the components $latex d_x$ are all nonnegative or all nonpositive. From the description of $latex \Delta$ as the union of the positive roots and the negatives of the positive roots, this is obvious. I think that it is very hard to see from the combinatorial description.

## What next?

As I said, the discussion of roots has to be at the beginning of any discussion of quivers. After that, there are a lot of different directions to go in. What do people want to read about?

> Some sources will let an arrow point from a dot to itself, but I’m going to forbid that.

1. Prude.

2. “…, and some targets will let…”

> By reversing arrows, we can make any vertex a source.

I presume that’s why you forbade self-arrows. But you’re also screwed by any oriented cycle, no?

> What do people want to read about?

1. Schur roots.

2. Bobinski-Zwara’s proof that quiver cycles for D_n are Cohen-Macaulay.

Is there a good category-theoretic description of this picture? It looks like representations are diagrams in Vect, but the lack of relations means the source category is somehow freely generated by the quiver arrows, like a category of finite directed paths with concatenation as composition. Is this viewpoint useful?

Scott, as I understand it a quiver representation is a representation of the path category of the graph. That is, it’s a functor from the path category to the category of vector spaces. Useful? I don’t know, but it at least puts quiver representations on a footing with, say, tangle representations or fundamental groupoid representations.

Just to amplify the joint meaning of the above two comments:

A quiver representation is a representation of the free category over the quiver=finite graph.

Here the functor which sends graphs to the free categories over them is the left adjoint to the forgetful functor from categories to graphs, which forgets the composition and unit operations in the category.

And generalizing the way group representations correspond to modules for the corresponding group algebras, category representations correspond to modules of the corresponding category algebra.

And the category algebra of a free category is the quiver algebra of the underlying graph.

Scott: John’s explanation of the correct category theoretic framework is precisely correct. Notice that this is not just a category but an abelian category. The category theoretic language is useful in several ways:

(1) It means that all the language of homological algebra is available. In particular, here are two important results: (a) The quiver category has homological dimension 1. This means that, for any two quiver representations M and N, we have Ext^i(M, N) = 0 for i ≥ 2. (b) We explicitly know the projective and injective objects in the quiver category. For acyclic quivers, the result of applying reflection at each vertex once (in a certain manner) gives a functor called the Coxeter functor, which takes any semi-simple element to its projective cover.

(2) Categories that show up in other contexts are often equivalent to the category of representations of some quiver. Here is one example: Let Q be a quiver, and I might need to impose that it be acyclic, I’m not sure. Let M be a nonzero representation of Q. We define the perpendicular category M^perp to be the full subcategory of the category of quiver representations, whose objects are those representations N such that Hom(M, N) = Ext^1(M, N) = 0. Then there is a (unique) quiver Q’ such that M^perp is equivalent to the category of representations of Q’. By the way, I haven’t seen anyone give a combinatorial rule for getting Q’ from Q and M; I’d be interested in seeing one.

Another example, I think, is that the category of constructible sheaves on various algebraic objects often turns out to be equivalent to a category of representations of some quiver. Ben, I believe, has discovered that this occurs in the case of a hypertoric variety; perhaps he’ll write up an example for us. (hint, hint)

(3, and 2 continued) In particular, people now seem to want to work in the derived category of the category of quiver representations, and in a certain quotient category of that called the cluster category. When they do this, they often get the category of coherent sheaves on a Calabi-Yau threefold. I don’t understand this at all, but it seems to have the physicists very excited.

Allen:

> By reversing arrows, we can make any vertex a source.
>
> I presume that’s why you forbade self-arrows. But you’re also screwed by any oriented cycle, no?

No, Kac’s result is a lot stronger than that. I can reverse arrows individually, not just by reflection functors.

I don’t know the Bobinski-Zwara result. Schur roots, though, I can probably say something about.

Thanks for the explanations. With all the Exts floating around, it sounds like quiver representations in dgVect might be interesting.

It’s not too surprising that you can encode cycle-free quivers using sheaves on stratified spaces, since the arrows in the quiver can be made to correspond to specialization maps, but I’m a bit surprised that you get categories of constructible sheaves, rather than more restricted objects like combinatorial sheaves (in the sense of Getzler-Kapranov – they are constant on strata, rather than locally constant). I suppose if all of your strata are simply connected, the notions are equivalent. Alternatively, my encoding scheme could be all wrong.

Does nobody find it odd that usually vector spaces are spaces of “something” (say functions) but here, we are doing operations on the vector spaces themselves, treating them like objects?

Not really, john. At that rate, shouldn’t we find it odd that numbers are usually numbers of “something” (say apples), but in arithmetic we are doing operations on the numbers themselves, treating them like objects?

Is it possible for reflection functors to take representations with the same dimension vector to representations of different dimension vectors? I guess I am asking if the reflection functor is well-defined as a map on the space of dimension vectors.

Yes, but only in a very limited way. Suppose that we have a quiver representation M and we want to reflect at a source v. We can write M as S + N, where S is a representation concentrated solely on the vertex v, and N is a representation with the following property: the intersection of the kernels of the maps v –> w, over all arrows v –> w, is zero.

Canonically, we have a short exact sequence

0 –> S –> M –> N –> 0,

which is noncanonically split. (Define S as the intersection of the kernels above.)

Now, the reflection functors preserve direct sums, so the reflection of M is the direct sum of the reflections of S and N. The reflection of S is zero, and the dimension vector of the reflection of N depends only on the dimension vector of N. So, if there is no S-summand, the dimension vector of the reflected representation depends only on that of the original representation.

In particular, the reflection functors preserve the property of having no nontrivial direct sum decomposition so, when acting on such indecomposable representations, the dimension vector of the reflection is determined by the dimension vector of the original.