# Request: Quivers and Roots

Consider two finite dimensional vector spaces $A$ and $B$ and a linear map $\phi$ between them. Then we can decompose $A$ as $K \oplus R$ where $K$ is the kernel of $\phi$ and $R$ is any subspace transverse to $K$. Similarly, we can write $B$ as $I \oplus C$ where $I$ is the image of $\phi$. So we can write $\phi$ as the direct sum of the zero map $K \to 0$, an isomorphism $R \to I$ (the restriction of $\phi$) and the zero map $0 \to C$. At the cost of making some very arbitrary choices, we may simplify even more and say that we can express $\phi$ as the direct sum of three types of maps: $0 \to k$, the identity map $k \to k$ and $k \to 0$ (where $k$ is our ground field).

Now, suppose that we have two maps, $\phi$ and $\psi$ from $A$ to $B$. We’ll start with the case that $A$ and $B$ have the same dimension. If $\phi$ is bijective, then we can choose bases for $A$ and $B$ so that $\phi$ is the identity. Once we have done that, we still have some freedom to change bases further. Assuming that $k$ is algebraically closed, we can use this freedom to put $\psi$ into Jordan normal form. In other words, we can choose bases such that $(\phi,\psi)$ are direct sums of maps like

$\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right), \left( \begin{smallmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 1 \\ 0 & 0 & \alpha \end{smallmatrix} \right) \right)$.

(Here several different values $\alpha$ may occur in the various summands, and of course, the matrices can be sizes other than $3 \times 3$.) If we don’t assume that $\phi$ is bijective (and if we want to allow $A$ and $B$ to have different dimensions) we get a few more cases. But the basic picture is not much worse: in addition to the summands above, we also need to consider the maps

$\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right) \right)$

(for various sizes $n \times (n+1)$, not just $2 \times 3$) and the transpose of these. These three possibilities, and their direct sums, describe all pairs $(\phi, \psi)$ up to isomorphism.

Now, consider the case of three maps. As the dimensions of $A$ and $B$ grow, so does the number of parameters necessary to describe the possible cases. Moreover, almost all cases cannot be decomposed as direct sums. More precisely, as long as $\dim A/\dim B$ is between $(3-\sqrt{5})/2$ and $(3+\sqrt{5})/2$, the maps which can be expressed as direct sums of simpler maps have measure zero in $\mathrm{Hom}(A,B)^3$, the space of triples. (Where did that number $(3+\sqrt{5})/2$ come from? Stay tuned!) In the opinion of experts, there will probably never be any good classification of all triples of maps.

The subject of quivers was invented to systematize this sort of analysis. It’s become a very large subject, so I can’t hope to summarize it in one blog post. But I think it is fair to say that anyone who wants to think about quivers needs to start by learning the connection to root systems. So that’s what I’ll discuss here.

A quiver is simply another name for a directed graph. This means that we have a finite set of dots, called $Q_0$, and a finite bunch of arrows $Q_1$, with each arrow pointing from one element of $Q_0$ to another. (Some sources will let an arrow point from a dot to itself, but I’m going to forbid that.) The quiver as a whole is denoted $Q$. A representation of $Q$ consists of (1) for each dot $x \in Q_0$, a finite dimensional vector space $V_x$ and (2) for each arrow $x \to y$, a map $f_{xy} : V_x \to V_y$. Note that there are no relations imposed between these maps.

The dimension vector of the representation is the vector in $\mathbb{R}^{Q_0}$ whose $x$-th component is $\dim V_x$. It is common in quiver theory to fix a dimension vector and try to study all representations of that dimension. If we have two representations $V$ and $W$ of the same quiver, their direct sum, $V \oplus W$, is defined in the obvious way: $(V \oplus W)_x = V_x \oplus W_x$ and similarly for the maps. A representation is called indecomposable if it cannot be written as a direct sum of smaller representations. A vector is called a positive root if it is the dimension vector of an indecomposable representation.
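
To make these definitions concrete, here is a minimal sketch in Python (all the names are mine, not any standard API) of a quiver, a representation, and the direct sum:

```python
import numpy as np

# arrows[i] = (x, y) encodes an arrow x -> y (repeats give parallel arrows).
# A representation assigns to arrows[i] the matrix of f_xy, stored with
# shape (dim V_y, dim V_x) so it acts on column vectors.

class Quiver:
    def __init__(self, vertices, arrows):
        self.vertices = list(vertices)
        self.arrows = list(arrows)

class Rep:
    def __init__(self, quiver, dims, maps):
        self.quiver = quiver
        self.dims = dict(dims)    # vertex -> dim V_x
        self.maps = list(maps)    # maps[i] is the matrix for arrows[i]

    def dimension_vector(self):
        return tuple(self.dims[x] for x in self.quiver.vertices)

def direct_sum(V, W):
    """(V ⊕ W)_x = V_x ⊕ W_x; each arrow gets the block-diagonal map."""
    dims = {x: V.dims[x] + W.dims[x] for x in V.quiver.vertices}
    maps = []
    for f, g in zip(V.maps, W.maps):
        h = np.zeros((f.shape[0] + g.shape[0], f.shape[1] + g.shape[1]))
        h[:f.shape[0], :f.shape[1]] = f
        h[f.shape[0]:, f.shape[1]:] = g
        maps.append(h)
    return Rep(V.quiver, dims, maps)
```

For example, the two-arrow quiver on vertices $\{1,2\}$ is `Quiver([1, 2], [(1, 2), (1, 2)])`, and direct-summing two one-dimensional representations produces the block-diagonal pairs of matrices seen above.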

Our three examples above corresponded to the case where $Q_0$ had two elements and there were one, two or three arrows (respectively) connecting them, all in the same direction. The positive roots were $(0,1)$, $(1,1)$ and $(1,0)$ in the first case and $(n-1, n)$, $(n,n)$ and $(n,n-1)$ (for all positive $n$) in the second.

We can now state a very surprising result, due to Kac:

Theorem 1: The set of positive roots is unchanged by reversing edges of $Q$.

Kac went much further than that, and gave an explicit description of the set of roots. In the rest of this post, I will explain that result. This post draws heavily on Kac’s paper Infinite root systems, representations of graphs and invariant theory.

Let’s fix a dimension vector $d \in \mathbb{Z}_{\geq 0}^{Q_0}$ and try to figure out how many nonisomorphic quiver representations of dimension $d$ we expect. Let $\mathrm{Rep}(Q,d)$ be $\prod_{(x \to y) \in Q_1} \mathrm{Mat}(d_x, d_y)$ where $\mathrm{Mat}(a,b)$ is the space of $a \times b$ matrices. So every point in $\mathrm{Rep}(Q,d)$ gives a representation of $Q$ of dimension $d$. (Just view a matrix in $\mathrm{Mat}(d_x, d_y)$ as a map from $k^{d_x}$ to $k^{d_y}$.) Two points in $\mathrm{Rep}(Q,d)$ give isomorphic representations if we can obtain one from the other by changing bases.

It will be worthwhile to write out what “changing bases” means in a very formal way. Let $\mathrm{GL}(Q,d) = \prod_{x \in Q_0} \mathrm{GL}(d_x)$. Then $\mathrm{GL}(Q,d)$ acts on $\mathrm{Rep}(Q,d)$: if $(g_x)_{x \in Q_0}$ is an element of $\mathrm{GL}(Q,d)$ and $(f_{xy})_{(x \to y) \in Q_1}$ is an element of $\mathrm{Rep}(Q,d)$ then $g \cdot f = ( g_y \circ f_{xy} \circ g_x^{-1} )_{(x \to y) \in Q_1}$. Isomorphism classes of $d$-dimensional representations of $Q$ correspond to orbits of $\mathrm{GL}(Q,d)$ on $\mathrm{Rep}(Q,d)$.

Let’s do a dimension count. $\mathrm{Rep}(Q,d)$ is a vector space of dimension $\sum_{(x \to y) \in Q_1} d_x d_y$. The group $\mathrm{GL}(Q,d)$ has dimension $\sum_{x \in Q_0} d_x^2$. We set $T(d) = \sum_{x \in Q_0} d_x^2 - \sum_{x \to y} d_x d_y$. Notice that $T(d)$ is unchanged by reversing edges of $Q$, consistent with Theorem 1. So, if $T(d)$ is very negative, we expect to have a lot of nonisomorphic representations of dimension $d$, because $\mathrm{Rep}(Q,d)$ will be much larger than $\mathrm{GL}(Q,d)$. Since not that many representations can be expressed as direct sums, when $T(d)$ is negative, we expect $d$ to be a positive root.
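
The count $T(d)$ is easy to experiment with. A small sketch (the function name is mine), checked against the two-vertex quivers from the opening examples:

```python
def tits_form(vertices, arrows, d):
    """T(d) = sum_x d_x^2  minus  sum over arrows x -> y of d_x * d_y.
    Only the underlying graph matters: reversing an arrow (x, y) to
    (y, x) leaves the value unchanged."""
    return (sum(d[x] ** 2 for x in vertices)
            - sum(d[x] * d[y] for (x, y) in arrows))

# The two-vertex quivers with m = 1, 2, 3 parallel arrows 1 -> 2,
# evaluated at the dimension vector (1, 1):
for m in (1, 2, 3):
    print(m, tits_form([1, 2], [(1, 2)] * m, {1: 1, 2: 1}))   # 1, 0, -1
```

(This quadratic form is usually called the Tits form of the quiver; the post just calls it $T$.)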

In the other direction, if $T(d)$ is very positive, we expect representations of dimension $d$ to have large automorphism groups. Now, every point in $\mathrm{Rep}(Q,d)$ has at least a one dimensional stabilizer: just multiply each vector space $V_x$ by the same scalar. Let $V$ be a representation of $Q$ whose stabilizer is larger than this. Then we should “expect” $V$ to be decomposable. I’ll give a more precise statement when our ground field is $\mathbb{C}$; the generalization to fields other than $\mathbb{C}$ requires the vocabulary of algebraic groups, so I won’t get into it. Let $\mathrm{Stab}(V)$ be the stabilizer of $V$ in $\mathrm{GL}(Q,d)$. Then $V$ is decomposable if and only if $\mathrm{Stab}(V)$ contains a copy of $\mathbb{C}^*$ other than the trivial one mentioned above. The proof is simple: if $V = X \oplus Y$, we can get a nontrivial stabilizer by rescaling $X$ and leaving $Y$ alone. Conversely, suppose we have a nontrivial $\mathbb{C}^*$ in the stabilizer, which we’ll write as $\rho: \mathbb{C}^* \to \mathrm{GL}(Q,d)$. Then we have $V = \bigoplus_{i= -\infty}^{\infty} V^i$ where $V^i_x = \{ v \in V_x : \rho(t)(v) = t^i v \}$. So, if $T(d) > 1$, we should expect representations of dimension $d$ to decompose as direct sums. In other words, when $T(d)$ is positive, we should expect $d$ not to be a positive root.

## Kac’s result, special case

The quadratic form $T$ is called positive semi-definite if $T(d) \geq 0$ for all $d \in \mathbb{R}^{Q_0}$. It is called positive definite if it is positive semi-definite and $T(d)=0$ only when $d=0$. Finally, $T$ is called hyperbolic or better if, for every $x \in Q_0$, the restriction of $T$ to the hyperplane $\{ d: d_x=0 \}$ is positive semi-definite. “Hyperbolic or better” is my own terminology. The definition of hyperbolic, which is a more standard term, is “hyperbolic or better, but not positive semi-definite”.

The cases where $T$ is positive definite correspond to the simply-laced Dynkin diagrams: that is $A_n$, $D_n$ and $E_6$, $E_7$, $E_8$. The correspondence is simply to draw the graph $Q$ and ignore the directions of the arrows. The positive semi-definite and hyperbolic cases have also been completely classified. (I’m too lazy to draw all those Dynkin diagrams right now. The positive definite types are the diagrams $A_n$, $D_n$, $E_6$, $E_7$ and $E_8$; the positive semi-definite quivers, besides the positive definite ones, are the extended types $\tilde{A}_n$, $\tilde{D}_n$, $\tilde{E}_6$, $\tilde{E}_7$ and $\tilde{E}_8$.)

Theorem (Kac): Suppose that $T$ is hyperbolic or better, and let $d$ be a dimension vector. Then:

1. If $T(d)>1$, then every representation of $Q$ of dimension $d$ is decomposable.
2. If $T(d)=1$, then there is (up to isomorphism) a unique indecomposable representation of $Q$ of dimension $d$.
3. If $T(d)<1$ and $k$ is infinite, then there are infinitely many nonisomorphic indecomposable representations of dimension $d$.

In short, with the hypotheses on $T$ in place, $d \in (\mathbb{Z}_{\geq 0})^{Q_0}$ is a positive root if and only if $T(d) \leq 1$.

Let’s look at our starting examples: if $Q$ is the quiver corresponding to a single map from one vector space to another, then $T((d,e)) = d^2 -de + e^2$. This is positive definite. The only times that it is as small as $1$ are $(1,0)$, $(1,1)$ and $(0,1)$. Indeed, these correspond to the three summands that turned up in that case. If $Q$ corresponds to having two maps, then $T((d,e))=d^2-2de+e^2 = (d-e)^2$. This is evidently positive semi-definite. We have $T((d,e))=1$ when $(d,e)$ is of the form $(n,n+1)$ or $(n+1,n)$, and we found a unique indecomposable representation in that case. We have $T((d,e))=0$ exactly when $(d,e)=(n,n)$ for some $n$; in that case our indecomposable representation depended on a parameter $\alpha$. Finally, when there were three maps, $T((d,e))=d^2 - 3de + e^2$. We can now understand why it was important to have $(3 - \sqrt{5}) /2 < d/e < (3+\sqrt{5})/2$: that’s exactly when $T((d,e))<0$. (Exercise: For which integers do we have $d^2 - 3de + e^2 =1$?)
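
One way to check these definiteness claims numerically (a sketch; the Gram-matrix setup is mine) is to look at the eigenvalues of the symmetrized matrix of $T$:

```python
import numpy as np

# Gram matrix G_m of T((d,e)) = d^2 - m*d*e + e^2, so that T(v) = v^T G_m v,
# for the two-vertex quiver with m parallel arrows.
def gram(m):
    return np.array([[1.0, -m / 2.0], [-m / 2.0, 1.0]])

for m in (1, 2, 3):
    print(m, np.linalg.eigvalsh(gram(m)))   # eigenvalues 1 - m/2 and 1 + m/2

# m = 1: both eigenvalues positive (positive definite); m = 2: one zero
# eigenvalue (semi-definite); m = 3: one negative eigenvalue (indefinite),
# and T vanishes exactly on the rays d/e = (3 ± sqrt 5)/2:
for r in ((3 + 5 ** 0.5) / 2, (3 - 5 ** 0.5) / 2):
    assert abs(r ** 2 - 3 * r + 1) < 1e-12
```

The eigenvalues are $1 \mp m/2$, which is why $m=2$ is the borderline semi-definite case.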

Our last example will demonstrate that the heuristics in the previous section were too simple. Consider the quiver with three vertices, $x$, $y$ and $z$ and three maps $x \to y$, $y \to z$ and $x \to z$. (This is NOT cyclically symmetric.) So $T((d,e,f))=d^2 + e^2 + f^2 - de -df-ef$, which is $(1/2)\left( (d-e)^2 + (d-f)^2 + (e-f)^2 \right)$ and hence positive semi-definite. We’ll consider the dimension vector $(1,2,1)$, for which $T=1$. Kac’s theorem tells us that there should be a single (up to isomorphism) indecomposable representation of this dimension and, indeed, there is: take $f_{xy} = \left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right)$, $f_{yz} = \left( \begin{smallmatrix} 0 & 1 \end{smallmatrix} \right)$ and $f_{xz} = \left( \begin{smallmatrix} 1 \end{smallmatrix} \right)$. In this example, $\mathrm{Rep}(Q,d)$ has dimension $5$ and $\mathrm{GL}(Q,d)$ has dimension $6$. The heuristics of the previous section suggest that the indecomposable representation above should lie in a dense $\mathrm{GL}(Q,d)$ orbit, and have only the trivial stabilizer. In fact, the orbit in question is four-dimensional, and the stabilizer is $k^{\times} \ltimes k^{+}$. (Exercise!) But this group contains no nontrivial copy of $k^{\times}$, so the representation is indecomposable as desired.
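
We can verify the claimed stabilizer dimension by linear algebra: the Lie algebra of $\mathrm{Stab}(V)$ consists of triples $(a, B, c)$ of endomorphisms of $V_x$, $V_y$, $V_z$ intertwining the three maps. A sketch (conventions and names are mine):

```python
import numpy as np

f_xy = np.array([[1.0], [0.0]])   # V_x -> V_y
f_yz = np.array([[0.0, 1.0]])     # V_y -> V_z
f_xz = np.array([[1.0]])          # V_x -> V_z

def residual(u):
    # u = (a, B11, B12, B21, B22, c): an element of gl(1) x gl(2) x gl(1)
    a = np.array([[u[0]]])
    B = u[1:5].reshape(2, 2)
    c = np.array([[u[5]]])
    eqs = [B @ f_xy - f_xy @ a,    # intertwines f_xy
           c @ f_yz - f_yz @ B,    # intertwines f_yz
           c @ f_xz - f_xz @ a]    # intertwines f_xz
    return np.concatenate([e.ravel() for e in eqs])

# Matrix of this linear system; its null space is the stabilizer's Lie algebra.
M = np.column_stack([residual(e) for e in np.eye(6)])
nullity = 6 - np.linalg.matrix_rank(M)
print(nullity)   # 2, the dimension of k^x ⋉ k^+
```

So the stabilizer is two-dimensional, and the orbit has dimension $6 - 2 = 4$, as claimed.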

## Reflection Functors

The reflection functors, discovered by Bernstein, Gelfand and Ponomarev, are an important way to make new quiver representations from old. Let $x$ be a source of $Q$, meaning that $x$ is a vertex of $Q$ and every edge bordering $x$ is directed away from $x$. Let $Q'$ be the quiver obtained by reversing all edges of $Q$ which border $x$ (and leaving the other edges alone). Let $V$ be a representation of $Q$. Define $s_x^{+}(V)$ to be the following representation of $Q'$: for every vertex $y$ other than $x$, we have $s_x^{+}(V)_y = V_y$. We define $s_x^{+}(V)_x$ to be the cokernel of $V_x \to \bigoplus_{x \to y} V_y$, where the map is the direct sum of all the individual maps $V_x \to V_y$. For each edge $x \to y$ of $Q$ (with corresponding edge $y \to x$ in $Q'$) we define the map $s_x^{+}(V)_y \to s_x^{+}(V)_{x}$ as the composition $V_y \hookrightarrow \bigoplus_{x \to y'} V_{y'} \to s_x^{+}(V)_x$ of the inclusion of the summand $V_y$ with the projection onto the cokernel. (For edges $y \to z$ of $Q'$ which do not border $x$, we just use the same map in $Q'$ as we had in $Q$.)
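
In coordinates, computing $s_x^{+}(V)_x$ is just a cokernel computation. A sketch (assuming, as is my convention here, that each map $V_x \to V_y$ is stored as a matrix of shape $(\dim V_y, \dim V_x)$ acting on column vectors):

```python
import numpy as np

def coker_dim_at_source(maps_out):
    """Dimension of the cokernel of V_x -> ⊕_{x -> y} V_y, where maps_out
    lists the matrices of the maps leaving the source x."""
    stacked = np.vstack(maps_out)   # matrix of the combined map into ⊕ V_y
    return stacked.shape[0] - np.linalg.matrix_rank(stacked)

# Two-arrow quiver 1 ⇉ 2, representation with V_1 = V_2 = k and maps 1, 0:
# the combined map k -> k^2 is injective, so the cokernel has dimension 1.
print(coker_dim_at_source([np.array([[1.0]]), np.array([[0.0]])]))   # 1
```

Note that when the combined map is injective (the condition defining $N_x$ below), the cokernel dimension is $\sum_{x \to y} \dim V_y - \dim V_x$, which is the formula used later in the post.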

At this point, an excellent exercise for the reader is to take the quiver with two dots, $1$ and $2$, and two edges pointing from $1$ to $2$, and the representation $0 \to k$, and apply $s^{+}_1$ and $s^{+}_2$ alternately a bunch of times. A harder exercise is to do the same with three arrows.

There is also a functor $s^{-}_x$, which turns $Q'$ representations into $Q$ representations. In this case, $s_x^{-}(V)_x$ is the kernel of $\bigoplus_{y \to x} V_y \to V_x$.

Let $S_x$ and $S'_x$ denote the representations of $Q$ and $Q'$ where $V_x$ is one-dimensional and $V_y$ is zero for $y \neq x$. Note that $s^{+}_x$ and $s^{-}_x$ annihilate $S_x$ and $S'_x$ respectively. Write $N_x$ for the category of representations of $Q$ which do not contain $S_x$ as a summand. (In other words, where the map $V_x \to \bigoplus_{x \to y} V_y$ is injective.) Define $N'_x$ similarly.

Key Fact: The functors $s^{+}_x$ and $s^{-}_x$ provide an equivalence of categories between $N_x$ and $N'_x$. This equivalence commutes with direct sum.

That’s some sophisticated language, but the consequences are very down to earth. These functors give a bijection between representations in $N_x$ and $N'_x$, and between indecomposable representations in the same. So, if we have an indecomposable representation in $N_x$, we can use these functors to get another one in $N'_x$. If you did the exercise concerning the quiver with two vertices and two edges above, you saw how repeatedly using this trick can give lots of representations.

It is important to understand how $s_x^{+}$ affects the dimension vector. Again, we’ll have to restrict our attention to the subcategory $N_x$ to get a nice statement. Being in $N_x$ exactly says that the map $V_x \to \bigoplus_{x \to y} V_y$ is injective, so the dimension of $s^{+}_x(V)_x$ is $\sum_{x \to y} \dim V_y - \dim V_x$. There is a nicer way to write this formula. Define the symmetric bilinear form $A(u,v) := T(u+v) - T(u) - T(v)$. (So $T(v) = (1/2) A(v,v)$. Some people like to omit the $2$ here, but then it turns up somewhere else.) Let $e_x$ be the basis vector of $\mathbb{R}^{Q_0}$ corresponding to $x$. Then, if $d$ and $d'$ are the dimension vectors of $V$ and $s^+_x(V)$, we have

$d' = d - A(d, e_x) e_x$.

This is exactly the formula for the reflection in the hyperplane orthogonal to $e_x$, if we think of $A$ as the ordinary dot product. (Which is why $s_x^+$ and $s_x^-$ are called “reflection functors”.)

The same formula relates the dimension vectors of $V'$ and $s^-_x(V')$.

For any $x \in Q_0$ and $d \in \mathbb{R}^{Q_0}$, let’s write $s_x(d) := d - A(d, e_x) e_x$. So we have just shown:
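
Here is a sketch (function names are mine) of $s_x$ acting on dimension vectors, run on the two-arrow quiver from the earlier exercise; note that each reflection preserves $T$:

```python
def tits_form(vertices, arrows, d):
    return (sum(d[x] ** 2 for x in vertices)
            - sum(d[x] * d[y] for (x, y) in arrows))

def bilinear_A(vertices, arrows, u, v):
    """A(u, v) = T(u + v) - T(u) - T(v)."""
    s = {x: u[x] + v[x] for x in vertices}
    return (tits_form(vertices, arrows, s)
            - tits_form(vertices, arrows, u)
            - tits_form(vertices, arrows, v))

def reflect(vertices, arrows, d, x):
    """s_x(d) = d - A(d, e_x) e_x."""
    e = {y: (1 if y == x else 0) for y in vertices}
    out = dict(d)
    out[x] -= bilinear_A(vertices, arrows, d, e)
    return out

# Two-arrow quiver 1 ⇉ 2: alternately reflecting the dimension vector
# (0, 1) at vertices 1 and 2 walks through roots (2,1), (2,3), (4,3), ...
V, E = [1, 2], [(1, 2), (1, 2)]
d = {1: 0, 2: 1}
for x in (1, 2, 1, 2):
    d = reflect(V, E, d, x)
    print((d[1], d[2]), "T =", tits_form(V, E, d))   # T stays equal to 1
```

(This only tracks dimension vectors; the functors $s_x^{\pm}$ themselves act on the representations.)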

Proposition Let $x$ be a source of $Q$. If $d$ is a positive root of $Q$, not equal to $e_x$, then $s_x(d)$ is a positive root of $Q'$.

But now, recall Theorem 1. By reversing arrows, we can make any vertex a source. So we deduce:

Corollary Let $x$ be any vertex of $Q$. If $d$ is a positive root of $Q$, not equal to $e_x$, then $s_x(d)$ is a positive root of $Q$.

What happens if $d=e_x$? Well, $s_x(e_x) = -e_x$, so we obviously can’t interpret it as a dimension vector. (Actually, in the derived category, we can. But that’s way beyond the scope of this post.)
But, if we formally allow negatives of dimension vectors, we get:

Corollary Let $\Phi$ be the set of points $d$ in $\mathbb{R}^{Q_0}$ so that either $d$ or $-d$ is a positive root. Then $\Phi$ is taken to itself by $s_x$ for every vertex $x$ of $Q$.

The group generated by reflections $s_x$, $x \in Q_0$, is called the Coxeter group of $Q$. The elements of $\Phi$ are called roots. When our quiver is positive definite, these are the root systems which John Baez is talking about in his excellent course.

## A pause

Stop for a moment. There are a lot of ideas here. I strongly recommend that you pause and work them out for some of your favorite quivers. I recommend $1 \to 2 \to 3$ for a starter. For more complicated examples, take one central vertex with either three or four other vertices, each of which has a single arrow pointing towards that central vertex. (These are types $A_3$, $D_4$ and $\tilde{D}_4$.) Start with a simple representation and start applying reflections until you figure out what all the roots look like and what the corresponding representations are.

## Kac’s Result

Finally, I want to explain Kac’s description of the set of roots. He gives two descriptions. One of them is that $\Phi$ is precisely the set of roots of the corresponding Kac-Moody Lie Algebra. Explaining that description would be a whole separate post.

However, he also gives a combinatorial, recursive description which can fit here, and is quite suitable for hand computation.

Let $\Phi'$ be $\Phi \cup \{ 0 \}$. Then $\Phi'$ is the subset of $\mathbb{Z}^{Q_0}$ which is built by the following procedure. First, $\Phi'$ contains $e_x$ for each $x \in Q_0$. Secondly, let $v \in \Phi'$ and $x \in Q_0$, and set $\pm r = A(e_x, v)$, where we choose the sign $\pm$ so that $r$ is nonnegative. Then $v \mp e_x$, $v \mp 2 e_x$, …, $v \mp r e_x$ are also in $\Phi'$.

Let’s run a tiny example: the quiver $1 \to 2$. We know that $\Phi'$ must contain $(1,0)$ and $(0,1)$. Taking $v=(1,0)$ and $x=2$, we have $A(v, e_x) = - 1$. So this tells us that $(1,0) + (0,1) = (1,1)$ is in $\Phi'$. Taking $v=(1,0)$ again, and $x=1$, we get that $(0,0)$ and $(-1,0)$ are in $\Phi'$. Continuing this way, we get $(1,0)$, $(1,1)$, $(0,-1)$, $(0,0)$, $(0,1)$, $(-1,-1)$ and $(-1,0)$. And then the process stops: taking any $v$ from this set, and $x=1$ or $2$, just gives us more vectors in this set. I encourage the reader to try some larger examples. When your quiver is positive definite, you should see the process terminate. When it is positive semi-definite, but not positive definite, the process will not terminate, but it will quickly settle into a recognizable pattern. When you get outside the positive semi-definite case, the process will explode very quickly.
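
The recursive procedure is easy to automate. A sketch (names are mine; the Tits-form helpers are repeated so the snippet is self-contained) that closes $\{e_x\}$ under Kac's rule and reproduces the seven vectors above for the quiver $1 \to 2$:

```python
def tits_form(vertices, arrows, d):
    return (sum(d[x] ** 2 for x in vertices)
            - sum(d[x] * d[y] for (x, y) in arrows))

def bilinear_A(vertices, arrows, u, v):
    s = {x: u[x] + v[x] for x in vertices}
    return (tits_form(vertices, arrows, s)
            - tits_form(vertices, arrows, u)
            - tits_form(vertices, arrows, v))

def phi_prime(vertices, arrows, max_rounds=50):
    """Close {e_x} under Kac's rule: from v with A(e_x, v) = ±r, r >= 0,
    add v ∓ e_x, ..., v ∓ r e_x.  Terminates when T is positive definite."""
    phi = {tuple(1 if y == x else 0 for y in vertices) for x in vertices}
    for _ in range(max_rounds):
        new = set(phi)
        for v in phi:
            dv = dict(zip(vertices, v))
            for x in vertices:
                e = {y: (1 if y == x else 0) for y in vertices}
                a = bilinear_A(vertices, arrows, e, dv)
                sign, r = (1, a) if a >= 0 else (-1, -a)
                for k in range(1, r + 1):
                    w = dict(dv)
                    w[x] -= sign * k
                    new.add(tuple(w[y] for y in vertices))
        if new == phi:
            return phi
        phi = new
    raise RuntimeError("did not stabilize; T is probably not positive definite")

roots = phi_prime([1, 2], [(1, 2)])
print(sorted(roots))
# [(-1, -1), (-1, 0), (0, -1), (0, 0), (0, 1), (1, 0), (1, 1)]
```

On larger positive definite examples the closure still stabilizes after a few rounds, and you can check on the output the sign observation made in the next paragraph.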

One interesting thing you will see here is that, for every $(d_1, d_2, \ldots, d_n)$ in $\Phi'$, either the $d_i$ are all nonnegative or all nonpositive. From the description of $\Phi'$ as the union of the positive roots and the negatives of the positive roots, this is obvious. I think that it is very hard to see from the combinatorial description.

## What next?

As I said, the discussion of roots has to be at the beginning of any discussion of quivers. After that, there are a lot of different directions to go in. What do people want to read about?

## 12 thoughts on “Request: Quivers and Roots”

1. Allen Knutson says:

> Some sources will let an arrow point from a dot to itself, but I’m going to forbid that.

1. Prude.
2. “…, and some targets will let…”

> By reversing arrows, we can make any vertex a source.

I presume that’s why you forbade self-arrows. But you’re also screwed by any oriented cycle, no?

1. Schur roots.
2. Bobinski-Zwara’s proof that quiver cycles for D_n are Cohen-Macaulay.

2. Is there a good category-theoretic description of this picture? It looks like representations are diagrams in Vect, but the lack of relations means the source category is somehow freely generated by the quiver arrows, like a category of finite directed paths with concatenation as composition. Is this viewpoint useful?

3. Scott, as I understand it a quiver representation is a representation of the path category of the graph. That is, it’s a functor from the path category to the category of vector spaces. Useful? I don’t know, but it at least puts quiver representations on a footing with, say, tangle representations or fundamental groupoid representations.

4. Just to amplify the joint meaning of the above two comments:

A quiver representation is a representation of the free category over the quiver=finite graph.

Here the functor which sends graphs to the free categories over them is the left adjoint to the forgetful functor from categories to graphs, which forgets the composition and unit operations in the category.

And generalizing the way group representations correspond to modules for the corresponding group algebras, category representations correspond to modules of the corresponding category algebra.

And the category algebra of a free category is the quiver algebra of the underlying graph.

5. Scott: John’s explanation of the correct category theoretic framework is precisely correct. Notice that this is not just a category but an abelian category. The category theoretic language is useful in several ways:

(1) It means that all the language of homological algebra is available. In particular, here are two important results: (a) The quiver category has homological dimension 1. This means that, for any two quiver representations $V$ and $W$, we have $\mathrm{Ext}^i(V,W)=0$ for $i \geq 2$. (b) We explicitly know the projective and injective objects in the quiver category. For acyclic quivers, the result of applying reflection at each vertex once (in a certain manner) gives a functor called the Coxeter functor, which takes any semi-simple element to its projective cover.

(2) Categories that show up in other contexts are often isomorphic to the category of representations of some quiver. Here is one example: Let $Q$ be a quiver, and I might need to impose that it be acyclic, I’m not sure. Let $V$ be a nonzero representation of $Q$. We define the category $V^{\perp}$ to be the full subcategory of the category of quiver representations, whose objects are those representations $W$ such that $\mathrm{Hom}(V,W) = \mathrm{Ext}(V,W) = 0$. Then there is a (unique) quiver $Q'$ such that $V^{\perp}$ is equivalent to the category of representations of $Q'$. By the way, I haven’t seen anyone give a combinatorial rule for getting $Q'$ from $V$ and $Q$; I’d be interested in seeing one.

Another example, I think, is that the category of constructible sheaves on various algebraic objects often turns out to be equivalent to a category of representations of some quiver. Ben, I believe, has discovered that this occurs in the case of a hypertoric variety; perhaps he’ll write up an example for us. (hint, hint)

(3, and 2 continued) In particular, people now seem to want to work in the derived category of the category of quiver representations, and in a certain quotient category of that called the cluster category. When they do this, they often get the category of constructible sheaves on a Calabi-Yau threefold. I don’t understand this at all, but it seems to have the physicists very excited.

6. Allen:

> By reversing arrows, we can make any vertex a source.

> I presume that’s why you forbade self-arrows. But you’re also screwed by any oriented cycle, no?

No, Kac’s result is a lot stronger than that. I can reverse arrows individually, not just by reflection functors.

I don’t know the Bobinski-Zwara result. Schur roots, though, I can probably say something about.

7. Scott Carnahan says:

Thanks for the explanations. With all the Exts floating around, it sounds like quiver representations in dgVect might be interesting.

It’s not too surprising that you can encode cycle-free quivers using sheaves on stratified spaces, since the arrows in the quiver can be made to correspond to specialization maps, but I’m a bit surprised that you get categories of constructible sheaves, rather than more restricted objects like combinatorial sheaves (in the sense of Getzler-Kapranov – they are constant on strata, rather than locally constant). I suppose if all of your strata are simply connected, the notions are equivalent. Alternatively, my encoding scheme could be all wrong.

8. Does nobody find it odd that usually vector spaces are spaces of “something” (say functions) but here, we are doing operations on the vector spaces themselves, treating them like objects?

9. Not really, john. At that rate, shouldn’t we find it odd that numbers are usually numbers of “something” (say apples), but in arithmetic we are doing operations on the numbers themselves, treating them like objects?

10. Is it possible for reflection functors to take representations with the same dimension vector to representations with different dimension vectors? I guess I am asking if the reflection functor is well-defined as a map on the space of dimension vectors.

11. David Speyer says:

Yes, but only in a very limited way. Suppose that we have a quiver representation M and we want to reflect at a source v. We can write M as S ⊕ N where S is a representation concentrated solely on the vertex v, and N is a representation with the following property: the intersection of the kernels of the maps v -> w, over all arrows v -> w, is zero.

Canonically, we have a short exact sequence

0 -> S -> M -> N -> 0,

which is noncanonically split. (Define S as the intersection of the kernels above.)

Now, the reflection functors preserve direct sums, so the reflection of M is the direct sum of the reflections of S and N. The reflection of S is zero, and the dimension vector of the reflection of N depends only on the dimension vector of N. So, if there is no S-summand, the dimension vector of the reflected representation depends only on that of the original representation.

In particular, the reflection functors preserve the property of having no nontrivial direct sum decomposition so, when acting on such indecomposable representations, the dimension vector of the reflection is determined by the dimension vector of the original.