So, I felt I should try to explain a bit about this hypertoric stuff. I think the first order of business should be explaining what Gale duality is. This is all “just” linear algebra, in principle very simple, though not exactly the usual perspective on these things. To slightly paraphrase some notes from Daniel Kleitman’s website:
The act of writing down explicit dual equations of a given hyperplane arrangement is not complicated but it is quite unnatural to human beings. You can expect to do it hesitantly and incorrectly the first three times you try to do it.
So, first of all, consider a vector arrangement $a_1, \ldots, a_n \in V$, where $V$ is your favorite vector space over your favorite coefficient field $k$. This is just an arbitrary list of non-zero vectors (with repetition allowed), though, for simplicity, I’ll assume it spans all of $V$. You could also think of this as a hyperplane arrangement on $V^*$, by looking at the zero set of each $a_i$ thought of as a function on $V^*$.
Now, let $V^\vee \subset k^n$ be the vector space of solutions to the equation $x_1a_1 + \cdots + x_na_n = 0$, where $x = (x_1, \ldots, x_n) \in k^n$. This is not just a vector space, but one equipped with a set of distinguished functions $b_1, \ldots, b_n$, where $b_i$ is the restriction to $V^\vee$ of the $i$th coordinate function on $k^n$. That is, $(b_1, \ldots, b_n)$ is a new vector arrangement in $(V^\vee)^*$, which we call the Gale dual to $(a_1, \ldots, a_n)$.
To reiterate: given a hyperplane arrangement with fixed normal vectors to its hyperplanes, the Gale dual is the space of vanishing linear combinations of these normal vectors, with the hyperplane arrangement consisting of the subspaces where the th coefficient vanishes.
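To make the definition concrete, here is a quick numerical sketch (my own illustration, not from the discussion above; it assumes numpy is available): the vectors $a_i$ become the columns of a matrix, and $V^\vee$ is its kernel.

```python
import numpy as np

# Hypothetical example: n = 4 vectors spanning V = R^2.
# Columns of A are the vectors a_1, ..., a_4.
A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])

# V^vee = ker(A) = the vanishing linear combinations of the columns; the
# SVD's right singular vectors beyond the rank span the null space.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
kernel_basis = Vt[rank:]          # rows form a basis of V^vee (here 2 x 4)

# The Gale dual arrangement b_1, ..., b_4: b_i is the i-th coordinate
# function of R^4 restricted to V^vee, i.e. the i-th column of kernel_basis.
B = kernel_basis.T                # row i gives b_i in coordinates on V^vee

# sanity check: every kernel element really is a vanishing combination
assert np.allclose(A @ kernel_basis.T, 0.0)
```

Note that $n - \dim V = 2$ here, so the Gale dual consists of four vectors in a 2-dimensional space, as it should.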
Well, that was simple. In fact, why am I making such a fuss about this stuff? (Because it’s awesome? Do we need more of a reason?)
Well, to try to convince you something interesting is going on here, let’s think a bit about how to translate statements back and forth between Gale dual arrangements. Note that we have a bijection $a_i \leftrightarrow b_i$, since both arrangements are indexed by the numbers 1 up to $n$. Let $S \subseteq \{1, \ldots, n\}$ be an arbitrary subset. Then we define $S^\vee$ to be the complement of $S$.
Theorem. For all $S$, the vectors $\{a_i\}_{i \in S}$ are linearly independent if and only if $\{b_j\}_{j \in S^\vee}$ spans $(V^\vee)^*$.
In particular, $\{a_i\}_{i \in S}$ is a basis of $V$ if and only if $\{b_j\}_{j \in S^\vee}$ is a basis of $(V^\vee)^*$.
Proof. Linear independence of $\{a_i\}_{i \in S}$ implies that for any non-zero element $x$ of $V^\vee$, some coefficient $x_j$ with $j \notin S$ must be non-zero (otherwise $x$ would be a dependency among the $a_i$ with $i \in S$); that is, there is a $b_j$ with $j \in S^\vee$ which doesn’t vanish at $x$. Since no non-zero vector of $V^\vee$ is killed by all of them, $\{b_j\}_{j \in S^\vee}$ spans $(V^\vee)^*$. The opposite direction is a similar exercise. QED!
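If you want to see the theorem in action, here is a brute-force numerical check (again my own illustration, on a made-up arrangement of four vectors in $\mathbb{R}^2$): for every subset $S$, independence on one side matches spanning on the other.

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])     # a_1..a_4 as columns, V = R^2
_, s, Vt = np.linalg.svd(A)
B = Vt[int(np.sum(s > 1e-10)):].T         # row i gives b_i on V^vee (= R^2)

n, d = 4, 2                               # n vectors; dim V^vee = n - dim V = 2
for size in range(n + 1):
    for S in combinations(range(n), size):
        comp = [j for j in range(n) if j not in S]   # the set S^vee
        indep = (np.linalg.matrix_rank(A[:, list(S)]) == len(S)) if S else True
        spans = (np.linalg.matrix_rank(B[comp, :]) == d) if comp else (d == 0)
        assert indep == spans             # the theorem, one instance at a time
```

This checks all $2^4$ subsets; in particular the bases of the arrangement (the 2-element independent sets) correspond exactly to complements that are bases of the dual.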
Note: to those of you who like matroids, this tells you how to define Gale duality of matroids. Since not all matroids are representable, this is actually a slightly richer definition.
Now, what I’ve told you about thus far is central hyperplane arrangements, ones where all the hyperplanes are linear (pass through the origin). But usually we prefer to think about more general hyperplane arrangements, where the hyperplanes are allowed to be affine subspaces.
To get these, rather than looking at the zero sets of the $a_i$, choose a vector $t = (t_1, \ldots, t_n) \in k^n$ and look at the level sets $a_i(v) = t_i$; then you’ll get an affine hyperplane arrangement.
However, a bunch of these are just translates of each other. If one picks a vector of the form $t_i = a_i(v_0)$ for some $v_0 \in V^*$, you’ll just have picked a different origin. Thus, we should mod out by this subspace (the image of $V^*$ in $k^n$ under $v \mapsto (a_1(v), \ldots, a_n(v))$), and think about our hyperplane arrangements up to translation, determined by elements of $k^n/V^*$.
Now, if you think for a moment, you see that you can identify $k^n/V^* \cong (V^\vee)^*$, so applying Gale duality to an arrangement of affine hyperplanes gives us a central hyperplane arrangement with an extra function on it: the linear functional $t(x) = \sum_i t_ix_i$ on $V^\vee$, which is well-defined up to translation since shifting $t$ by the image of $V^*$ doesn’t change it on $V^\vee$.
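Here is a small numerical sanity check (my own, continuing the same made-up example) that the functional $\sum_i t_ix_i$ on $V^\vee$ really only depends on the class of $t$ modulo the image of $V^*$:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])   # a_1..a_4 as columns, V = R^2

# a basis of V^vee = ker(x |-> sum_i x_i a_i)
_, s, Vt = np.linalg.svd(A)
kernel = Vt[int(np.sum(s > 1e-10)):]    # 2 x 4

t = np.array([3.0, -1.0, 2.0, 0.5])
v = np.array([0.7, -2.0])               # an arbitrary element of V*, in coordinates
t_shifted = t + A.T @ v                 # shift t by (a_1(v), ..., a_4(v))

# both vectors give the same linear functional on V^vee
assert np.allclose(kernel @ t, kernel @ t_shifted)
```

The point is just that for $x \in V^\vee$ we have $\sum_i a_i(v)\,x_i = \left(\sum_i x_ia_i\right)(v) = 0$, which is what the last assertion verifies numerically.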
Now, let’s specialize to $k = \mathbb{R}$. Now we have an order on our scalars, so we can ask questions like:
Pick a sign vector $\epsilon \in \{+1, -1\}^n$. Is the region $\Delta_\epsilon$ defined by the inequalities $\epsilon_i(t_i - a_i(v)) \geq 0$ non-empty?
Naively, these inequalities constrain $v$ to lie on a particular side of each hyperplane.
We define the dual region to $\Delta_\epsilon$ by $\Delta_\epsilon^\vee = \{x \in V^\vee : \epsilon_ix_i \geq 0 \text{ for all } i\}$.
A theorem that’s very important for our work says that this question can be understood in terms of Gale duality. This is a fact linear programmers learn with their mother’s milk, but which doesn’t seem to be particularly well-known amongst mathematicians in general, even though it just uses basic concepts of linear algebra.
Theorem (weak duality of linear programs): $\Delta_\epsilon$ is non-empty if and only if $t$ achieves a minimum on $\Delta_\epsilon^\vee$.
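You can watch this equivalence happen numerically. The sketch below (my own construction, assuming scipy is available and using the sign conventions just stated) tests feasibility of $\Delta_\epsilon$ and checks whether minimizing $t(x) = \sum_i t_ix_i$ over the cone $\Delta^\vee_\epsilon$ is bounded, for a few sign vectors:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])      # a_1, a_2, a_3 as columns, V = R^2
t = np.array([0.0, 0.0, -1.0])       # hyperplanes a_i(v) = t_i

def region_nonempty(eps):
    # feasibility of eps_i * (t_i - a_i(v)) >= 0, i.e. (eps_i a_i)(v) <= eps_i t_i
    E = np.diag(eps)
    res = linprog(c=np.zeros(2), A_ub=E @ A.T, b_ub=E @ t,
                  bounds=[(None, None)] * 2)
    return res.status == 0

def t_has_min_on_dual_cone(eps):
    # minimize sum_i t_i x_i over {x in V^vee : eps_i x_i >= 0};
    # linprog reports status 3 when the problem is unbounded below
    res = linprog(c=t, A_eq=A, b_eq=np.zeros(2),
                  bounds=[((0, None) if e > 0 else (None, 0)) for e in eps])
    return res.status == 0           # solved => the minimum is attained

for eps in [(1, 1, 1), (1, 1, -1), (-1, -1, 1), (-1, -1, -1)]:
    assert region_nonempty(eps) == t_has_min_on_dual_cone(eps)
```

For instance, $\epsilon = (-1, -1, +1)$ asks for $v_1 \geq 0$, $v_2 \geq 0$, $v_1 + v_2 \leq -1$, which is clearly empty, and correspondingly $t$ runs off to $-\infty$ along the dual cone.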
Now, this is a pretty cool theorem (if you ask me), but in order to state the stronger duality theorems of linear programming, we need a slightly different class of objects: a linear program is
- a vector arrangement $a_1, \ldots, a_n \in V$.
- two vectors $t \in \mathbb{R}^n$ and $w \in V$.
A solution to a linear program is an element of the feasible region $\Delta = \{v \in V^* : a_i(v) \leq t_i \text{ for all } i\}$, and we can define a function on $\Delta$ by $v \mapsto w(v)$, which we try to maximize.
We define the dual program by dualizing the vector arrangement, and switching the places of $t$ and $w$. Let $\Delta^\vee = \{x \in \mathbb{R}^n : x_i \geq 0 \text{ and } \sum_i x_ia_i = w\}$ be the feasible region of the dual, on which we minimize the function $t(x) = \sum_i t_ix_i$.
Weak duality above tells us when these regions are non-empty, but in fact there is a much stronger duality theorem, which is rather useful if you actually care about bounding the value of $w$ (which applied mathematicians do, very very much).
Theorem (strong duality): If either a maximum of $w$ on $\Delta$ or a minimum of $t$ on $\Delta^\vee$ exists, then the other does, and they are equal.
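Here is what this looks like on a toy example (my own, assuming scipy): the primal maximizes $w(v)$ over $\Delta$, the dual minimizes $t(x)$ over $\Delta^\vee$, and the two optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # a_1, a_2, a_3 as columns
t = np.array([2.0, 3.0, 4.0])
w = np.array([1.0, 1.0])                 # objective vector in V

# primal: maximize w(v) over {v : a_i(v) <= t_i} (linprog minimizes, so negate)
primal = linprog(c=-w, A_ub=A.T, b_ub=t, bounds=[(None, None)] * 2)

# dual: minimize t(x) over {x >= 0 : sum_i x_i a_i = w}
dual = linprog(c=t, A_eq=A, b_eq=w, bounds=[(0, None)] * 3)

assert primal.status == 0 and dual.status == 0
assert np.isclose(-primal.fun, dual.fun)  # the two optimal values agree
```

Concretely, the primal maximum of $v_1 + v_2$ subject to $v_1 \leq 2$, $v_2 \leq 3$, $v_1 + v_2 \leq 4$ is $4$, and the dual point $x = (0, 0, 1)$ certifies it.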
Exercise: Show that weak duality follows from strong duality.
Thus, exhibiting any point of the feasible region of the dual gives a bound on the original problem. Cool, no?
I don’t have time to discuss it, but one cool application of this is the theorem of Ford, Fulkerson, Elias, Feinstein, and Shannon that the maximal amount of flow through a network is the value of a minimal cut.