Bleg: Pairings into Vector Spaces

Here is a very basic question that has come up in some work I’m doing with Diane Maclagan. There is lots of algebraic geometry in our intended application, but I think that what I really need is a better understanding of the underlying linear algebra. First, let me review some even more basic ideas. Let V and W be two finite dimensional vector spaces over a field k and let \langle \ , \ \rangle : V \times W \to k be a bilinear pairing. For A a subspace of V, define the subspace A^{\perp} to be the space of those vectors b \in W such that \langle a,b \rangle=0 for all a \in A. We can also define B^{\perp} for any subspace B of W.

Then the “Fundamental Theorem of Bilinear Pairings” is the following: for any subspace A of V, we have (A^{\perp})^{\perp}=A+W^{\perp}. In particular, A=(A^{\perp})^{\perp} if and only if A \supseteq W^{\perp}.
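Here is a quick computational sanity check of this statement; the particular pairing matrix M and the helper names below are just for illustration (M is chosen so that W^{\perp} is visibly nonzero):

```python
import sympy as sp

# The scalar pairing <v, w> = v^T M w with V = k^3, W = k^2.
# The last basis vector of V pairs to zero with all of W, so W^perp = span(e3).
M = sp.Matrix([[1, 0],
               [0, 1],
               [0, 0]])

def colspace(vecs, dim):
    """Matrix whose columns span the given list of vectors (zero space if empty)."""
    return sp.Matrix.hstack(*vecs) if vecs else sp.zeros(dim, 1)

def perp_in_W(A):
    """A^perp in W, for A a matrix whose columns span a subspace of V."""
    return colspace((A.T * M).nullspace(), M.cols)

def perp_in_V(B):
    """B^perp in V, for B a matrix whose columns span a subspace of W."""
    return colspace((B.T * M.T).nullspace(), M.rows)

A = sp.Matrix([1, 0, 0])          # A = span(e1) in V
W_perp = perp_in_V(sp.eye(2))     # perp of all of W, = span(e3)
App = perp_in_V(perp_in_W(A))     # (A^perp)^perp

# (A^perp)^perp should equal A + W^perp = span(e1, e3);
# equality of spans is checked by comparing ranks.
expected = sp.Matrix.hstack(A, W_perp)
assert App.rank() == expected.rank() == sp.Matrix.hstack(App, expected).rank()
```

The rank comparison at the end is just a convenient way to assert equality of two subspaces given spanning sets.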

OK, that was pretty simple. My situation is that, instead of having a pairing down to the ground field, we have a bilinear pairing V \times W \to U into another finite dimensional vector space. 

What is the new fundamental theorem characterizing (A^{\perp})^{\perp}? In particular, under what conditions do we have (A^{\perp})^{\perp}=A?

Many thanks!


15 thoughts on “Bleg: Pairings into Vector Spaces”

  1. The original theorem says that a vector can be perpendicular to the perpendicular space in two ways: it’s in A already, or it’s perpendicular to everything in the other space.

    So can’t you consider your general bilinear pairing as a collection of bilinear pairings to the ground field? The ugly way to do it is just to pick a basis. Then, applying the theorem to each of the component pairings, don’t you get the analogous fact all over again? Either a vector is already in the subspace A, or it’s perpendicular (in each component way) to everything in W.

  2. Rather than picking a basis, one should be more categorical and just talk about all (linear) maps U\to k. Any such map composes with a U-valued pairing, and an element is 0 in U if and only if it is 0 under each map.

  3. Theo, that’s a better way of doing it, of course. I knew that my technique was ugly, but it could suffice to patch until the elegant version came along.

  4. But the argument in comment #1 doesn’t work. It only gives you a lower bound on the size of A-perp-perp. Suppose that you have two generic, non-degenerate bilinear forms b and c, and suppose that V = W and A is half-dimensional. Then A-perp = A-bperp cap A-cperp, which is generically {0}. So generically A-perp-perp = V, even though A-bperp-bperp = A-cperp-cperp = A and indeed V-bperp = V-cperp = {0}.

    Let’s look more carefully at the interpretation that you have a pencil of scalar bilinear forms. Suppose that V is finite-dimensional and the same dimension as W, and that some distinguished element b of this pencil is a non-degenerate form. Then you can use b^{-1} to convert the other forms in the pencil to operators on V. I think that if A is invariant under all of these operators, then you can conclude that A-perp-perp = A.

    But I admit that this is off the cuff and I’m not thinking it through all that carefully either. Some such construction has to work though.
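    To see this failure in the smallest case, here is a sympy sketch; the two specific forms below are my illustrative stand-ins for the “generic” b and c, but they exhibit the same collapse:

```python
import sympy as sp

# V = W = k^2, A = span(e1) is half-dimensional, and the U = k^2-valued pairing
# has the two non-degenerate components b(x, y) = x^T B y and c(x, y) = x^T C y.
B = sp.eye(2)
C = sp.Matrix([[0, 1], [1, 0]])

def perp(forms, A):
    """A^perp for the U-valued pairing with the given component forms.
    A is a matrix whose columns span a subspace; returns a spanning matrix."""
    rows = sp.Matrix.vstack(*[A.T * F for F in forms])
    ns = rows.nullspace()
    return sp.Matrix.hstack(*ns) if ns else sp.zeros(A.rows, 1)

A = sp.Matrix([1, 0])
A_perp = perp([B, C], A)
assert A_perp.rank() == 0                # A^perp = {0} ...

A_pp = perp([B.T, C.T], A_perp)          # ... so (A^perp)^perp is all of V
assert A_pp.rank() == 2

# yet under either scalar form alone, A-perp-perp comes back to A:
for F in (B, C):
    pp = perp([F.T], perp([F], A))
    assert pp.rank() == 1 and sp.Matrix.hstack(pp, A).rank() == 1
```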

  5. Can’t you make a reduction to the standard fundamental theorem? That is, a pairing V x W --> U is tantamount to a pairing

    V x (W \otimes U*) --> k

    and so for subspaces A of V, we have
    A-perpperp = A + (W \otimes U*)-perp. The “perp” we’re using here is a standard sort of perp used for k-valued pairings, but for the pairing displayed above this A-perpperp should match the A double Uperp used for U-valued pairings. And (W \otimes U*)-perp as a subspace of V should be the same thing as W-(Uperp), i.e., the kernel of V --> Hom(W, U). So that in the end,

    A-double Uperp = A + W-(Uperp).

    Hmm… right?

  6. Hmm… not so sure now about the identification of the two double perps in my comment above; looks like there may be some quantifier confusion here. So I should let it cook a little longer.

  7. Can’t you make a reduction to the standard fundamental theorem?

    No, because A-perp in W tensor U^* is very different from A-perp in W. For instance, as U grows, the former gets bigger, while the latter gets smaller.

  8. Thanks for all the replies so far!

    I’m beginning to think that there may not be a good answer. (Which means that I need to use specific information about my situation, rather than use a general result.) The only non-obvious true fact I can find is that ((A^{\perp})^{\perp})^{\perp}=A^{\perp}.

    For example, I can’t even guess whether the set of A such that (A^{\perp})^{\perp}=A is closed or not.
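    Here is a sympy spot-check of that triple-perp identity; the two component matrices of the pairing below are arbitrary choices, not from any particular example in the thread:

```python
import sympy as sp

# Check ((A^perp)^perp)^perp = A^perp for an arbitrary-looking U-valued pairing
# V x W -> U with V = W = k^3, U = k^2.  (The identity holds formally:
# A <= (A^perp)^perp forces ((A^perp)^perp)^perp <= A^perp, and applying
# "S <= (S^perp)^perp" to S = A^perp gives the reverse inclusion.)
F1 = sp.Matrix([[1, 2, 0], [0, 1, 1], [3, 0, 1]])
F2 = sp.Matrix([[0, 1, 1], [1, 0, 2], [1, 1, 0]])

def perp(forms, S):
    rows = sp.Matrix.vstack(*[S.T * F for F in forms])
    ns = rows.nullspace()
    return sp.Matrix.hstack(*ns) if ns else sp.zeros(S.rows, 1)

def same_span(P, Q):
    return P.rank() == Q.rank() == sp.Matrix.hstack(P, Q).rank()

A = sp.Matrix([1, 0, 0])
Ap = perp([F1, F2], A)            # A^perp, in W
App = perp([F1.T, F2.T], Ap)      # (A^perp)^perp, back in V
Appp = perp([F1, F2], App)        # triple perp, in W again
assert same_span(Appp, Ap)
```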

  9. You could extend your original field k to the fraction field k(U). Perhaps more reasonably, you could do something closer to what Todd suggested. For every integer $d$, you could consider $A^\perp_d$ and $(A^\perp)^\perp_d$ defined to be the perps for the pairing
    V \times (W\otimes_k \text{Sym}^d(U)) \rightarrow \text{Sym}^{d+1}(U).
    For $d$ sufficiently large, $(A^\perp)^\perp_d$ should become $A+W^\perp$. Perhaps it is profitable to study the sequence of subspaces $(A^\perp)^\perp_d$.

    I realize this is off-topic, but this discussion reminds me of something that came up when I was a student. We used to discuss the problem of classifying elements of $U\otimes V \otimes W$ up to $\textbf{GL}(U)\times \textbf{GL}(V)\times \textbf{GL}(W)$ analogous to the (easy) classification of elements of $U\otimes V$ up to $\textbf{GL}(U) \times \textbf{GL}(V)$ (which are classified by the rank of the associated matrix determined by any bases of $U$ and $V$). There was a “folk theorem” that every projective variety could be “encoded” in a $\textbf{GL}(U)\times \textbf{GL}(V)\times \textbf{GL}(W)$-orbit for some $U$, $V$ and $W$.

    One precise formulation would be a “natural” rule associating to every triple $(U,V,W)$ a vector space $T$ and a closed subscheme $Y$ of $\mathbb{P}(U\otimes V \otimes W) \times \mathbb{P}(T)$ such that every projective scheme $X$ is isomorphic to a fiber $Y_a$ for some $(U,V,W)$ and $a$ in $U\otimes V \otimes W$. Of course “natural” should at least mean the construction is $\textbf{GL}(U)\times \textbf{GL}(V) \times \textbf{GL}(W)$-equivariant. Something close is an easy exercise: every projective variety is isomorphic to the set of isotropic vectors for a symmetric bilinear pairing $V \times V \rightarrow U$. But I still do not know if the “folk theorem” is true. For instance, is every projective variety isomorphic to the rank $1$ locus of a matrix of linear forms?

  10. I realize it is really bad form to pose a question and then answer it yourself. But I realized the argument that shows every projective variety is isomorphic to one cut out by quadratic equations actually shows the other thing I asked: every variety is isomorphic to the rank 1 locus of a matrix of linear forms on a projective space.

    The point is that the image of the d-uple Veronese map is the rank 1 locus of a matrix of linear forms, at least over a characteristic 0 field. Simply consider the n+1 partial derivatives of a homogeneous polynomial of degree d in n+1 variables. Over the projective space of homogeneous degree d polynomials, partial differentiation gives a linear transformation from the n+1 dimensional vector space of 1st order linear differential operators to the vector space of degree d-1 polynomials (twisted by $O(1)$ on the projective space). This linear transformation has rank 1 precisely on the locus of degree d polynomials which are pure d-th powers of linear polynomials, i.e., on the image of the Veronese map.
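    This is easy to check by machine in the smallest case; here is a sympy sketch for binary cubics (n = 1, d = 3), with helper names of my own choosing:

```python
import sympy as sp

# For a binary cubic f, the 2-row matrix of coefficients of the partials
# df/dx, df/dy (in the degree-2 monomial basis) has rank 1 iff f is a pure cube.
x, y = sp.symbols('x y')

def partials_rank(f):
    """Rank of the span of the first partial derivatives of a binary cubic f."""
    basis = [x**i * y**(2 - i) for i in range(3)]   # monomials of degree 2
    rows = []
    for g in (sp.diff(f, x), sp.diff(f, y)):
        p = sp.Poly(g, x, y)
        rows.append([p.coeff_monomial(m) for m in basis])
    return sp.Matrix(rows).rank()

assert partials_rank(sp.expand((2*x + 3*y)**3)) == 1   # a pure cube: rank 1
assert partials_rank(x**3 + y**3) == 2                  # not a pure cube
```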

  11. Let’s say for simplicity that V=W, that U is a two-dimensional space k^2, and the bilinear pairing is the direct sum of two non-degenerate bilinear forms g, h: V \times V \to k (cf. John’s suggestion of using a basis). Then we have A^\perp = g(\hbox{ann}(A)) \cap h(\hbox{ann}(A)), where \hbox{ann}(A) := \{ y \in V^*: y(x)=0 \hbox{ for all } x \in A \} and g, h: V^* \to V are the maps associated to g, h. (Let’s take g, h to be symmetric to avoid some irrelevant notational ambiguities.) Taking double duals, we thus see that

    (A^\perp)^\perp = g( g^{-1}(A) + h^{-1}(A) ) \cap h( g^{-1}(A) + h^{-1}(A) ) = (A + g(h^{-1}(A)) ) \cap (A + h(g^{-1}(A))).

    From this we see that (A^\perp)^\perp contains A (which was obvious anyway), but could well contain more stuff as well if A, g, and h are positioned suitably. But if A is low dimensional and A, g, h are generic then this shouldn’t happen. We also see that things simplify if A is invariant under gh^{-1}, as in Greg’s comment. But in general I doubt that there is any particularly clean fundamental theorem lurking in this setup.
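    A sympy spot-check of this formula over Q; the symmetric invertible matrices G, H below are illustrative Gram matrices for the two forms, so the associated maps g, h: V^* \to V are G^{-1}, H^{-1}:

```python
import sympy as sp

# Two symmetric invertible Gram matrices for the pairing (x,y) -> (x^T G y, x^T H y).
G = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 1]])
H = sp.Matrix([[1, 0, 1], [0, 2, 0], [1, 0, 4]])

def span(vecs, dim):
    return sp.Matrix.hstack(*vecs) if vecs else sp.zeros(dim, 1)

def perp(forms, S):
    rows = sp.Matrix.vstack(*[S.T * F for F in forms])
    return span(rows.nullspace(), S.rows)

def intersect(P, Q):
    """Intersection of the column spans of P and Q."""
    ns = sp.Matrix.hstack(P, -Q).nullspace()
    return span([P * n[:P.cols, :] for n in ns], P.rows)

def same_span(P, Q):
    return P.rank() == Q.rank() == sp.Matrix.hstack(P, Q).rank()

A = sp.Matrix([1, 1, 0])
# forms are symmetric, so the same perp works in both directions
App = perp([G, H], perp([G, H], A))
# In matrices, g(h^{-1}(A)) = G^{-1} H A and h(g^{-1}(A)) = H^{-1} G A.
lhs = intersect(sp.Matrix.hstack(A, G.inv() * H * A),
                sp.Matrix.hstack(A, H.inv() * G * A))
assert same_span(App, lhs)
```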

  12. I don’t know the answer, but I bet that people do… You should look up multilinear algebra: you essentially have a trilinear form on V x W x U^*. Unfortunately the theory is not nearly as nice as the theory of bilinear forms; for instance, there are several different notions of “rank”. Another connection is to restrict to symmetric forms with V = W. Then you have a pencil of quadrics on V. Again there’s a large but not particularly simple theory.

  13. I should give people a quick update: Thanks for all of the replies above. The main thing that I learned was that this problem was complicated and I shouldn’t expect a nice answer. This encouraged me to go back and try harder to break Diane’s and my original conjecture; we now have several counter-examples. We are now in that frustrating but exciting stage of suspecting there is a theorem to be proved but not knowing what that theorem should be.

    Thank you all again!

  14. David: I have been meaning to qualify the main conclusion that there isn’t a nice answer, but I haven’t had the time to think through a complete explanation. What I want to say is that there can be a natural reason that A-perp-perp = A, if the bilinear form that you have is a scalar form in disguise. Let’s call the form “reflexive” if it has this property for some class of A’s.

    A special case that illustrates the basic idea: If F is a subfield of a field K, and you have a non-degenerate scalar bilinear form over K, then you can apply the forgetful functor to obtain a vector-valued form which is reflexive, for those A’s which happen to be K-modules.

    You can generalize this construction by letting K be an F-algebra instead, and letting V, W, U, and A be K-modules with some favorable properties. The basic question which I didn’t fully work out is, if K is a ring and V, W, and U are some modules, what are the natural conditions under which a non-singular U-valued bilinear form on V times W is reflexive for K-submodules A? For example, suppose that K = \Z, the integers. Then if V and W are free, and U is free of rank 1, and V/A is free too, then the form is reflexive for this class of A’s. This seems like the right level of generality for this particular ring.
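    A tiny check of the K = \Z case with the standard dot pairing on \Z^2 (my own minimal example, not from the comment): a primitive A has free quotient and is reflexive, while a non-saturated A is not.

```python
from math import gcd

def perp_Z2(v):
    """Generator of {w in Z^2 : v . w = 0} for a nonzero v in Z^2."""
    a, b = v
    g = gcd(a, b)
    return (-b // g, a // g)

A = (1, 2)                         # Z^2 / A is free: reflexive
pp = perp_Z2(perp_Z2(A))
assert pp in [A, (-A[0], -A[1])]   # A-perp-perp = A (up to sign of generator)

A2 = (2, 4)                        # Z^2 / A2 has 2-torsion: not reflexive
pp2 = perp_Z2(perp_Z2(A2))
assert pp2 in [(1, 2), (-1, -2)]   # perp-perp returns the saturation span{(1,2)},
assert pp2 not in [A2, (-2, -4)]   # strictly larger than A2
```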

    I’m also thinking, although again I’ve been too lazy to check, that you can let K be a group algebra, maybe a semisimple group algebra, and let U be an irrep of K.
