I’m closing in on the source of my confusion. In this post, I’m going to explain as much as I can in the case of Jacobians of genus one curves, that is to say, the case of elliptic curves. Of course there are about a zillion books on the classical topic of theta functions, and other elliptic functions, in one variable. I’m going to do a few things that I haven’t seen elsewhere though. I’m going to work entirely in the analytic world. You’ll never see a complex conjugate, a Hermitian matrix or, with one exception that I’ll discuss when I get to it, a real or imaginary part. As a result, my constructions will be analytic so, if I get down to compact spaces, I will be able to apply GAGA and conclude that they are algebraic. Also, I’m going to try as hard as possible not to make any arbitrary choices. Finally, of course, I have been thinking about the higher genus case, and I am trying to choose notation that will generalize well. There will probably be a followup post shortly, discussing what changes in the higher genus case. So far, it looks like the things that are different are basically orthogonal to the things that interest me.

For those who haven’t seen $\theta$-functions before, let me give this advertisement — just as every polynomial is a product of linear factors, every function on an elliptic curve is a product of $\theta$-functions. If you care about elliptic curves, it should be pretty obvious why you care about $\theta$-functions.

Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $2$, let $V$ be a complex vector space of dimension $1$ and let $\phi: \Lambda \to V$ be a linear map whose image is a lattice. If we choose coordinates, of course, $V \cong \mathbb{C}$, $\Lambda \cong \mathbb{Z}^2$, and $\phi$ is a $1 \times 2$ matrix $(\omega_1 \ \omega_2)$. But we’re not going to do that until we can’t help it. This discipline will reward us when we get to higher genus.

One technical note first: the space of maps $\phi$ whose image is a lattice has two connected components. This is because the complex structure on $V$ gives an orientation to $V$, and we can ask which of the two orientations on $\Lambda \otimes \mathbb{R}$ it corresponds to under $\phi$. We’ll fix one component and stay in it. We will call the corresponding orientation on $\Lambda$ the *standard orientation*. In coordinates, this means that we require that the imaginary part of $\omega_2/\omega_1$ be positive. This is the “one exception” I warned you about above, where I take the imaginary part of something. Observe that the upper half plane is still a *complex* submanifold of $\mathbb{C}$, so we haven’t left the complex analytic category.

**Theta functions and Polarization**

We’d like to build an analytic function $\theta$ on $V$ whose zeroes are $\phi(\Lambda)$-periodic. (So this will give us a finite set of zeroes in $V/\phi(\Lambda)$.) So, for $\lambda$ in $\Lambda$, the function $\theta(z + \phi(\lambda))/\theta(z)$ has no zeroes and can be written as $e^{h_{\lambda}(z)}$ for some analytic function $h_{\lambda}$. By definition, $\theta$ is called a $\theta$-function if, for every $\lambda$, the function $h_{\lambda}$ is an affine linear function, that is to say, $\theta(z + \phi(\lambda))/\theta(z)$ is of the form $e^{a(\lambda)(z) + b(\lambda)}$. The pair $(a, b)$ is called the *holonomy factor* of $\theta$. Naturally speaking, $a$ is a $V^{\vee}$-valued function on $\Lambda$ and $b$ is a $\mathbb{C}$-valued function on $\Lambda$. (Here $V^{\vee}$ is the dual vector space to $V$.)

The most basic $\theta$-functions are the trivial $\theta$-functions, which are of the form $e^{\alpha z^2 + \beta z + \gamma}$ (in coordinates) for some constants $\alpha$, $\beta$ and $\gamma$. These have no zeroes, and we will eventually see that they are the only $\theta$-functions with no zeroes. Note for future reference that, for a trivial $\theta$-function, $a(\lambda)$ is a linear function of $\phi(\lambda)$.

What properties must $(a, b)$ obey? Computing $\theta(z + \phi(\lambda) + \phi(\mu))$ in two ways, we see that

$\displaystyle a(\lambda + \mu)(z) + b(\lambda + \mu) \equiv a(\lambda)(z + \phi(\mu)) + b(\lambda) + a(\mu)(z) + b(\mu) \mod 2 \pi i \mathbb{Z}.$

Thinking a little, we have

$\displaystyle a(\lambda + \mu) = a(\lambda) + a(\mu)$

and

$\displaystyle b(\lambda + \mu) \equiv b(\lambda) + b(\mu) + a(\lambda)(\phi(\mu)) \mod 2 \pi i \mathbb{Z}.$

So the first equation simply states that $a$ is a linear map. As a corollary of the second equation (compare $b(\lambda + \mu)$ with $b(\mu + \lambda)$),

$\displaystyle E(\lambda, \mu) := \frac{1}{2 \pi i} \left( a(\lambda)(\phi(\mu)) - a(\mu)(\phi(\lambda)) \right)$

is an integer-valued, skew symmetric form on $\Lambda$. This form is called the *polarization*. (Trivial $\theta$-functions have polarization zero.) Let $\langle \ , \ \rangle$ be the skew symmetric form on $\Lambda$ which assigns $1$ to any basis which respects the standard orientation of $\Lambda$ and let

$\displaystyle E = d \, \langle \ , \ \rangle.$

Then $d$ is the number of zeroes of $\theta$ inside a fundamental domain for $\phi(\Lambda)$. (This is a very nice exercise. Hint: integrate $\theta'/\theta$ around the boundary of a fundamental parallelogram.) In particular, if $d$ is negative, there are no non-zero $\theta$-functions. From now on, we will focus on the case $d = 1$, which is called *principal polarization*.
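To make the exercise concrete, here is a quick numerical sanity check in Python (a sketch of mine, not from the text; `theta` below is the classical series $\theta(z) = \sum_n e^{\pi i n^2 \tau + 2 \pi i n z}$, which has $d = 1$): the winding-number integral $\frac{1}{2\pi i} \oint \theta'/\theta \, dz$ around a fundamental parallelogram counts exactly one zero.

```python
import cmath

def theta(z, tau, N=30):
    """Truncated classical series: sum over n of exp(pi i n^2 tau + 2 pi i n z)."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

def theta_prime(z, tau, N=30):
    """Term-by-term z-derivative of the same series."""
    return sum(2j * cmath.pi * n *
               cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

def winding(tau, z0, steps=400):
    """(1 / 2 pi i) * contour integral of theta'/theta around the fundamental
    parallelogram with corners z0, z0 + 1, z0 + 1 + tau, z0 + tau."""
    corners = [z0, z0 + 1, z0 + 1 + tau, z0 + tau, z0]
    total = 0j
    for a, b in zip(corners, corners[1:]):
        for k in range(steps):
            z = a + (b - a) * (k + 0.5) / steps   # midpoint rule on each side
            total += theta_prime(z, tau) / theta(z, tau) * (b - a) / steps
    return total / (2j * cmath.pi)

tau = 1j             # the lattice Z + Z i; Im(tau) > 0 as the orientation demands
z0 = 0.1 + 0.1j      # base point chosen so no zero lies on the boundary
print(round(winding(tau, z0).real))   # -> 1: one zero per fundamental domain
```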

**Some basic facts**

**Fact 1:** There is a linear map $a_0: \Lambda \to V^{\vee}$ such that $a_0(\lambda)(\phi(\mu)) - a_0(\mu)(\phi(\lambda)) = 2 \pi i E(\lambda, \mu)$. If $a_0$ is one such linear map, then any other such map is of the form $a_0 + B \circ \phi$ where $B$ is a linear map $V \to V^{\vee}$. Given a $\theta$-function with holonomy $(a_0 + B \circ \phi, b)$, one can always divide it by the trivial $\theta$-function $e^{\frac{1}{2} B(z)(z)}$ to obtain a $\theta$-function with holonomy $(a_0, b')$ for some $b'$.

Thus, if we are only interested in the zeroes of $\theta$, we may always choose one solution $a_0$ and rewrite everything in terms of it. Many classical references, especially for the higher genus case, make the choice $a_0(\lambda) = \pi H(\phi(\lambda), \ \cdot \ )$, where $H$ is usually described as “the unique Hermitian form on $V$ whose imaginary part is $E$.” Although natural, this choice is not complex analytic, so we will not make it. In fact, we will not make any choice for $a_0$.

In higher genus, $a_0$ may not exist; the claim that it does is part of Riemann’s period relations. Also, in higher genus, the map $B$ must be self-adjoint, meaning that $B(u)(v) = B(v)(u)$; this is trivial in the one dimensional case.
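Here is a small numerical illustration of the mechanism in Fact 1 (a Python sketch of mine; the names `theta`, `psi` and the coefficient `alpha` are my own, not from the text): multiplying by a trivial $\theta$-function $e^{\alpha z^2}$ shifts the holonomy $a(\lambda)$ by the linear function $2\alpha\,\phi(\lambda)$, which is exactly the $B \circ \phi$ ambiguity.

```python
import cmath

def theta(z, tau, N=30):
    """Truncated classical series: sum over n of exp(pi i n^2 tau + 2 pi i n z)."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 0.3 + 0.8j          # a point of the upper half plane
alpha = 0.4 - 0.2j        # arbitrary quadratic coefficient for the trivial factor

def psi(z):
    """theta multiplied by the trivial theta-function exp(alpha z^2)."""
    return cmath.exp(alpha * z * z) * theta(z, tau)

# theta itself satisfies theta(z + 1) = theta(z), i.e. a(lambda_1) = 0.  After
# multiplying by exp(alpha z^2), the lambda_1-holonomy picks up the factor
# exp(2 alpha z + alpha): the coefficient of z in the exponent shifted by
# 2 alpha phi(lambda_1), a linear function of phi(lambda_1).
for z in [0.1 + 0.2j, -0.3 + 0.05j]:
    assert abs(psi(z + 1) / psi(z) - cmath.exp(alpha * (2 * z + 1))) < 1e-9
```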

**Fact 2:** For any map $a$ as in Fact 1, there is a function $b: \Lambda \to \mathbb{C}$ obeying $b(\lambda + \mu) \equiv b(\lambda) + b(\mu) + a(\lambda)(\phi(\mu)) \mod 2 \pi i \mathbb{Z}$. If $b_0$ is one such function, all other such functions are obtained, modulo $2 \pi i \mathbb{Z}$, by adding a linear function $\ell: \Lambda \to \mathbb{C}$ to $b_0$. Given a $\theta$-function $\theta$ for $(a, b)$, we can form a $\theta$-function for $(a, b + \ell)$ by replacing $\theta(z)$ by $e^{\beta(z)} \theta(z + v)$ for some $\beta \in V^{\vee}$ and $v \in V$. We can take $v = 0$ if and only if $\ell$ is pulled back from $V$ (that is, $\ell = \beta \circ \phi$).

We consider the case $v = 0$ to be “boring”, as it doesn’t change the zero locus of $\theta$. Thus the options for $b$, modulo “boring modifications”, are a principal homogeneous space for

$\displaystyle \mathrm{Hom}(\Lambda, \mathbb{C}) / \left( \phi^{\vee}(V^{\vee}) + \mathrm{Hom}(\Lambda, 2 \pi i \mathbb{Z}) \right).$

This is called the dual torus to $V/\phi(\Lambda)$. In the principally polarized case, and also in the genus one case, this can be identified with $V/\phi(\Lambda)$, but not in general.

**Fact 3:** Remember that we are in the principally polarized case. For any $(a, b)$ obeying the required equations, there is a unique nonzero $\theta$-function with holonomy $(a, b)$, up to scaling. (More generally, if the polarization is $d > 0$, then there is a $d$-dimensional space of $\theta$-functions.)

One proves Fact 3 by constructing a particular $\theta$-function. The details of this construction matter because it is *that particular function* whose zeroes are supposed to be the $\Theta$-divisor. So, let’s explain how this is done.

Using Facts 1 and 2, we may reduce to our favorite choice of $(a, b)$. Here is our favorite choice: choose a primitive element $\lambda_1$ in $\Lambda$ and require that $a(\lambda_1) = 0$. This can be shown to fix $a$ uniquely. In coordinates, if $\phi = (\omega_1 \ \omega_2)$ and $\lambda_1 = (1, 0)$, then $a(c_1, c_2) = -2 \pi i c_2/\omega_1$. The favorite choice of $b$ then involves picking a second element $\lambda_2$ such that $(\lambda_1, \lambda_2)$ is an oriented basis for $\Lambda$ and normalizing $b(\lambda_1) = 0$ and $b(\lambda_2) = -\pi i \omega_2/\omega_1$. These determine all of the other values of $b$.

Since $a(\lambda_1)$ and $b(\lambda_1)$ are zero, $\theta$ is periodic in the $\phi(\lambda_1)$ direction and, thus, $\theta(z) = \sum_{n \in \mathbb{Z}} c_n e^{2 \pi i n z/\omega_1}$ for some constants $c_n$. (Of course, I’m using coordinates and one-dimensionality of $V$ to cheat a bit. The expression $e^{2 \pi i n z/\omega_1}$ really means $e^{2 \pi i n \ell_1(z)}$, where $\ell_1$ is the functional on $V$ such that $\ell_1(\phi(\lambda_1)) = 1$.) Writing down the functional equation for translation by $\phi(\lambda_2)$, we deduce the standard formula

$\displaystyle \theta(z) = \sum_{n \in \mathbb{Z}} e^{\pi i n^2 \omega_2/\omega_1} e^{2 \pi i n z/\omega_1}.$

Well, fairly standard. Plug in $\omega_1 = 1$ and $\omega_2 = \tau$ to get the truly classical formula $\theta(z) = \sum_{n \in \mathbb{Z}} e^{\pi i n^2 \tau + 2 \pi i n z}$. Note that we used our assumption on orientations, $\mathrm{Im}(\omega_2/\omega_1) > 0$, to make sure the sum converged.
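If you want to check the formula numerically, here is a short Python sketch (mine, not from the text; it truncates the sum at $|n| \le 30$, which is plenty when $\mathrm{Im}\,\tau$ is bounded away from $0$) verifying the two functional equations and the standard location of the single zero at $(1 + \tau)/2$.

```python
import cmath

def theta(z, tau, N=30):
    """Truncation of theta(z) = sum over n of exp(pi i n^2 tau + 2 pi i n z)."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 0.3 + 0.8j          # any point of the upper half plane
z = 0.17 - 0.05j          # arbitrary test point

# Periodicity in the lambda_1 direction: theta(z + 1) = theta(z).
assert abs(theta(z + 1, tau) - theta(z, tau)) < 1e-10

# Quasi-periodicity in the lambda_2 direction:
# theta(z + tau) = exp(-pi i tau - 2 pi i z) * theta(z).
lhs = theta(z + tau, tau)
rhs = cmath.exp(-cmath.pi * 1j * tau - 2j * cmath.pi * z) * theta(z, tau)
assert abs(lhs - rhs) < 1e-10

# The single zero in a fundamental domain sits at (1 + tau)/2.
assert abs(theta((1 + tau) / 2, tau)) < 1e-10
```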

Fact 3 has very interesting consequences. If we made a different choice of $\lambda_1$ and $\lambda_2$, we’d get a different $\theta$-function $\theta'$. Then Facts 1, 2 and 3 tell us there would be some sort of relation

$\displaystyle \theta'(z) = e^{\alpha z^2 + \beta z + \gamma} \, \theta(z + \delta).$
This gets into the fascinating topic of modular forms, which is not where I want to go today.

**How do the zeroes of $\theta$ transform?**

In the previous formula, I am only interested in how the zeroes of $\theta'$ and $\theta$ relate, so I only care about $\delta$. In the higher genus case, the fact that changing our basis just translates the zero locus of $\theta$ is interesting, but for genus one, it is vacuous. Can’t we say anything more about $\delta$?

I tried to compute $\delta$ from the definition, or by looking in references, and got horribly stuck. I think, though, that I have a way around this by looking at how a change of basis affects $(a, b)$. By Fact 3, $(a, b)$ determines $\theta$ up to a constant, so this should at least in principle be doable. Be warned that everything from this point on is due to me (although I’m sure other people have thought of this too) and may contain errors.

Wow, this post is long! If you want to stretch your legs, this is a good point.

**The conclusion**

The first thing we need to do is get rid of the confusing effect of trivial $\theta$-functions. Given $(a, b)$, let $g(\lambda) = b(\lambda) - \frac{1}{2} a(\lambda)(\phi(\lambda))$. Multiplying $\theta$ by a trivial $\theta$-function modifies $g$ by a linear function of $\lambda$. Plugging into the functional equation of $b$,

$\displaystyle g(\lambda + \mu) \equiv g(\lambda) + g(\mu) + \pi i E(\lambda, \mu) \mod 2 \pi i \mathbb{Z}.$

(I urge you to check the $2$’s carefully; this whole argument depends on factors of $2$.) Now, if I worked it out correctly, with the standard choice of $(a, b)$ above, $g(c_1 \lambda_1 + c_2 \lambda_2) \equiv \pi i c_1 c_2 \mod 2 \pi i \mathbb{Z}$. (Note how sneaky it is; if you just look at the special cases $(c_1, 0)$ or $(0, c_2)$ you miss it!) That means that, for any choice of $\lambda_1$ and $\lambda_2$, the resulting $g$ lies in $\pi i \mathbb{Z}$. Let’s say that $(a, b)$ is *elegant* if $g$ lies in $\pi i \mathbb{Z}$. So our choice of a basis for $\Lambda$ can not give rise to an arbitrary principally polarized $\theta$-function, but only to an elegant one. Moreover, up to the symmetry of multiplying by trivial $\theta$-functions, it’s not too hard to show that an elegant $\theta$-function is determined by the function $g$ modulo $2 \pi i \mathbb{Z}$.
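Since factors of $2$ are so easy to drop, here is a Python sketch that double-checks the claim (the helpers `a`, `b`, `phi`, `g` are my transcriptions of the formulas above, in coordinates with $\omega_1 = 1$, $\omega_2 = \tau$): building $b$ from the cocycle and the normalization $b(\lambda_1) = 0$, $b(\lambda_2) = -\pi i \tau$, the difference $g(c_1\lambda_1 + c_2\lambda_2) - \pi i c_1 c_2$ always lands in $2\pi i \mathbb{Z}$.

```python
import cmath
PI_I = cmath.pi * 1j

tau = 0.3 + 0.8j   # omega_1 = 1, omega_2 = tau in the coordinates above

def a(l):
    """Favorite choice: a(c1, c2) = -2 pi i c2 / omega_1, with omega_1 = 1."""
    c1, c2 = l
    return -2 * PI_I * c2

def phi(l):
    """phi(c1, c2) = c1 + c2 tau."""
    c1, c2 = l
    return c1 + c2 * tau

def b(l):
    """Build b(c1 lambda1 + c2 lambda2) from b(lambda1) = 0, b(lambda2) = -pi i tau
    by iterating the cocycle b(l + m) = b(l) + b(m) + a(l) phi(m)."""
    c1, c2 = l
    val, cur = 0j, (0, 0)
    for step in [(1, 0)] * c1 + [(0, 1)] * c2:   # add generators one at a time
        base = 0 if step == (1, 0) else -PI_I * tau
        val = val + base + a(cur) * phi(step)
        cur = (cur[0] + step[0], cur[1] + step[1])
    return val

def g(l):
    """g(lambda) = b(lambda) - (1/2) a(lambda)(phi(lambda))."""
    return b(l) - a(l) * phi(l) / 2

# g(c1, c2) should equal pi i c1 c2 modulo 2 pi i Z.
for c1 in range(5):
    for c2 in range(5):
        diff = (g((c1, c2)) - PI_I * c1 * c2) / (2 * PI_I)
        assert abs(diff - round(diff.real)) < 1e-9
```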

How many such functions are there? If $g$ is one such function, and $g'$ another, then $g - g'$ is a linear function $\Lambda \to \pi i \mathbb{Z}/2 \pi i \mathbb{Z}$. So the $g$’s form a principal homogeneous space for $\mathrm{Hom}(\Lambda, \mathbb{Z}/2\mathbb{Z})$. Explicitly, the four possible functions are $\pi i c_1 c_2$, $\pi i (c_1 c_2 + c_1)$, $\pi i (c_1 c_2 + c_2)$ and $\pi i (c_1 c_2 + c_1 + c_2)$, all modulo $2 \pi i \mathbb{Z}$. As we change bases of $\Lambda$, we only see these four $g$’s (modulo $2 \pi i \mathbb{Z}$) and thus only see four different zeroes for our $\theta$-functions. Explicitly (if I didn’t screw up), if our basis is $(p \lambda_1 + q \lambda_2, \ r \lambda_1 + s \lambda_2)$, the resulting function is $g(c_1 \lambda_1 + c_2 \lambda_2) \equiv \pi i (c_1 c_2 + q s \, c_1 + p r \, c_2) \mod 2 \pi i \mathbb{Z}$.

Take the space of orientation preserving maps $\phi: \mathbb{Z}^2 \to \mathbb{C}$ whose image is a lattice, and quotient by $SL_2(\mathbb{Z})$ acting on $\mathbb{Z}^2$ and $\mathbb{C}^*$ acting on $\mathbb{C}$. We get the $j$-line. Everything we did was analytic, so we get that there is an analytic correspondence which, given $j$, creates $4$ points on the elliptic curve with $j$-invariant $j$. Thinking a little harder about analysis (because the $j$-line is not compact) and a little harder about stacky issues, GAGA lets us deduce that this correspondence is algebraic.

**(Anti)-climax**

Come to think of it, there is a much easier way to algebraically get $4$ points on an elliptic curve: just take the $2$-torsion. The reason I wrote this out, though, is because things will be more interesting with other principally polarized abelian varieties (like Jacobians). I’m still checking details, but I think I’ll get that there is an algebraic construction of a $2^{2g}$-tuple of hypersurfaces in any $g$-dimensional principally polarized abelian variety. This should be the $2^{2g}$-tuple of $\Theta$-divisors that Jordan Ellenberg promised me.

Also, I have a misgiving! One of the $2$-torsion points of an elliptic curve is special, namely, the origin. I should be able to see that from my argument. Indeed, I can. Only three of the four $g$’s actually show up. If $(p \lambda_1 + q \lambda_2, \ r \lambda_1 + s \lambda_2)$ is a basis of $\Lambda$, then $g$ can not be $\pi i (c_1 c_2 + c_1 + c_2)$. If it were, then $q s$ and $p r$ would both be odd, meaning that $p$, $q$, $r$ and $s$ were all odd. Then the determinant $p s - q r$ of the change of basis is even, a contradiction!
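The parity argument is easy to confirm by brute force; here is a Python sketch (mine) that enumerates all parity classes of oriented bases and checks that the class giving $g = \pi i (c_1 c_2 + c_1 + c_2)$ never occurs.

```python
from itertools import product

# For a basis (p lambda1 + q lambda2, r lambda1 + s lambda2), the resulting g is
# pi i (c1 c2 + qs c1 + pr c2) mod 2 pi i, so only the parities of qs and pr
# matter.  Enumerate all (p, q, r, s) mod 2 with odd determinant (the
# determinant of an actual basis change is +-1, hence odd).
seen = set()
for p, q, r, s in product(range(2), repeat=4):
    if (p * s - q * r) % 2 == 1:
        seen.add((q * s % 2, p * r % 2))

print(sorted(seen))   # -> [(0, 0), (0, 1), (1, 0)]

# The class (1, 1), i.e. g = pi i (c1 c2 + c1 + c2), never shows up:
assert (1, 1) not in seen
assert len(seen) == 3
```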

Well, that’s OK then. But I wonder if something similar happens for higher $g$. Do all of the $2^{2g}$ choices of $\Theta$-divisor actually show up for some choice of basis of $\Lambda$?

**References:** I referred frequently to Mumford’s *Lectures on Theta* and to the first chapter of Hindry and Silverman’s *Diophantine Geometry* in preparing this post.
