When I first tried to read Griffiths and Harris’s *Principles of Algebraic Geometry*, I was baffled by formulas like $\frac{\partial |z|}{\partial \bar{z}} = \frac{z}{2|z|}$. The absolute value function wasn’t analytic, so its derivative with respect to $z$ wasn’t defined. And what were all these $\bar{z}$’s I was seeing? What were they, and why didn’t they seem to be equal to $z$?

Maybe I’m the only person who was confused by this. But, if this stuff bothers you too, then this post is for you.

In algebraic geometry, the most important functions are the analytic functions. (In this post, “analytic” means “complex analytic”.) Indeed, much of the progress in algebraic geometry in the last fifty years has been learning how to study the geometry of algebraic varieties using only the algebraic, and hence analytic, functions on those varieties. This is especially necessary for those who want to prove results over fields other than $\mathbb{C}$.

Before learning these ideas, though, one should probably learn how to study smooth functions on complex varieties. In particular, the de Rham theory is much nicer if we allow all smooth functions, rather than restricting to just analytic ones. (To get a few hints of why, remember that a bounded analytic function on $\mathbb{C}$ is constant, and nonzero analytic functions never have compact support.)

So, algebraic geometers have developed a notation which allows them to work with smooth functions that are not analytic. At the same time, analytic functions do play a special role in the theory, so the notation is particularly adapted to work well with analytic functions. This can be confusing to the beginner (it was for me!) because it is easy to memorize results which hold only in the analytic case and try to apply them in the smooth case.

In the rest of this post, I will explain this notation. I will assume you are familiar with differential forms; if you are not, I recommend Terry Tao’s PCM article.

To start out with, suppose that we have a smooth function $f$ from $\mathbb{C}$ to $\mathbb{C}$. For example: $f(z) = |z|^2$. Then we can take its differential and get a differential form $df$. When we evaluate $df$ on a tangent vector $v$, at a point $u$, we get a measure of how the function changes between $u$ and $u + \epsilon v$, for small real $\epsilon$. For example, with $f$ as above, we have $df(v) = \bar{u} v + u \bar{v}$. Of course, $df$ is a complex valued one form, because $f$ is a complex valued function, but we can still think of $df$ as measuring change along perfectly ordinary tangent vectors.
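To make this concrete, here is a small numeric sketch (my own illustration, not from the post; the function names are mine): it approximates $df(v)$ at a point $u$ for the smooth, non-analytic function $f(z) = |z|^2$ by differentiating $f(u + \epsilon v)$ in the real parameter $\epsilon$, and compares with the closed form $\bar{u} v + u \bar{v}$.

```python
def f(z):
    # f(z) = |z|^2 = z * conj(z): smooth, real valued, but not analytic
    return abs(z) ** 2

def df(u, v, h=1e-6):
    """Central difference for (d/d eps) f(u + eps*v) at eps = 0, eps real."""
    return (f(u + h * v) - f(u - h * v)) / (2 * h)

u, v = 1 + 2j, 3 - 1j
exact = (u.conjugate() * v + u * v.conjugate()).real  # = 2*Re(conj(u)*v)
print(df(u, v), exact)  # the finite difference matches the closed form
```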

We could write $df$ as $\frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy$. However, some experience shows that it is better to express one forms in terms of $dz$ and $d\bar{z}$. What do these symbols mean? Well, $z$ and $\bar{z}$ are complex valued functions on $\mathbb{C}$, so their differentials are one forms. One can check that their differentials are everywhere linearly independent, so every one form can be written uniquely as a linear combination of $dz$ and $d\bar{z}$. For example, the above function is just $z \bar{z}$, so $df = \bar{z}\, dz + z\, d\bar{z}$. Supposing instead that I had considered $g(z) = z^2$. Then $dg = 2z\, dz$.

This illustrates the general principle: **If $f$ is an analytic function, then $df = f'(z)\, dz$,** where $f'$ is the derivative you learned in your first complex analysis course. The function $f'$ will also be analytic. On the other hand, **if $f$ is a smooth, but not analytic, function, then $df$ will be of the form $g\, dz + h\, d\bar{z}$**. Neither $g$ nor $h$ will necessarily be analytic.

In general, **when you are working with analytic functions, all the rules you learned in single variable calculus work**: the sum rule, the product rule, the chain rule and so forth. On the other hand, **when you are working with smooth but nonanalytic functions, everything works the way you learned in multivariable calculus.** In particular, this explains my confusion above about why $d\bar{z}$ isn’t $\frac{\partial \bar{z}}{\partial z}\, dz$; it’s the same reason that, writing $x$ and $y$ for the coordinates on $\mathbb{R}^2$, the one-form $dy$ isn’t $\frac{\partial y}{\partial x}\, dx$.
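Here is a short numeric check of this dichotomy (my own sketch, not from the post), using the standard Wirtinger formulas $\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right)$ and $\frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right)$: for an analytic function the $d\bar{z}$ coefficient vanishes, while for $\bar{z}$ itself it is all there is.

```python
def wirtinger(f, z, h=1e-6):
    """Approximate (df/dz, df/dzbar) from the x and y partials of f."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # partial in the x direction
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial in the y direction
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

# Analytic: f(z) = z**2 behaves like one-variable calculus; df/dzbar = 0.
dz, dzbar = wirtinger(lambda z: z * z, 1 + 1j)
print(dz, dzbar)    # dz ~ 2z = 2+2j, dzbar ~ 0

# Smooth but not analytic: f(z) = conj(z) has df/dz = 0 and df/dzbar = 1.
dz2, dzbar2 = wirtinger(lambda z: z.conjugate(), 1 + 1j)
print(dz2, dzbar2)
```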

One-forms of the form $g\, dz$ are called $(1,0)$ forms, while one-forms of the form $g\, d\bar{z}$ are called $(0,1)$ forms. More generally, if we are working with functions of $n$ complex variables, we will have $(p,q)$-forms, for $p, q \leq n$. In coordinates, a $(p,q)$-form is a form that can be written as a sum of terms of the form a smooth function times

$dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar{z}_{j_1} \wedge \cdots \wedge d\bar{z}_{j_q}$.

More conceptually, a $(p,q)$-form is a $(p+q)$-form $\omega$ such that

$\omega(\lambda v_1, \lambda v_2, \ldots, \lambda v_{p+q}) = \lambda^p \bar{\lambda}^q\, \omega(v_1, v_2, \ldots, v_{p+q})$,

for any complex number $\lambda$ and any tangent vectors $v_1$, $v_2$, …, $v_{p+q}$.

This seems like a good point to distinguish two concepts which confused me when I was learning this material. A $(p,0)$-form is a sum of terms of the form a smooth function times $dz_{i_1} \wedge \cdots \wedge dz_{i_p}$. A holomorphic $p$-form is a sum of terms of the form an analytic function times $dz_{i_1} \wedge \cdots \wedge dz_{i_p}$. Both of them can intuitively be thought of as “a form which is purely holomorphic”, but they make this concept rigorous in different ways.

Finally, what is $\frac{\partial f}{\partial \bar{z}}$? **By definition,**

$df = \frac{\partial f}{\partial z}\, dz + \frac{\partial f}{\partial \bar{z}}\, d\bar{z}$.

Notice that this equation makes sense: $df$, $dz$ and $d\bar{z}$ are all one forms, whose meaning we know. The expressions $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial \bar{z}}$ denote complex-valued functions of $z$, which are determined by the above equation. **When $f$ is analytic, $\frac{\partial f}{\partial z} = f'(z)$ and $\frac{\partial f}{\partial \bar{z}} = 0$.** But, **when $f$ is merely smooth**, you pretty much have to fall back on the definition.
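As a small worked example (mine, not from the original post): take $f = |z|^2 = z \bar{z}$. The product rule gives

$df = \bar{z}\, dz + z\, d\bar{z}$,

so, reading off coefficients, $\frac{\partial f}{\partial z} = \bar{z}$ and $\frac{\partial f}{\partial \bar{z}} = z$. The second coefficient is nonzero, matching the fact that $|z|^2$ is smooth but not analytic.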

If you are still confused by all this notation, I recommend trying to read a book which uses a lot of it, thinking back frequently to the definitions to make sure everything makes sense. Pretty soon, everything will seem obvious and second nature. At that point, you’ll be ready to confuse everyone else!

Nice explanation. I especially liked your discussion of when single-variable calculus is the right analogy and when multi-variable calculus is the right analogy. I remember being puzzled about this sort of thing when I took complex analysis, sorting out when it helps to think of a complex variable as two real variables and when it leads you astray.

I’m always confused about using complex vector fields. Another thing which is correct in this multivariable calculus analogy is that the fields $\frac{\partial}{\partial z}$, $\frac{\partial}{\partial \bar{z}}$ commute, and so you can regard $z$ and $\bar{z}$ as independent variables, and the computations with them become almost trivial. For instance, with real variables $x$ and $y$ you will have $\frac{\partial y}{\partial x} = 0$. So, going to complex variables, you’ll have $\frac{\partial \bar{z}}{\partial z} = 0$.

Applying this rule freaks me out completely and I have to recheck the result in other ways. How can you regard $z$ and $\bar{z}$ as independent variables???

Just wanted to note that I took the liberty of cleaning up some of Boris’s LaTeX, and fixed a sign error in the original post.

I’m not sure I have a good intuitive answer to Boris’s question. As I suggested in the post, this is the sort of thing that I just have to deal with by checking things from the definitions a bunch of times, until it becomes so intuitive that it no longer bothers me.

@ Boris

My understanding of the situation is as follows:

The pairs of operators $(\partial/\partial z, \partial/\partial \bar{z})$ and $(\partial/\partial x, \partial/\partial y)$ [and their duals] are equivalent as complex valued operators on the space of real valued/complex valued smooth functions on the real plane (modified up to some minor signs and constants). As a result, when one is interested strictly in the analytic properties of these variables, they can easily be thought of as independent variables.

An analogous situation in algebra would be what are called polarization identities, where if a norm (on a vector space) comes from an inner product then you can recover the inner product from the norm. This is also true for higher dimensional alternating forms and their restriction to the diagonal (though the corresponding formulas are more complicated).

The fundamental problem of defining a derivative in the complex plane is that one can approach the base point along all different directions. The usual solution is to demand that the derivative limit agrees along any chosen direction (actually any sequence). This is the analytic derivative we all know. But another route is to instead define a derivative as the average of the directional derivatives taken over all directions $e^{i\theta}$. This is the partial derivative $\frac{\partial f}{\partial z}$.

If the function is $\mathbb{R}^2$-differentiable – and of course it is – then the averaging integral reduces to a two-term average over two independent directions. These can be taken to be along the real and imaginary axes, and we get $\frac{\partial f}{\partial z}=\frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{i}\frac{\partial f}{\partial y}\right)$. The definition of $\frac{\partial f}{\partial \bar{z}}$ is exactly the same but with, as the notation suggests, a conjugation in the denominator of the limit quotient. Reduced to a two-term average we get $\frac{\partial f}{\partial \bar{z}}=\frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{-i}\frac{\partial f}{\partial y}\right)$.
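A numeric sketch of this averaging picture (my own code; the function and its name are made up for illustration): the mean of the directional difference quotients over many directions reproduces the two-term Wirtinger formulas, here tested on the smooth, non-analytic $f(z) = z^2 + 3\bar{z}$.

```python
import cmath

def f(z):
    return z * z + 3 * z.conjugate()  # df/dz = 2z, df/dzbar = 3

def avg_over_directions(f, z, n=360, h=1e-5, conjugate=False):
    """Average the difference quotient over n unit directions e^{i theta};
    dividing by the direction gives df/dz, by its conjugate gives df/dzbar."""
    total = 0.0
    for k in range(n):
        d = cmath.exp(2j * cmath.pi * k / n)
        denom = d.conjugate() if conjugate else d
        total += (f(z + h * d) - f(z - h * d)) / (2 * h * denom)
    return total / n

z0 = 0.5 - 0.25j
print(avg_over_directions(f, z0))                  # ~ df/dz    = 2*z0 = 1 - 0.5j
print(avg_over_directions(f, z0, conjugate=True))  # ~ df/dzbar = 3
```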

What are $dz$ and $d\bar{z}$? At a point, $z$ is a complex linear coordinate on the tangent space and $\bar{z}$ is its conjugated value. $dz$ is a complex-linear 1-form and $d\bar{z}$ is an antilinear 1-form. $z$ and $\bar{z}$ are functionally fully dependent, but $dz$ and $d\bar{z}$ are at the very same time linearly independent.

All of this confusion has its roots in linear algebra, not in calculus, so checking, e.g., Huybrechts’ *Complex Geometry* may alleviate some of the pain.

If you have $(V, I)$, a real vector space with an almost complex structure, there is an isomorphism

$$ V\otimes_{\mathbb{R}} \mathbb{C} \simeq V^{1,0}\oplus V^{0,1}$$

where $V^{1,0}$ is the $+i$ eigenspace and $V^{0,1}$ the $-i$ eigenspace of $I$.

Then $(V,I)$ and $V^{1,0}$ are isomorphic as complex vector spaces via $X\mapsto X -i I(X)$.

It seems to me that people are essentially confusing $V$ with $V\otimes \mathbb{C}$.

Now apply the above to $V=T_{p,\mathbb{R}}$, the real tangent space at a point $p$. The isomorphism will send $\frac{\partial}{\partial x}$ to $\frac{\partial}{\partial z} = \frac{\partial}{\partial x} - i \frac{\partial}{\partial y}$, etc.

The complexified tangent space is generated over $\mathbb{C}$ by $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \overline{z}}$. Of course, if you’re restricting yourself to real vectors, i.e., $V=V\otimes \mathbb{R}\subset V\otimes \mathbb{C}$, there is a restriction – the reality condition – but not in general.

If one is looking at the differential, $df$, of a function $f: \mathbb{R}^2\to \mathbb{R}^2$, then (at a point $p$) we have $df_p\in \mathrm{Hom}(V,\mathbb{C})=V^\vee\otimes \mathbb{C}$.

Decomposing it into its (1,0) and (0,1) pieces – as above – gives exactly $\frac{\partial f}{\partial z}$ and $\frac{\partial f}{\partial \overline{z}}$.

Of course, if you insist on thinking of f as a function $R^2\to R^2$, you decompose the (real) differential – a $2\times 2$ matrix – into a part which commutes with the complex structure and a part which anticommutes.

BTW, that’s exactly how one usually remembers the Cauchy-Riemann equations – as $\frac{\partial f}{\partial \overline{z}}=0$. You rewrite your function $R^2\to R^2$ in terms of $z$ and $\overline{z}$ and see if there are any $\overline{z}$’s (provided it’s $R^2$-differentiable).
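To illustrate the recipe with a concrete (made-up) example: take $f(x, y) = (x^2 - y^2,\, 2xy)$. Substituting $x = \frac{z + \bar{z}}{2}$ and $y = \frac{z - \bar{z}}{2i}$ and simplifying,

$f = x^2 - y^2 + i\,(2xy) = (x + iy)^2 = z^2,$

with no $\bar{z}$ appearing, so $\frac{\partial f}{\partial \bar{z}} = 0$ and the Cauchy-Riemann equations hold. By contrast, $f(x, y) = (x, -y)$ rewrites as $\bar{z}$, which visibly fails the test.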

Maybe I should not have been so lazy and should’ve written the whole formula, that is,

$$df = f_z dz + f_\overline{z}d\overline{z}\in \Gamma (V^\vee\otimes C), V=R^2.$$

If this were a real function, then, of course, $f_z$ and $f_{\overline{z}}$ would have been conjugates of each other, but not otherwise.

One way to think about how to make $z$ and $\bar{z}$ independent variables can be motivated by focusing on the case of (non-complex-analytic) polynomials

$f(z) = \sum_{j,k} c_{j,k}\, x^j y^k$, where $z = x + iy$.

We can write $x = \frac{z+\bar{z}}{2}$ and $y = \frac{z-\bar{z}}{2i}$ to express this as a polynomial in $z$ and $\bar{z}$:

$f(z) = \sum_{j,k} d_{j,k}\, z^j \bar{z}^k$.

Now we see that $f(z)$ can be expressed as $F(z, \bar{z})$, where $F$ is a complex-analytic function of two complex variables:

$F(z, w) = \sum_{j,k} d_{j,k}\, z^j w^k$.

Note how the coupled variables $z, \bar{z}$ have now been replaced by two fully independent variables $z, w$. One can then verify that the derivatives $\frac{\partial f}{\partial z}, \frac{\partial f}{\partial \bar{z}}$ of $f$ correspond to the partial derivatives $\frac{\partial F}{\partial z}, \frac{\partial F}{\partial w}$ of $F$, as they should (and as per David’s rule of thumb that complex differentiation of non-analytic functions is the complex analogue of calculus of two variables).
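A small check of this substitution (my own code; the particular polynomial and the names `f`, `F` are made up for illustration): take $f(z) = x^2 + y^2 = |z|^2$, substitute $x = \frac{z+w}{2}$, $y = \frac{z-w}{2i}$ to get $F(z, w) = zw$, and verify that $F(z, \bar{z})$ recovers $f$, and that $\partial F/\partial z = w$ evaluated at $w = \bar{z}$ matches $\partial f/\partial z = \bar{z}$.

```python
def f(z):
    # x^2 + y^2 = |z|^2, a non-analytic polynomial in x and y
    return z.real ** 2 + z.imag ** 2

def F(z, w):
    # x = (z+w)/2, y = (z-w)/(2i) substituted into x^2 + y^2; simplifies to z*w
    return ((z + w) / 2) ** 2 + ((z - w) / 2j) ** 2

z = 1.5 - 0.5j
print(F(z, z.conjugate()), f(z))  # both equal |z|^2 = 2.5

# partial of F in its first (independent) slot, then set w = conj(z)
h = 1e-6
dFdz = (F(z + h, z.conjugate()) - F(z - h, z.conjugate())) / (2 * h)
print(dFdz)  # ~ conj(z) = 1.5 + 0.5j
```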

I don’t know how to follow up Mr. Tao’s comment. Maybe I’ll just say “z is much more than a direction”. If you differentiate with respect to a real variable $x$, that means you are saying $f(x, y)$ increases at rate $\partial f / \partial x$ in the $x$ direction. Since analytic functions are locally conformal, when you differentiate with respect to $z$, you are saying “a small disc around $z_0$ is dilated and rotated by $f'(z_0)$”.

I was hoping to say that real derivatives measure change in a specific direction while complex derivatives measure “twisting” of the corresponding locally conformal maps. The clockwise and counterclockwise “twisting” of complex valued functions are independent.

A purely “anti-analytic” function would be one where locally all the disks switch orientation. Then let $f(z) = z + \overline{z} = 2\,\mathrm{Re}(z)$. It takes the whole complex plane and violently smushes it onto the real line. How about $f(z) = z + \epsilon \overline{z}$? Here, perfectly circular discs get smushed into things which are locally “oval” shaped.

So we have a continuum of possible images of small discs: starting from the counterclockwise circle, through counterclockwise ellipses, to a line segment, then on to clockwise ellipses and clockwise circles.

If I knew a map in the complex plane took local circles to local ellipses, how could I decompose it back into its perfectly circular “clockwise” (anti-analytic, $d\overline{z}$) and “counterclockwise” (analytic, $dz$) parts?

We have the definitions:

$\frac{\partial f}{\partial z} = \frac{1}{2\pi}\int_0^{2\pi} df(e^{i\theta})\, e^{-i\theta}\, d\theta, \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2\pi}\int_0^{2\pi} df(e^{i\theta})\, e^{i\theta}\, d\theta.$

The integrals can be recognized as the evaluation of the $\pm 1$ Fourier coefficients of $df$. Indeed, $df$ restricted to the unit circle is

$df(e^{i\theta}) = \frac{\partial f}{\partial z}\, e^{i\theta} + \frac{\partial f}{\partial \bar{z}}\, e^{-i\theta}.$

The image of the unit circle is an ellipse (possibly degenerated into a line segment). Since $df$ is a single frequency function (“monochromatic”), the Nyquist sampling theorem tells us that it can be fully reconstructed by sampling it twice in a half-period. Thus, the integrals can be replaced by two-term sums. We get

$\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right), \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right).$
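A tiny sketch of this two-sample reconstruction (my own code; the function name is mine): given only $df(1)$ and $df(i)$ for a linear map $df(v) = a v + b \bar{v}$, the two-term averages recover $a$ (the $dz$, counterclockwise part) and $b$ (the $d\bar{z}$, clockwise part).

```python
def parts_from_two_samples(df1, dfi):
    """For df(v) = a*v + b*conj(v): df(1) = a + b and df(i) = i*(a - b),
    so a = (df(1) - i*df(i))/2 and b = (df(1) + i*df(i))/2."""
    a = 0.5 * (df1 - 1j * dfi)
    b = 0.5 * (df1 + 1j * dfi)
    return a, b

df = lambda v: 2 * v + 0.3 * v.conjugate()  # sends circles to ellipses
a, b = parts_from_two_samples(df(1), df(1j))
print(a, b)  # recovers a = 2 (analytic part), b = 0.3 (anti-analytic part)
```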

The dee zee bar notation seems to have been introduced by Wirtinger in a 1927 paper in the Math. Annalen, vol. 97, pages 357–375.

Mohan

I am a little confused. In R^2-space, isn’t every smooth function g(z) always analytic?

Smooth means that all derivatives exist. This neither implies that $g$ is complex analytic (consider $g(z) = \bar{z}$) nor even that it is real analytic (consider $g(x+iy) = e^{-1/x^2}$ for $x \neq 0$, extended by $0$).
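To see the real-analytic failure concretely (my illustration, not part of the original exchange): $g(x) = e^{-1/x^2}$, with $g(0) = 0$, vanishes at $0$ faster than any power of $x$, so every Taylor coefficient at $0$ is zero, even though $g$ is not the zero function.

```python
import math

def g(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# g(x)/x^n stays tiny near 0 for every fixed n, so all Taylor coefficients
# of g at 0 vanish; a nonzero analytic function can't do that.
for n in (1, 5, 20):
    print(n, g(0.1) / 0.1**n)

print(g(0.5))  # yet g is visibly nonzero away from 0
```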

Oh, I must have confused “smoothness” with “differentiability”. I thought they meant the same thing. At least in the 1-D case, smooth functions are always differentiable, right? Thanks, David.

I think you’re still confused. In the usual 1-dimensional real case, here are the meanings:

Differentiable: You have a derivative.

Smooth: You have a derivative, which in turn has a derivative, which in turn has a derivative, etc.

Analytic: You’re given by a convergent power series.

Analytic is stronger than smooth which is stronger than differentiable.

For complex functions there’s the additional issue of whether you’re talking about the complex derivative or partial derivatives which is what David is discussing.

I agree with you, Noah. Smoothness implies differentiability. So what’s wrong with “smooth functions are always differentiable”, in the 1D case?

Noah, I’d say it’s a little more subtle than that. What you call “smooth” is “infinitely differentiable”. I read “smooth” as a term of art, meaning “has as many derivatives as I’ll happen to need in the coming application”.

Yes, infinitely differentiable satisfies that, but.. well, I see a distinction there.

Just because a function of a complex variable is differentiable when thought of as a function of two variables (the real and imaginary parts) does not mean that it is complex differentiable. The issue here isn’t your misunderstanding how smooth/differentiable/analytic interact, it’s that you’re getting confused about what it means to be complex differentiable.

Try this wikipedia page for some information on what complex differentiable means: http://en.wikipedia.org/wiki/Cauchy-Riemann_equations

My last comment was written in response to 17. Your statement in 12 (which is just false) is very different from your statement in 17 (which is true for real functions, but possibly confusing when you’re thinking about complex functions).

Let $f$ be a function from $\mathbb{C}$ to itself. Here are several properties $f$ could have. I’ll give them longer names than they usually get, in hopes of being especially clear:

Real differentiable: $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ exist.

Real smooth: $\frac{\partial^{i+j} f}{\partial x^i\, \partial y^j}$ exists for all $i$ and $j$.

Real analytic: Near any point $(x_0, y_0)$, $f$ is locally given by a convergent power series of the form $\sum_{i,j} a_{ij} (x - x_0)^i (y - y_0)^j$.

Complex differentiable, meaning that $\lim_{h \to 0} \frac{f(z+h) - f(z)}{h}$ exists. Here $h$ is allowed to approach zero in any direction in $\mathbb{C}$.

Complex smooth: Define the limit $\lim_{h \to 0} \frac{f(z+h) - f(z)}{h}$, should it exist, to be $Df$.* Complex smooth means that $Df$ and $D^2 f$ and $D^3 f$ and so on all exist.

Complex analytic: $f$ is locally given by a convergent power series of the form $\sum_i a_i (z - z_0)^i$.

**Theorem:** Complex differentiable, complex smooth and complex analytic are equivalent. We have the implications (complex analytic) → (real analytic) → (real smooth) → (real differentiable). The reverse implications do not hold.

In your first post, I wasn’t sure whether you intended analytic to mean (real analytic) or (complex analytic), so I gave counter-examples to both. The confusion which Noah is addressing is that I have seen differentiable used to mean both (real differentiable) and (complex differentiable). I have never seen “complex smooth” used before; I just made it up.

Does that clarify the situation?

* If $\lim_{h \to 0} \frac{f(z+h) - f(z)}{h}$ exists, then $\frac{\partial f}{\partial z}$ also exists and is equal to it. However, I want to be consistent with the rest of this post and define $\frac{\partial f}{\partial z}$ to mean the coefficient of $dz$ when $df$ is written in the $dz$, $d\bar{z}$ basis. Using that definition, it can happen that $\frac{\partial f}{\partial z}$ exists but $Df$ does not: namely, whenever $f$ is real differentiable but not complex differentiable. So I need a different notation in order to mean the limit $\lim_{h \to 0} \frac{f(z+h) - f(z)}{h}$.

Noah: You may be faster, but I am more verbose!

John: I thought that “smooth” meant “infinitely differentiable” and “sufficiently smooth” meant “as differentiable as necessary”. But I’m very far from an analyst.

Verbose s/b thorough.

Wikipedia agrees with David and me, while MathWorld agrees with John Armstrong. Lang’s textbooks seem to avoid the phrase “smooth function” entirely in favor of phrases synonymous to “sufficiently smooth.” So it seems pretty ambiguous.

I have yet to see a single reference that uses John Armstrong’s definition, and MathWorld notably fails to cite anything. On the side of “smooth = admitting continuous derivatives of all orders,” a cursory sample of my bookshelf yields Milnor’s Topology from the Differentiable Viewpoint, three Riemannian geometry texts (Sakai, do Carmo, and Gallot et al), and Strang’s Computational Science and Engineering. There’s also essentially every reference article on analytic D-modules.

However, some texts seem to make a point of not using the word smooth anywhere, perhaps to avoid potential confusion. Also, I don’t have any hardcore PDE texts.

Evans’s *Partial Differential Equations* uses smooth = infinitely differentiable.

I’m not an analyst either. And I don’t have references at hand at the moment, but it’s not ever a really rigorous definition. That’s part of what “term of art” means.

What I *have* seen (and more than once) is language in prefaces to the effect of “We’re going to say ‘smooth’, because we don’t want to worry about differentiability hypotheses here. If you take this to mean $C^\infty$, it’ll all work out.” And then they use it to mean what others are calling “sufficiently smooth”. From my reading patterns it’s most likely I’ve run across this in differential geometry books, so I’ll go out on a limb and say that’s probably where I’ve seen it.

My pedantry makes me want to note that I gave the wrong definition of real differentiability. I am pretty sure that a function should be called (real) differentiable at $(x_0, y_0)$ if there are constants $a$ and $b$ such that

$f(x, y) = f(x_0, y_0) + a (x - x_0) + b (y - y_0) + o\left( \left| (x - x_0, y - y_0) \right| \right)$.

This is slightly stronger than assuming that $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ exist. For example, consider the function

$f(x, y) = \frac{xy}{x^2 + y^2}$ for $(x, y) \neq (0,0)$, with $f(0, 0) = 0$.

This isn’t relevant to the original discussion though – the worst functions I want to think about are the real smooth functions.
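A quick numeric look (my code) at the standard example $h(x, y) = \frac{xy}{x^2+y^2}$, $h(0,0) = 0$: both partial derivatives at the origin exist (the function vanishes identically on both axes), yet along the diagonal $x = y$ the function is constantly $\frac{1}{2}$, so it is not even continuous at the origin, let alone differentiable.

```python
def h(x, y):
    return x * y / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

print(h(1e-9, 0.0), h(0.0, 1e-9))  # 0.0 on the axes: both partials at (0,0) exist
print(h(1e-9, 1e-9))               # 0.5 on the diagonal, arbitrarily close to (0,0)
```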

Differentiability (i.e., existence of a good linear approximation) is also slightly weaker than $C^1$, which demands continuous partial derivatives. A standard example between the two is $f(x, y) = (x^2 + y^2) \sin\left( \frac{1}{\sqrt{x^2+y^2}} \right)$, $f(0,0) = 0$. The fact that there are three different notions confused a lot of my calculus students last term.

http://www.math.harvard.edu/~ctm/home/text/class/harvard/55b/09/html/home/hw/hw8.pdf

See question 8.

I’m rather curious about why McMullen uses d instead of \partial, but I’d feel weird asking someone who doesn’t know my name.