What's up with dee zee bar?

When I first tried to read Griffiths and Harris's Principles of Algebraic Geometry, I was baffled by formulas like (\partial/\partial z)|z| = (1/2) \bar{z}/|z|. The absolute value function wasn't analytic, so its derivative with respect to z shouldn't even have been defined. And what were all these d\bar{z}'s I was seeing? Why didn't they seem to be equal to (\partial \bar{z}/\partial z) dz?

Maybe I’m the only person who was confused by this. But, if this stuff bothers you too, then this post is for you.

In algebraic geometry, the most important functions are the analytic functions. (In this post, "analytic" means "complex analytic".) Indeed, much of the progress in algebraic geometry over the last fifty years has come from learning how to study the geometry of algebraic varieties using only the algebraic, and hence analytic, functions on those varieties. This is especially necessary for those who want to prove results over fields other than \mathbb{C}.

Before learning these ideas, though, one should probably learn how to study smooth functions on complex varieties. In particular, de Rham theory is much nicer if we allow all smooth functions, rather than restricting to just the analytic ones. (To get a few hints of why, remember that a bounded analytic function on \mathbb{C} is constant, and a nonzero analytic function never has compact support.)

So, algebraic geometers have developed a notation which allows them to work with smooth functions that are not analytic. At the same time, analytic functions do play a special role in the theory, so the notation is particularly adapted to work well with analytic functions. This can be confusing to the beginner (it was for me!) because it is easy to memorize results which hold only in the analytic case and try to apply them in the smooth case.

In the rest of this post, I will explain this notation. I will assume you are familiar with differential forms; if you are not, I recommend Terry Tao’s PCM article.

To start out with, suppose that we have a smooth function from \mathbb{C} to \mathbb{C}. For example: f(x+iy) = (x^2-y^2) + 2xyi. Then we can take its differential and get a differential form df. When we evaluate df on a tangent vector v, and at a point z, we get a measure of how the function f changes between z and z + \epsilon v, for real \epsilon. For example, with f as above, we have df = (2x+2yi) dx + (-2y+2xi) dy. Of course, df is a complex-valued one-form, because f is a complex-valued function, but we can still think of df as measuring change along perfectly ordinary tangent vectors.
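If you like checking such computations by machine, here is a minimal sketch (assuming the sympy library; an editorial addition, not part of the original post) that recovers the coefficients of dx and dy computed above.

```python
# A sketch, assuming sympy: verify the differential df computed above.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x**2 - y**2) + 2*x*y*sp.I   # the example function f(x+iy)

# df = f_x dx + f_y dy; the coefficients should be 2x+2yi and -2y+2xi.
print(sp.expand(sp.diff(f, x)))  # 2*x + 2*I*y
print(sp.expand(sp.diff(f, y)))  # -2*y + 2*I*x
```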

We could write df as a(x,y) dx + b(x,y) dy. However, some experience shows that it is better to express one-forms in terms of dz and d\bar{z}. What do these symbols mean? Well, z and \bar{z} are complex-valued functions on \mathbb{C}, so their differentials are one-forms. One can check that their differentials are everywhere linearly independent, so every one-form can be written uniquely as a linear combination of dz and d\bar{z}. For example, the function f above is just z^2, so df = 2z dz. Suppose instead that I had considered g = z \bar{z} = x^2 + y^2; then dg = \bar{z} dz + z d\bar{z}.
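To make the change of basis concrete: since dz = dx + i dy and d\bar{z} = dx - i dy, writing df = a dz + b d\bar{z} forces a = (f_x - i f_y)/2 and b = (f_x + i f_y)/2. Here is a small sketch (assuming sympy; the helper name dz_coeffs is mine, not standard) that checks both examples.

```python
# A sketch, assuming sympy: rewrite df in the dz, dzbar basis.
import sympy as sp

x, y = sp.symbols('x y', real=True)

def dz_coeffs(f):
    """Return (a, b) with df = a dz + b dzbar, via a = (f_x - i f_y)/2 etc."""
    f_x, f_y = sp.diff(f, x), sp.diff(f, y)
    return (sp.simplify((f_x - sp.I*f_y)/2), sp.simplify((f_x + sp.I*f_y)/2))

f = (x**2 - y**2) + 2*x*y*sp.I   # this is z^2
g = x**2 + y**2                  # this is z zbar

print(dz_coeffs(f))  # (2*x + 2*I*y, 0):    i.e. df = 2z dz
print(dz_coeffs(g))  # (x - I*y, x + I*y):  i.e. dg = zbar dz + z dzbar
```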

This illustrates the general principle: If f is an analytic function, then df=(df/dz) dz, where df/dz is the derivative you learned in your first complex analysis course. The function df/dz will also be analytic. On the other hand, if g is a smooth, but not analytic, function, then dg will be of the form a dz + b d\bar{z}. Neither a nor b will necessarily be analytic.

In general, when you are working with analytic functions, all the rules you learned in single variable calculus work: the sum rule, the product rule, the chain rule and so forth. On the other hand, when you are working with smooth but nonanalytic functions, everything works the way you learned in multivariable calculus.  In particular, this explains my confusion above about why d \bar{z} isn’t (\partial \bar{z}/\partial z) dz; it’s the same reason that, writing x and y for the coordinates on \mathbb{R}^2, the one-form dy isn’t (\partial y/\partial x) dx.

One-forms of the form a dz are called (1,0)-forms, while one-forms of the form b d\bar{z} are called (0,1)-forms. More generally, if we are working with functions of n complex variables, we will have (p,q)-forms, for 0 \leq p, q \leq n. In coordinates, a (p,q)-form is a form that can be written as a sum of terms, each a smooth function times

dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar{z}_{j_1} \wedge \cdots \wedge d \bar{z}_{j_q}.

More conceptually, a (p,q)-form is a (p+q)-form \eta such that

\eta(e^{i \theta} v_1, e^{i \theta} v_2, \ldots, e^{i \theta} v_{p+q}) = e^{i (p-q) \theta} \eta(v_1, v_2, \ldots, v_{p+q}),

for any p+q tangent vectors v_1, v_2, \ldots, v_{p+q}. (This matches the coordinate description: each dz_i scales by e^{i \theta} under v \mapsto e^{i \theta} v, and each d\bar{z}_j by e^{-i \theta}.)
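As a quick sanity check on this characterization, the following sketch (assuming numpy, and encoding tangent vectors to \mathbb{C} as complex numbers, so dz(v) = v and d\bar{z}(v) = \bar{v}) verifies that dz \wedge d\bar{z} transforms with the factor e^{i(p-q)\theta} = 1 expected of a (1,1)-form.

```python
# A sketch, assuming numpy: the rotation rule for eta = dz wedge dzbar.
import numpy as np

def eta(v, w):
    # (dz wedge dzbar)(v, w) = dz(v) dzbar(w) - dz(w) dzbar(v)
    return v*np.conj(w) - w*np.conj(v)

v, w = 1.0 + 2.0j, -0.5 + 0.3j
r = np.exp(1j * 0.7)   # rotation by theta = 0.7

# For p = q = 1 the factor e^{i(p-q)theta} is 1, so eta should be invariant:
print(np.isclose(eta(r*v, r*w), eta(v, w)))  # True
```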

This seems like a good point to distinguish two concepts which confused me when I was learning this material. A (p,0)-form is a sum of terms, each a smooth function times dz_{i_1} \wedge \cdots \wedge dz_{i_p}. A holomorphic p-form is a sum of terms, each an analytic function times dz_{i_1} \wedge \cdots \wedge dz_{i_p}. Both of them can intuitively be thought of as "a form which is purely holomorphic", but they make this concept rigorous in different ways. For example, \bar{z} dz is a (1,0)-form but not a holomorphic 1-form, since its coefficient \bar{z} is smooth but not analytic.

Finally, what is \partial/\partial z? By definition,

df = (\partial f/\partial z) dz + (\partial f/\partial \bar{z}) d \bar{z}.

Notice that this equation makes sense: df, dz and d\bar{z} are all one-forms, whose meaning we know. The expressions \partial f/\partial z and \partial f/\partial \bar{z} denote complex-valued functions of z, which are determined by the above equation. When f is analytic, \partial f/\partial z = \lim_{h \to 0} (f(z+h) - f(z))/h. But, when f is merely smooth, you pretty much have to fall back on the definition.
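Unwinding the definition as in the sketches above gives \partial f/\partial z = \frac{1}{2}(\partial f/\partial x - i \partial f/\partial y), which makes the formula from the introduction easy to test numerically. Here is a sketch (assuming numpy; the helper name wirtinger_dz is mine) checking that (\partial/\partial z)|z| = (1/2) \bar{z}/|z|.

```python
# A sketch, assuming numpy: approximate df/dz = (f_x - i f_y)/2 by
# central differences and compare with (1/2) zbar/|z| for f(z) = |z|.
import numpy as np

def wirtinger_dz(f, z, h=1e-6):
    f_x = (f(z + h) - f(z - h)) / (2*h)
    f_y = (f(z + 1j*h) - f(z - 1j*h)) / (2*h)
    return (f_x - 1j*f_y) / 2

z0 = 1.0 + 2.0j
print(wirtinger_dz(abs, z0))       # numerical approximation
print(np.conj(z0) / (2*abs(z0)))   # (1/2) zbar/|z| -- should agree
```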

If you are still confused by all this notation, I recommend trying to read a book which uses a lot of it, thinking back frequently to the definitions to make sure everything makes sense. Pretty soon, everything will seem obvious and second nature. At that point, you’ll be ready to confuse everyone else!

32 thoughts on “What's up with dee zee bar?”

  1. Nice explanation. I especially liked your discussion of when single-variable calculus is the right analogy and when multi-variable calculus is the right analogy. I remember being puzzled about this sort of thing when I took complex analysis, sorting out when it helps to think of a complex variable as two real variables and when it leads you astray.

  2. I'm always confused about using complex vector fields. Another thing which is correct in this multivariable calculus analogy is that the fields \frac{\partial}{\partial z} and \frac{\partial}{\partial \overline{z}} commute, so you can regard z and \overline{z} as independent variables, and computations with them become easier. For instance, with real x, y you have \frac{\partial}{\partial y}(f(x)g(y))=f(x)\frac{\partial}{\partial y}g(y). So, going complex, you'll have \frac{\partial}{\partial z}(\overline{z}z)=\overline{z}\frac{\partial}{\partial z}z=\overline{z}.
    Applying this rule freaks me out completely, and I have to recheck the result in other ways. How can you regard z and \overline{z} as independent variables???

  3. Just wanted to note that I took the liberty of cleaning up some of Boris’s LaTeX, and fixed a sign error in the original post.

    I’m not sure I have a good intuitive answer to Boris’s question. As I suggested in the post, this is the sort of thing that I just have to deal with by checking things from the definitions a bunch of times, until it becomes so intuitive that it no longer bothers me.

  4. @ Boris
    My understanding of the situation is as follows:
    The pairs of operators (\partial/\partial z, \partial/\partial \bar{z}) and (\partial/\partial x, \partial/\partial y) [and their duals] are equivalent as complex-valued operators on the space of real- or complex-valued smooth functions on the real plane (up to some minor signs and constants). As a result, when one is interested strictly in the analytic properties of these variables, they can easily be thought of as independent variables.
    An analogous situation in algebra is the polarization identities: if a norm (on a vector space) comes from an inner product, then you can recover the inner product from the norm. This is also true for higher dimensional alternating forms and their restriction to the diagonal (though the corresponding formulas are more complicated).

  5. The fundamental problem of defining a derivative in the complex plane is that one can approach the base point from all different directions. The usual solution is to demand that the derivative limit agrees along any chosen direction (actually, along any sequence). This is the analytic derivative d/dz we all know. But another route is to instead define a derivative as the average of the directional derivatives taken over all directions \theta \in (0, \pi). This is the partial derivative \partial/\partial z.

    If the function is R^2-differentiable (and of course it is), then the averaging integral reduces to a two-term average over two independent directions. These can be taken to be along the real and imaginary axes, and we get $\frac{\partial f}{\partial z}=\frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{i}\frac{\partial f}{\partial y}\right)$. The definition of \partial/\partial\bar{z} is exactly the same but with, as the notation suggests, a conjugation in the denominator of the limit quotient. Reduced to a two-term average, we get $\frac{\partial f}{\partial\bar{z}}=\frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{-i}\frac{\partial f}{\partial y}\right)$.

    What are dz and d\bar{z}? dz is a complex coordinate on a tangent space and d\bar{z} is its conjugated value. dz is a complex-linear 1-form and d\bar{z} is an antilinear 1-form. dz and d\bar{z} are functionally fully dependent, yet at the very same time linearly independent.

  6. All of this confusion has its roots in linear algebra, not in
    calculus, so checking, e.g., Huybrechts's "Complex Geometry" may alleviate some of the pain.
    If you have (V,I), a real vector space with an almost complex structure, there is an isomorphism
    $$ V\otimes_R C \simeq V^{1,0}\oplus V^{0,1}$$
    where $V^{1,0}$ is the $+i$ eigenspace and $V^{0,1}$ the
    $-i$ eigenspace of $I$.
    Then $(V,I)$ and $V^{1,0}$ are isomorphic as complex vector spaces via $X\mapsto X - i I(X)$.
    It seems to me that people are essentially confusing $V$ with
    $V\otimes C$.
    Now apply the above to $V=T_{p,R}$, the real tangent space at a point p. The isomorphism will send $\frac{\partial}{\partial x}$ to
    $\frac{\partial}{\partial x} - i \frac{\partial}{\partial y} = 2\frac{\partial}{\partial z}$, etc.
    The complexified tangent space is generated over $C$ by
    $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \overline{z}}$. Of course, if you're restricting yourself to real vectors, i.e., $V=V\otimes R\subset V\otimes C$, there is a constraint (the reality condition), but not in general.
    If one is looking at the differential, df, of a function
    $f: R^2\to R^2$, then (at a point p) we have
    $df_p\in Hom (V,C)=V^\vee\otimes C$.
    Decomposing it into its (1,0) and (0,1) pieces, as above, gives exactly $\frac{\partial f}{\partial z}$ and
    $\frac{\partial f}{\partial \overline{z}}$.
    Of course, if you insist on thinking of f as a function $R^2\to R^2$, you decompose the (real) differential, a $2\times 2$ matrix, into a part which commutes with the complex structure and a part which anticommutes; a sketch of this decomposition appears below.
    BTW, that's exactly how one usually remembers the Cauchy-Riemann equations: as $\frac{\partial f}{\partial \overline{z}}=0$. You rewrite your function $R^2\to R^2$ in terms of $z$ and $\overline{z}$ and see if there are any $\overline{z}$'s (provided it's R^2-differentiable).
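    Here is a minimal numerical sketch (assuming numpy; an editorial addition, not part of the original comment) of that commuting/anticommuting decomposition of a real 2x2 differential.

    ```python
    # A sketch, assuming numpy: split a real 2x2 matrix into the part
    # commuting with the complex structure J (the dz part) and the part
    # anticommuting with J (the dzbar part).
    import numpy as np

    J = np.array([[0.0, -1.0], [1.0, 0.0]])   # multiplication by i on R^2
    A = np.array([[1.0, 2.0], [3.0, 4.0]])    # some real differential df

    A_comm = (A - J @ A @ J) / 2   # commutes with J
    A_anti = (A + J @ A @ J) / 2   # anticommutes with J

    print(np.allclose(A_comm @ J, J @ A_comm))    # True
    print(np.allclose(A_anti @ J, -J @ A_anti))   # True
    print(np.allclose(A_comm + A_anti, A))        # True
    ```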

  7. Maybe I should not have been so lazy and should’ve written the whole formula, that is,
    $$df = f_z dz + f_{\overline{z}}d\overline{z}\in \Gamma (V^\vee\otimes C), \quad V=R^2.$$
    If this were a real function, then, of course, $f_z$ and $f_{\overline{z}}$ would have been conjugates of each other, but not otherwise.

  8. One way to think about making z and \overline{z} independent variables is to focus on the case of (non-complex-analytic) polynomials

    f(x+iy) = \sum_{j=0}^d \sum_{k=0}^d c_{j,k} x^j y^k.

    We can write x = \frac{z+\overline{z}}{2} and y = \frac{z-\overline{z}}{2i} to express this as a polynomial in z and \overline{z}:

    f(z) = \sum_{j=0}^d \sum_{k=0}^d c_{j,k} (\frac{z+\overline{z}}{2})^j (\frac{z-\overline{z}}{2i})^k.

    Now we see that f(z) can be expressed as F(z,\overline{z}), where F(z,w) is a complex-analytic function of two complex variables:

    F(z,w) = \sum_{j=0}^d \sum_{k=0}^d c_{j,k} (\frac{z+w}{2})^j (\frac{z-w}{2i})^k.

    Note how the coupled variables z, \overline{z} have now been replaced by two fully independent variables z, w. One can then verify that the derivatives \partial_z f, \partial_{\overline{z}} f of f correspond to the partial derivatives \partial_z F, \partial_w F of F, as they should (and as per David’s rule of thumb that complex differentiation of non-analytic functions is the complex analogue of calculus of two variables).
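    Here is a small symbolic sketch of this substitution (assuming sympy; the particular polynomial below is an arbitrary choice, not from the comment).

    ```python
    # A sketch, assuming sympy: substitute x = (z+w)/2, y = (z-w)/(2i),
    # so that w plays the role of zbar, and check that d/dz of F matches
    # the Wirtinger derivative of f.
    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    z, w = sp.symbols('z w')

    f = x**3 + x*y + y**2   # an arbitrary non-analytic polynomial

    F = f.subs({x: (z + w)/2, y: (z - w)/(2*sp.I)}, simultaneous=True)

    # Wirtinger derivative from real partials...
    df_dz = (sp.diff(f, x) - sp.I*sp.diff(f, y))/2
    # ...versus the ordinary partial of F in z, pulled back to (x, y):
    dF_dz = sp.diff(F, z).subs({z: x + sp.I*y, w: x - sp.I*y},
                               simultaneous=True)

    print(sp.simplify(df_dz - dF_dz))  # 0
    ```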

  9. I don't know how to follow up Mr. Tao's comment. Maybe I'll just say "z is much more than a direction". If you differentiate with respect to a real variable x, that means you are saying $f(x, y)$ increases at rate $\partial f / \partial x$ in the x direction. Since analytic functions are locally conformal, when you differentiate with respect to $z$, you are saying "a small disc around $z_0$ is dilated and rotated by $f'(z_0)$".

    I was hoping to say that real derivatives measure change in a specific direction while complex derivatives measure "twisting" of the corresponding locally conformal maps. The clockwise and counterclockwise "twisting" of complex-valued functions are independent.

    A purely "anti-analytic" function would be one where locally all the disks switch orientation. Then let $f(z) = z + \overline{z} = 2 Re(z)$. It takes the whole complex plane and violently smushes it to the real line. How about $f(z) = z + \epsilon \overline{z}$? Here, perfectly circular discs get smushed into things which are locally "oval" shaped.

    So we have a continuum of possible images of small discs: starting from counterclockwise circles, through counterclockwise ellipses, to a line segment, then on to clockwise ellipses and clockwise circles.

    If I knew a map in the complex plane took local circles to local ellipses, how could I decompose it back into its perfectly circular "clockwise" (anti-analytic, $d\overline{z}$) and "counterclockwise" (analytic, $dz$) parts?

  10. We have the definitions:

    \frac{df}{dz} = \lim_{h\rightarrow 0}\frac{f(z+h)-f(z)}{h}

    \frac{df}{d\bar{z}} = \lim_{h\rightarrow 0}\frac{f(z+h)-f(z)}{\bar{h}}

    \frac{\partial f}{\partial z} = \frac{1}{\pi}\int_0^\pi\lim_{r\rightarrow 0}\frac{f(z+re^{i\theta})-f(z)}{re^{i\theta}}\,d\theta

    \frac{\partial f}{\partial\bar{z}} = \frac{1}{\pi}\int_0^\pi\lim_{r\rightarrow 0}\frac{f(z+re^{i\theta})-f(z)}{re^{-i\theta}}\,d\theta

    The integrals can be recognized as the evaluation of the \pm 1 Fourier coefficients of df. Indeed, df restricted to the unit circle is

    df(e^{i\theta}) = \frac{\partial f}{\partial z}\,e^{i\theta} + \frac{\partial f}{\partial\bar{z}}\,e^{-i\theta}

    The image of the unit circle is an ellipse (possibly degenerated to a line segment). Since df is a single-frequency function ("monochromatic"), the Nyquist sampling theorem tells us that it can be fully reconstructed by sampling it twice in a half-period. Thus, the integrals can be replaced by two-term sums. We get

    \frac{\partial f}{\partial z} = \frac{1}{2}\sum_{\theta\in\{0,\pi/2\}}\lim_{r\rightarrow 0}\frac{f(z+re^{i\theta})-f(z)}{re^{i\theta}} = \frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{i}\frac{\partial f}{\partial y}\right)

    \frac{\partial f}{\partial\bar{z}} = \frac{1}{2}\sum_{\theta\in\{0,\pi/2\}}\lim_{r\rightarrow 0}\frac{f(z+re^{i\theta})-f(z)}{re^{-i\theta}} = \frac{1}{2}\left(\frac{\partial f}{\partial x}+\frac{1}{-i}\frac{\partial f}{\partial y}\right)
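    A quick numerical sketch (assuming numpy; an editorial addition) comparing the direction-averaged definition with the two-term formula, for the smooth non-analytic function f(z) = z\bar{z}:

    ```python
    # A sketch, assuming numpy: the average of directional difference
    # quotients over theta in (0, pi) matches (1/2)(f_x + f_y/i).
    import numpy as np

    f = lambda z: z * np.conj(z)
    z0, r = 0.7 + 0.3j, 1e-5

    thetas = np.linspace(0, np.pi, 2000, endpoint=False)
    dirs = np.exp(1j * thetas)
    avg = np.mean((f(z0 + r*dirs) - f(z0)) / (r*dirs))

    f_x = (f(z0 + r) - f(z0 - r)) / (2*r)
    f_y = (f(z0 + 1j*r) - f(z0 - 1j*r)) / (2*r)
    two_term = (f_x + f_y/1j) / 2

    print(avg, two_term)   # both close to zbar(z0) = 0.7 - 0.3j
    ```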

  11. The dee zee bar notation seems to have been introduced by
    Wirtinger in a 1927 paper in the Mathematische Annalen, vol. 97,
    pages 357-375.
    Mohan

  12. Oh, I must have confused "smoothness" with "differentiability". I thought they meant the same thing. At least in the 1-D case, smooth functions are always differentiable, right? Thanks, David.

  13. I think you're still confused. In the usual 1-dimensional real case, here are the meanings:

    Differentiable: You have a derivative.
    Smooth: You have a derivative, which in turn has a derivative, which in turn has a derivative, etc.
    Analytic: You’re given by a convergent power series.

    Analytic is stronger than smooth which is stronger than differentiable.

    For complex functions there’s the additional issue of whether you’re talking about the complex derivative or partial derivatives which is what David is discussing.

  14. I agree with you, Noah. Smoothness implies differentiability. So what's wrong with "smooth functions are always differentiable" in the 1-D case?

  15. Noah, I’d say it’s a little more subtle than that. What you call “smooth” is “infinitely differentiable”. I read “smooth” as a term of art, meaning “has as many derivatives as I’ll happen to need in the coming application”.

    Yes, infinitely differentiable satisfies that, but... well, I see a distinction there.

  16. Just because a function of a complex variable is differentiable when thought of as a function of two variables (the real and imaginary parts) does not mean that it is complex differentiable. The issue here isn't a misunderstanding of how smooth/differentiable/analytic interact; it's confusion about what it means to be complex differentiable.

    Try this wikipedia page for some information on what complex differentiable means: http://en.wikipedia.org/wiki/Cauchy-Riemann_equations

  17. My last comment was written in response to 17. Your statement in 12 (which is just false) is very different from your statement in 17 (which is true for real functions, but possibly confusing when you’re thinking about complex functions).

  18. Let f be a function from \mathbb{C} to itself. Here are several properties f could have. I’ll give them longer names than they usually get, in hopes of being especially clear:

    Real differentiable: \partial f/\partial x and \partial f/\partial y exist.

    Real smooth: \partial^{i+j} f/(\partial x)^i (\partial y)^j exists for all (i,j).

    Real analytic: Near any point x_0+i y_0, f(x+iy) is locally given by a convergent power series of the form \sum a_{jk} (x-x_0)^j (y-y_0)^k.

    Complex differentiable, meaning that \lim_{h \to 0} (f(z+h)-f(z))/h exists. Here h is allowed to approach zero in any direction in \mathbb{C}.

    Complex smooth: Define the limit \lim_{h \to 0} (f(z+h)-f(z))/h, should it exist, to be f'(z).* Complex smooth means that f'(z) and f''(z) and f'''(z) and so on all exist.

    Complex analytic: f is locally given by a convergent power series of the form \sum a_n z^n.

    Theorem: Complex differentiable, complex smooth and complex analytic are all equivalent.

    We have the implications (complex analytic) → (real analytic) → (real smooth) → (real differentiable). The reverse implications do not hold.

    In your first post, I wasn’t sure whether you intended analytic to mean (real analytic) or (complex analytic), so I gave counter-examples to both. The confusion which Noah is addressing is that I have seen differentiable used to mean both (real differentiable) and (complex differentiable). I have never seen “complex smooth” used before, I just made it up.

    Does that clarify the situation?

    * If f'(z) exists, then \partial f/\partial z also exists and is equal to it. However, I want to be consistent with the rest of this post and define \partial f/\partial z to mean the coefficient of dz when df is written in the dz, d\bar{z} basis. Using that definition, it can happen that \partial f/\partial z exists but f'(z) does not, namely, whenever f is real differentiable but not complex differentiable. So I need a different notation in order to mean the limit \lim_{h \to 0} (f(z+h)-f(z))/h.
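    (A tiny numeric sketch, pure Python and my own example rather than David's, of the gap between real smooth and complex differentiable: f(z) = |z|^2 is real smooth, yet its complex difference quotient depends on the direction of approach.)

    ```python
    # A sketch: for f(z) = |z|^2, the quotient (f(z+h) - f(z))/h has
    # different limits along different directions, so f is not complex
    # differentiable (away from z = 0), despite being real smooth.
    f = lambda z: abs(z)**2
    z0, r = 1.0 + 1.0j, 1e-6

    for h in (r, 1j*r):                 # real and imaginary directions
        print((f(z0 + h) - f(z0)) / h)  # ~2 along r, ~-2j along 1j*r
    ```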

  19. Noah: You may be faster, but I am more verbose!

    John: I thought that "smooth" meant "infinitely differentiable" and "sufficiently smooth" meant "as differentiable as necessary". But I'm very far from an analyst.

  20. "Verbose" should be "thorough."

    Wikipedia agrees with David and me, while MathWorld agrees with John Armstrong. Lang's textbooks seem to avoid the phrase "smooth function" entirely, in favor of phrases synonymous with "sufficiently smooth." So it seems pretty ambiguous.

  21. I have yet to see a single reference that uses John Armstrong's definition, and MathWorld notably fails to cite anything. On the side of "smooth = admitting continuous derivatives of all orders," a cursory sample of my bookshelf yields Milnor's Topology from the Differentiable Viewpoint, three Riemannian geometry texts (Sakai, do Carmo, and Gallot et al.), and Strang's Computational Science and Engineering. There's also essentially every reference article on analytic D-modules.

    However, some texts seem to make a point of not using the word smooth anywhere, perhaps to avoid potential confusion. Also, I don’t have any hardcore PDE texts.

  22. I’m not an analyst either. And I don’t have references at hand at the moment, but it’s not ever a really rigorous definition. That’s part of what “term of art” means.

    What I have seen (and more than once) is language in prefaces to the effect of “We’re going to say ‘smooth’, because we don’t want to worry about differentiability hypotheses here. If you take this to mean C^\infty it’ll all work out.” And then they use it to mean what others are calling “sufficiently smooth”.

    From my reading patterns it’s most likely I’ve run across this in differential geometry books, so I’ll go out on a limb and say that’s probably where I’ve seen it.

  23. My pedantry makes me want to note that I gave the wrong definition of real differentiability. I am pretty sure that a function g: \mathbb{R}^2 \to \mathbb{R} should be called (real) differentiable at (0,0) if there are constants a and b such that

    g(x,y) = g(0,0) + a x + b y + o(|x|+|y|).

    This is slightly stronger than assuming that \lim_{h \to 0} (g(h,0)-g(0,0))/h and \lim_{h \to 0} (g(0, h) - g(0, 0))/h exist. For example, consider the function

    g(x,y) = xy(x+y)/(x^2+y^2), g(0,0)=0.

    This isn't relevant to the original discussion, though; the worst functions I want to think about are the real smooth functions.
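    A numerical sketch (pure Python; an editorial addition) of the counterexample above:

    ```python
    # Both partials of g exist (and vanish) at the origin, yet g admits
    # no good linear approximation there: along the diagonal g(t,t) = t.
    def g(x, y):
        return 0.0 if x == y == 0 else x*y*(x + y)/(x**2 + y**2)

    h = 1e-8
    print((g(h, 0) - g(0, 0)) / h)   # 0.0: the partial in x exists
    print((g(0, h) - g(0, 0)) / h)   # 0.0: the partial in y exists

    t = 1e-8
    print(g(t, t) / (2*abs(t)))      # 0.5: the error is not o(|x|+|y|)
    ```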

  24. Differentiability (i.e., existence of a good linear approximation) is also slightly weaker than C^1, which demands continuous partial derivatives. The standard example separating the two is x^2 \cos (1/x) (extended by 0 at x = 0). The fact that there are three different notions confused a lot of my calculus students last term.

  25. I’m rather curious about why McMullen uses d instead of \partial, but I’d feel weird asking someone who doesn’t know my name.
