Universality, or What Does Renormalizable Quantum Field Theory Actually Compute?

Most of the quantum field theory textbooks I’ve read are backwards.

They begin by talking about various classical fields, then they introduce path integrals and develop perturbative scattering theory, then they do a bunch of lowest-order calculations. This works alright, but all hell breaks loose when they try to do higher order computations — divergent integrals everywhere. At this point, the textbook introduces a clever distinction: the “bare” parameters in the Lagrangian are not the physical coupling constants. The physical coupling constants are the ones we measure, and they’re a very complicated combination of the bare parameters. And (here’s the miracle) if we systematically eliminate the bare parameters from our computations in favor of the physical ones, then we find, in renormalizable QFTs, that all those nasty infinities cancel out, leaving us with nice finite answers. (Of course, this only works for renormalizable theories, but for some reason, these are the ones that nature seems to use.)

This is the historical way of doing things, and it’s also the way most computations are done. But conceptually, it’s crazy. What are we actually computing? The story keeps changing!

There is actually a clear answer to this question. Unfortunately most QFT books won’t tell you until after page 300 or so. But come below the fold, and I’ll tell you in only a few paragraphs.

There are two things you have to keep in mind:

First, the basic behavior of quantum systems is expressed by Feynman’s “sum over histories” formalism. Systems evolve every which way they can, and the various possibilities interfere with one another to create the reality we see. Mathematically, we’re interested in computing the moments of the “measure”

\int_{\mathcal{F}} df e^{-S(f)/\hbar}

where \mathcal{F} is some space of classical fields and S: \mathcal{F} \to \mathbb{R} is the classical action.
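As a concrete (and entirely toy) illustration, here is what computing those moments looks like in zero dimensions, where \mathcal{F} is just the real line and the path integral collapses to an ordinary integral. The action, the parameter values, and the Python/scipy setup below are arbitrary choices of mine, just to make the formula tangible.

import numpy as np
from scipy.integrate import quad

# Zero-dimensional toy model: the space of fields F is just the real line,
# so the "path integral" is an ordinary integral over a single variable f.
hbar, m2, lam = 1.0, 1.0, 0.5   # illustrative parameter values, nothing canonical

def S(f):
    # toy classical action: a mass term plus a quartic interaction
    return 0.5 * m2 * f**2 + lam * f**4 / 24.0

def moment(n):
    # n-th moment of the measure df exp(-S(f)/hbar), normalized by the total weight
    Z, _ = quad(lambda f: np.exp(-S(f) / hbar), -np.inf, np.inf)
    num, _ = quad(lambda f: f**n * np.exp(-S(f) / hbar), -np.inf, np.inf)
    return num / Z

print(moment(2))   # the "two-point function" of this toy theory

In a real field theory \mathcal{F} is infinite-dimensional and you can't just call an integrator, which is exactly where the trouble described below begins.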

Second, physical phenomena can be organized scale by scale. If you look at your keyboard, it’s made of plastic. Plastic, if you look up very close, is a jungle of hydrocarbons. These hydrocarbons, if you look even closer, are really just configurations of atoms. And the atoms, if you look closer yet, are made of electrons and nuclei. Electrons don’t seem to have any composite structure, but nuclei are protons and neutrons, which are in turn made of quarks and gluons. We don’t know what quarks and gluons are made of; physicists usually draw pictures of whirlpools and sea dragons at this point.

There’s an interesting tension between these two facts, because the sum over histories seems to mix high and low energy phenomena. When we scatter two low-energy electrons (bantamweight 0.0005 GeV) off each other, the dominant effect comes from massless photon exchange. The diagrams tend to look like this:

(Graphic from Wikipedia.) But the photon has a heavy cousin, the ultra-heavyweight 91 GeV Z-boson. Electrons can also exchange these, giving us another higher-energy option in the sum over histories. It’s the same diagram, but now the wavy line should be thought of as a Z-boson.

This looks like a disaster! To compute electron scattering, it looks like we need to know what physics looks like at the 100 GeV scale, five orders of magnitude higher.

But the real world isn’t like that, of course, and the theory reflects it. The contributions of the Z-boson to scattering at some low energy E go like \left(\frac{E}{91\ \mathrm{GeV}}\right)^2; the effects of the high-energy physics are strongly suppressed at low energy. We don’t need to know about them to study low-energy systems; we can just ignore the Z-bosons. But we should expect our approximation to stop working when E gets up towards the Z-boson mass.
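To spell out where that suppression comes from (a step I'm skipping over above, and ignoring the fact that the Z couples to electrons with a somewhat different strength than the photon): the photon contributes a propagator that goes like 1/q^2, the Z contributes one that goes like 1/(q^2 - M_Z^2), and at momentum transfer |q^2| \sim E^2 \ll M_Z^2 the ratio of the two contributions is, in magnitude,

\left|\frac{1/(q^2 - M_Z^2)}{1/q^2}\right| = \frac{|q^2|}{|q^2 - M_Z^2|} \approx \frac{E^2}{M_Z^2} = \left(\frac{E}{91\ \mathrm{GeV}}\right)^2

which for MeV-scale electron scattering is of order 10^{-10}: the square of that five-orders-of-magnitude gap.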

This basic story is repeated nearly everywhere in quantum field theory. We choose a set of fields and a classical Lagrangian that describes their dynamics. When we try to work out the quantum behavior of these systems, we seek guidance from Feynman’s sum over histories principle, and that tells us to integrate over intermediate states of arbitrarily high energy. Doing so produces nonsense, infinities and the like. And this shouldn’t be a surprise. We have no right to expect our theory to work at incredibly high energies; we probably don’t even know all the variables involved. In short, we get infinities because we’re wrongly pretending our theory works at super high energies, ignoring all the other effects that can come into play. Our computations would be better behaved if we included these effects, or at least tried to average them out.

So what we should do, once we’ve noticed that the naive approach doesn’t work, is regularize the path integral, introduce some cut-off energy and choose to ignore physics at higher energy scales. There are lots and lots of ways to do this. One of them actually corresponds to what happens in the real world; our low energy theory is presumably the result of averaging out the high energy effects in some more fundamental theory. We don’t know which regularization this is, but for practical purposes, we don’t care. We want to know the leading order low-energy behavior, and this is independent of the choice of regularization! We can choose any regularization that happens to be convenient. We can pretend spacetime is discrete, and physics at scales much longer than the lattice spacing won’t depend on which lattice we choose. We can introduce fictitious particles that just happen to cancel out all the higher energy effects (the Pauli-Villars regularization). We can pretend that the dimension of spacetime is a complex number z, and simply delete any terms that are badly behaved as z approaches the integer d we care about. It doesn’t matter. As long as we’re looking at energies well below our regularization scale, we see the same effects.
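Here is a small numerical sketch of that claim (a toy example of my own, with an arbitrary cut-off and arbitrary masses, written in Python with scipy): take one log-divergent integral and regularize it three different ways. The regularized values themselves are scheme-dependent, but the dependence on the light masses, which is the part a low-energy measurement can actually see, comes out the same in every scheme.

import numpy as np
from scipy.integrate import quad

# A log-divergent "one-loop style" integral, I(m) = integral of k dk / (k^2 + m^2),
# regularized three different ways. The cut-off and masses below are arbitrary.
Lambda = 1e3

def sharp_cutoff(m):
    # simply refuse to integrate above the cut-off
    return quad(lambda k: k / (k**2 + m**2), 0, Lambda)[0]

def pauli_villars(m):
    # subtract the contribution of a fictitious heavy particle of mass Lambda
    return quad(lambda k: k / (k**2 + m**2) - k / (k**2 + Lambda**2),
                0, np.inf, limit=200)[0]

def gaussian_cutoff(m):
    # smoothly damp the high-frequency modes instead of chopping them off
    return quad(lambda k: k * np.exp(-(k / Lambda)**2) / (k**2 + m**2),
                0, np.inf, limit=200)[0]

m_light, m_heavy = 1.0, 3.0
for reg in (sharp_cutoff, pauli_villars, gaussian_cutoff):
    # The individual values are scheme-dependent (that difference gets absorbed
    # into the couplings), but I(m_light) - I(m_heavy) -> ln(m_heavy / m_light)
    # in every scheme once Lambda is much larger than both masses.
    print(reg.__name__, reg(m_light) - reg(m_heavy), np.log(m_heavy / m_light))

All three schemes print the same difference, ln 3, even though they treat the high-frequency modes completely differently.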

They call this phenomenon universality. Ken Wilson got a Nobel Prize for providing the details.

And this is the answer to our question: when we perform computations in renormalized QFT, what we’re doing is computing the leading order universal behavior of any regularized path integral. (I’ve always wondered if there’s something homological going on here: regularized path integrals are cocycles, and when we do renormalized QFT, we’re taking equivalence classes, passing to cohomology.)

So at this point, you should be wondering “why is the leading order behavior of regularized path integrals described by renormalizable QFTs?” Wilson’s answer is beautifully simple. Start with any QFT you want; the Lagrangian can have renormalizable interactions and non-renormalizable ones, and it can even have infinitely many terms. Pick a regularization scheme, maybe a cut-off at some large energy scale \Lambda. We want to study the low-energy behavior of our system, so we should change variables to isolate this behavior.

This change of variables is exactly the renormalization procedure: We choose a way of measuring our theory’s physical coupling constants, usually by scattering particles off each other at some low energy \mu. Then we re-express all of our computations in terms of these chosen physical coupling constants. Some of the coupling constants will correspond to renormalizable interactions, and some of them won’t.

But here’s the beautiful thing: the physical coupling constants depend on \mu, and when we study the limit where \mu is very small (equivalently, the limit where \Lambda is very large), we find that the coupling constants of all the non-renormalizable interactions go to zero. Only the renormalizable interactions survive in the low-energy variables. And the approximation remains valid as long as we only study physical effects whose characteristic energy scale E is much less than \Lambda.
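If you want a quick feel for why the non-renormalizable couplings die off, here is the crudest version of the argument: plain tree-level power counting in four dimensions, with made-up values for \mu and \Lambda, and with loop corrections and anomalous dimensions ignored entirely.

# Tree-level power counting in d = 4: an operator of mass dimension Delta carries
# a coupling of mass dimension 4 - Delta, so a coupling that is order one at the
# cut-off Lambda shows up in amplitudes at the scale mu with a relative factor
# of (mu / Lambda)**(Delta - 4).
Lambda = 1e16   # made-up cut-off scale, in GeV
mu = 100.0      # made-up measurement scale, in GeV

operators = {
    "phi^4 (Delta = 4, renormalizable)": 4,
    "phi^6 (Delta = 6, non-renormalizable)": 6,
    "phi^8 (Delta = 8, non-renormalizable)": 8,
}

for name, Delta in operators.items():
    print(f"{name}: relative size at low energy ~ {(mu / Lambda) ** (Delta - 4):.1e}")

Only the renormalizable term survives; the others are buried under dozens of orders of magnitude of suppression, which is the tree-level shadow of Wilson’s statement above.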

That’s why particle physics is described, at low energies, by renormalizable theories. They’re what you get, no matter what you start with.

5 thoughts on “Universality, or What Does Renormalizable Quantum Field Theory Actually Compute?”

  1. Well, someone had to say it. The big thing that bugs me about QFT texts is that I keep losing track of “what the hell is the point?” And that’s very much connected to the point you make at the top.

  2. Yeah, it’s a bit insane that most books take so long to tell you what it is you’re computing.
    There are really three barriers for mathematicians trying to learn QFT:
    The first is that the concepts are presented in somewhat ass-backwards fashion. I don’t really know of any good reason for this…maybe it’s just inertia? The second is that the textbooks usually spend a lot of time introducing classical fermionic and gauge fields. The last is that QFT is usually presented in the context of particle physics, which uses the language of scattering theory. I sometimes think mathematicians would have more luck if they tried reading statistical physics books like di Francesco et al.

  3. So, AJ, is there a definition (suitable for mathematicians) of what a reasonable regularization should be? I’m pretty sure I can write down some crazy choices of cutoff which give the wrong answers.

    I haven’t actually tried yet, but I think I know how. I just finished reading the section of Zee on renormalization, which he does by cutting off the high frequency terms in his integrals. If I break rotational symmetry in the frequency space and take the cutoff region to be a large ellipsoid rather than a sphere, I think I will get finite, but wrong, answers.

  4. Hey David,

    I think the short answer is “no”.

    The longer answer is that we would prefer to pick regularizations compatible with the symmetries of the Lagrangian. Otherwise, it’s not clear that we recover these symmetries in the limit where we remove the regularization. So your example might be troublesome if you were expecting to have rotational invariance in the low-energy theory. But it’d probably be fine if you were studying electron motion in a crystalline solid with an uneven lattice structure.

    Unfortunately, none of this is absolute. Lattice regularizations break Lorentz invariance, yet lattice gauge theories seem to recover it when we shrink the lattice. And conversely, some Lagrangians have symmetries for which no compatible regularization exists at all.

  5. You have a large, infinite dimensional space of cutoffs, and for a fixed Lagrangian, your answer will depend on the cutoff. However, there is also a large space of possible assignments of counterterms (essentially adjustments to your measurable values), especially if you throw away basic symmetries. I think one mathematical interpretation is that there is a group of renormalizations that acts transitively on the cutoffs and the counterterms, such that for renormalizable theories, the results of calculations are fixed.

    From a physical perspective, the terms in the Lagrangian *should* depend on your cutoff, since the nature of your probe affects the values you would measure (see AJ’s example with the Z above). Wilson explained why this dependence works out so that the results of your calculations do not depend on the cutoff.
