Most of the quantum field theory textbooks I’ve read are backwards.
They begin by talking about various classical fields, then they introduce path integrals and develop perturbative scattering theory, then they do a bunch of lowest-order calculations. This works alright, but all hell breaks loose when they try to do higher order computations — divergent integrals everywhere. At this point, the textbook introduces a clever distinction: the “bare” parameters in the Lagrangian are not the physical coupling constants. The physical coupling constants are the ones we measure, and they’re a very complicated combination of the bare parameters. And (here’s the miracle) if we systematically eliminate the bare parameters from our computations in favor of the physical ones, then we find, in renormalizable QFTs, that all those nasty infinities cancel out, leaving us with nice finite answers. (Of course, this only works for renormalizable theories, but for some reason, these are the ones that nature seems to use.)
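To see schematically how this elimination works, consider $\phi^4$ theory at one loop (a sketch with signs and numerical factors suppressed; $\Lambda$ is an ultraviolet cutoff and $c$ an order-one constant, notation mine rather than any particular textbook's):

$$\lambda_{\text{phys}}(E) \;=\; \lambda_0 \;+\; c\,\lambda_0^2 \log(\Lambda/E) \;+\; O(\lambda_0^3).$$

Inverting order by order gives $\lambda_0 = \lambda_{\text{phys}} - c\,\lambda_{\text{phys}}^2 \log(\Lambda/E) + O(\lambda_{\text{phys}}^3)$, and once observables are rewritten in terms of $\lambda_{\text{phys}}$, the $\log\Lambda$ divergences cancel order by order.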
This is the historical way of doing things, and it’s also the way most computations are done. But conceptually, it’s crazy. What are we actually computing? The story keeps changing!
There is actually a clear answer to this question. Unfortunately most QFT books won’t tell you until after page 300 or so. But come below the fold, and I’ll tell you in only a few paragraphs.
There are two things you have to keep in mind:
First, the basic behavior of quantum systems is expressed by Feynman’s “sum over histories” formalism. Systems evolve every which way they can, and the various possibilities interfere with one another to create the reality we see. Mathematically, we’re interested in computing the moments of the “measure”

$$e^{iS(\phi)/\hbar}\,\mathcal{D}\phi \quad \text{on } \mathcal{F},$$

where $\mathcal{F}$ is some space of classical fields and $S$ is the classical action.
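Spelled out, the moments in question are the correlation functions (writing $\mathcal{F}$ for the space of classical fields, $S$ for the classical action, and $Z$ for a normalization constant):

$$\langle \phi(x_1)\cdots\phi(x_n)\rangle \;=\; \frac{1}{Z}\int_{\mathcal{F}} \phi(x_1)\cdots\phi(x_n)\, e^{iS(\phi)/\hbar}\,\mathcal{D}\phi, \qquad Z \;=\; \int_{\mathcal{F}} e^{iS(\phi)/\hbar}\,\mathcal{D}\phi.$$

These are the objects perturbative QFT is ultimately trying to compute.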
Second, physical phenomena can be organized scale by scale. If you look at your keyboard, it’s made of plastic. Plastic, if you look up very close, is a jungle of hydrocarbons. These hydrocarbons, if you look even closer, are really just configurations of atoms. And the atoms, if you look closer yet, are made of electrons and nuclei. Electrons don’t seem to have any composite structure, but nuclei are protons and neutrons, which are in turn made of quarks and gluons. We don’t know what quarks and gluons are made of; physicists usually draw pictures of whirlpools and sea dragons at this point.
There’s an interesting tension between these two facts, because the sum over histories seems to mix high and low energy phenomena. When we scatter two low-energy electrons (bantamweight 0.0005 GeV) off each other, the dominant effect comes from massless photon exchange. The diagrams tend to look like this
(Graphic from Wikipedia.) But the photon has a heavy cousin, the ultra-heavyweight 91 GeV Z-boson. Electrons can also exchange these, giving us another higher-energy option in the sum over histories. It’s the same diagram, but now the wavy line should be thought of as a Z-boson.
This looks like a disaster! To compute electron scattering, it looks like we need to know what physics looks like at the 100 GeV scale, five orders of magnitude higher.
But the real world isn’t like that, of course, and the theory reflects it. The contributions of the Z-boson to scattering at some low energy $E$ go like $(E/m_Z)^2$; the effects of the high energy physics are strongly suppressed at low energy. We don’t need to know about them to study low-energy systems; we can just ignore the Z-bosons. But we should expect our approximation to stop working when $E$ gets up towards the Z-boson mass $m_Z$.
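As a back-of-the-envelope check (my arithmetic, using the masses quoted above), a minimal sketch:

```python
# Relative size of Z-boson exchange versus photon exchange at low energy.
# For momentum transfer q with |q| ~ E << m_Z, the Z propagator
# 1/(q^2 - m_Z^2) is roughly -1/m_Z^2, while the photon propagator goes
# like 1/q^2 ~ 1/E^2, so the Z contribution is suppressed by ~ (E/m_Z)^2.

E = 0.0005   # electron-scale energy, GeV
m_Z = 91.0   # Z-boson mass, GeV

suppression = (E / m_Z) ** 2
print(f"relative Z contribution ~ {suppression:.0e}")
```

Roughly $3\times 10^{-11}$: low-energy electron scattering is spectacularly insensitive to the Z.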
This basic story is repeated nearly everywhere in quantum field theory. We choose a set of fields and a classical Lagrangian that describes their dynamics. When we try to work out the quantum behavior of these systems, we seek guidance from Feynman’s sum over histories principle, and that tells us to integrate over intermediate states of arbitrarily high energy. Doing so produces nonsense: infinities and the like. And this shouldn’t be a surprise. We have no right to expect our theory to work at incredibly high energies; we probably don’t even know all the variables involved. In short, we get infinities because we’re wrongly pretending our theory works at super high energies, ignoring all the other effects that can come into play. Our computations would be better behaved if we included these effects, or at least tried to average them out.
So what we should do, once we’ve noticed that the naive approach doesn’t work, is regularize the path integral: introduce some cut-off energy $\Lambda$ and choose to ignore physics at higher energy scales. There are lots and lots of ways to do this. One of them actually corresponds to what happens in the real world; our low energy theory is presumably the result of averaging out the high energy effects in some more fundamental theory. We don’t know which regularization this is, but for practical purposes, we don’t care. We want to know the leading order low-energy behavior, and this is independent of the choice of regularization! We can choose any regularization that happens to be convenient. We can pretend spacetime is discrete, and physics at scales much longer than the lattice spacing won’t depend on which lattice we choose. We can introduce fictitious particles that just happen to cancel out all the higher energy effects (the Pauli-Villars regularization). We can pretend that the dimension of spacetime is a complex number $d$, and simply delete any terms that are badly behaved as $d$ approaches the integer we care about. It doesn’t matter. As long as we’re looking at energies well below our regularization scale, we see the same effects.
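Here’s a toy illustration of that regulator-independence (my own toy integral, not a real QFT computation): take the logarithmically divergent “loop integral” $I(E) = \int_0^\infty k\,dk/(k^2+E^2)$, regularize it two different ways, and compare a low-energy observable, the difference $I(E_2) - I(E_1)$.

```python
import math

# Toy "loop integral" I(E) = ∫ k dk / (k^2 + E^2), logarithmically
# divergent at large k.  We regularize it two different ways and check
# that the *difference* of the integral at two low-energy scales
# E1, E2 is the same in both schemes.

def I_cutoff(E, Lam):
    # Hard cutoff: integrate k from 0 to Lam (analytic result).
    return 0.5 * math.log(1 + (Lam / E) ** 2)

def I_pauli_villars(E, M):
    # Pauli-Villars-style: subtract the same integrand with a heavy
    # fictitious mass M, which makes the k-integral over (0, ∞) finite
    # (analytic result).
    return math.log(M / E)

E1, E2 = 0.001, 0.01   # two low-energy scales
Lam = 1e6              # regularization scale, with E1, E2 << Lam

d_cut = I_cutoff(E2, Lam) - I_cutoff(E1, Lam)
d_pv = I_pauli_villars(E2, Lam) - I_pauli_villars(E1, Lam)

print(d_cut, d_pv, math.log(E1 / E2))
```

Each regulator shifts $I$ by its own cutoff-dependent constant, but the difference between two low-energy scales comes out to $\log(E_1/E_2)$ in both schemes, up to corrections of order $(E/\Lambda)^2$.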
They call this phenomenon universality. Ken Wilson got a Nobel Prize for providing the details.
And this is the answer to our question: when we perform computations in renormalized QFT, what we’re doing is computing the leading order universal behavior of any regularized path integral. (I’ve always wondered if there’s something homological going on here: regularized path integrals are cocycles, and when we do renormalized QFT, we’re taking equivalence classes, passing to cohomology.)
So at this point, you should be wondering “why is the leading order behavior of regularized path integrals described by renormalizable QFTs?” Wilson’s answer is beautifully simple. Start with any QFT you want; the Lagrangian can have renormalizable interactions and non-renormalizable ones, and can even have infinitely many terms. Pick a regularization scheme, maybe a cut-off at some large energy scale $\Lambda$. We want to study the low-energy behavior of our system, so we should change variables to isolate this behavior.
This change of variables is exactly the renormalization procedure: We choose a way of measuring our theory’s physical coupling constants, usually by scattering particles off each other at some low energy $E$. Then we re-express all of our computations in terms of these chosen physical coupling constants. Some of the coupling constants will correspond to renormalizable interactions, and some of them won’t.
But here’s the beautiful thing: The physical coupling constants depend on $\Lambda$, and when we study the limit where $E/\Lambda$ is very small (equivalently, when we study the limit where $\Lambda$ is very large), what we find is that all of the non-renormalizable interactions go to zero. Only the renormalizable interactions are non-zero in the low-energy variables. And the approximation remains valid as long as we only study physical effects where the characteristic energy scale is much less than $\Lambda$.
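The mechanism behind this vanishing is essentially dimensional analysis, which one can sketch numerically (hypothetical order-one constants; this is a caricature, not a real renormalization-group flow):

```python
# Dimensional-analysis caricature of Wilson's argument: an interaction
# whose coupling g has mass dimension -n (non-renormalizable) contributes
# to dimensionless low-energy observables roughly as
#     g_eff(E) = c * (E / Lam)**n,
# where Lam is the cutoff and c is an order-one number.  Renormalizable
# couplings have n = 0 and survive the limit E/Lam -> 0.

def g_eff(n, E, Lam, c=1.0):
    return c * (E / Lam) ** n

E = 1.0  # measurement scale (arbitrary units)
for Lam in (1e2, 1e4, 1e6):
    print(Lam, g_eff(0, E, Lam), g_eff(1, E, Lam), g_eff(2, E, Lam))
# n = 0 stays order one; n = 1, 2 (non-renormalizable) shrink as Lam grows.
```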
That’s why particle physics is described, at low-energies, by renormalizable theories. They’re what you get, no matter what you start with.