Jadagul writes:

Got a draft of the course schedule for next year. Looks like I might get to teach real analysis.

I probably need someone to talk me out of trying to do everything in R^n.

A subsequent update indicates that the more standard alternative is teaching one variable analysis.

This is my second go around teaching rigorous multivariable analysis — key points are the multivariate chain rule, the inverse and implicit function theorems, Fubini’s theorem, the multivariate change of variables formula, the definition of manifolds, differential forms, Stokes’ theorem, the degree of a differentiable map and some preview of de Rham cohomology. I wouldn’t say I’m doing a great job, but at least I know why it’s hard to do. I haven’t taught single variable, but I have read over the day-to-day syllabus and problem sets of our most experienced instructor.

Here is the conceptual difference: It is quite doable to start with the axioms of an ordered field and build rigorously to the Fundamental Theorem of Calculus. Doing this gives students a real appreciation for the nontrivial power of mathematical reasoning. I don’t want to say that it is actually impossible to do the same for Stokes’ theorem (according to rumor, Brian Conrad did it), but I never manage — there comes a point where I start waving my hands and saying “Oh yes, and throw in a partition of unity” or “Yes, there is an inverse function theorem for maps between n-manifolds just like the one for maps between open subsets of R^n.” I think most students probably benefit from seeing things done carefully for a term first.

Below the fold, a list of specific topics much harder in more than one variable. If you have found ways not to make them harder, please chime in in the comments!

• No need for linear algebra. Just defining the multivariate derivative uses the concept of a linear map, and stating the chain rule requires you to compose them. If you want your students to ever be able to check the hypotheses of the inverse function theorem, they have to be able to check if matrices are invertible.
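
As a concrete illustration (mine, not the post's): checking the hypothesis of the inverse function theorem numerically comes down to testing whether the Jacobian determinant is nonzero. Here is a minimal Python sketch using the polar-coordinates map (r, θ) ↦ (r cos θ, r sin θ), whose Jacobian determinant is r, so the hypothesis holds exactly when r ≠ 0.

```python
import math

def jacobian(f, p, h=1e-6):
    """Forward-difference Jacobian of f: R^2 -> R^2 at the point p.

    Returns columns: cols[i][j] approximates d f_j / d x_i.
    """
    cols = []
    fp = f(p)
    for i in range(2):
        q = list(p)
        q[i] += h
        fq = f(q)
        cols.append([(fq[j] - fp[j]) / h for j in range(2)])
    return cols

def det2(cols):
    """Determinant of a 2x2 matrix given as two columns."""
    return cols[0][0] * cols[1][1] - cols[1][0] * cols[0][1]

def polar(p):
    """The map (r, theta) -> (r cos theta, r sin theta)."""
    r, theta = p
    return (r * math.cos(theta), r * math.sin(theta))

# det of the Jacobian of the polar map is r: nonzero away from r = 0,
# so the map is locally invertible there.
print(det2(jacobian(polar, (2.0, 0.7))))  # ~ 2.0
print(det2(jacobian(polar, (0.0, 0.7))))  # ~ 0.0: the hypothesis fails
```
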

• One variable Riemann sums are so nice! If u : [a,b] → [c,d] is an increasing bijection, and a = x_0 < x_1 < … < x_n = b is a partition of [a,b], then u(x_0) < u(x_1) < … < u(x_n) is a partition of [c,d]; the u-substitution formula for integrals follows immediately. If u : R^2 → R^2 is a smooth bijection, and we have a partition of a rectangle into subrectangles, its image in R^2 is quite hard to describe. This is why the change of variables formula is such a pain.
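
A quick numerical check of the one-variable phenomenon (my example, not the post's): pushing a partition of [0,1] forward through an increasing bijection u gives a Riemann sum for the same integral, so the "image partition" sum and the substituted sum both converge to the same value. A Python sketch with u(x) = x³ and f(u) = u², where both sides equal 1/3:

```python
# Riemann-sum check of u-substitution:
#   int_0^1 f(u) du = int_0^1 f(u(x)) u'(x) dx
# with u(x) = x^3 (an increasing bijection [0,1] -> [0,1]) and f(u) = u^2.
f = lambda u: u ** 2
u = lambda x: x ** 3
du = lambda x: 3 * x ** 2  # u'(x)

n = 10_000
xs = [i / n for i in range(n + 1)]

# Left side, via the image partition u(x_0) < u(x_1) < ... < u(x_n):
lhs = sum(f(u(xs[i])) * (u(xs[i + 1]) - u(xs[i])) for i in range(n))

# Right side, as an ordinary Riemann sum in x:
rhs = sum(f(u(xs[i])) * du(xs[i]) * (1 / n) for i in range(n))

print(lhs, rhs)  # both close to 1/3
```
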

• Regions of integration: In one variable we always integrate over an interval. In many variables, we integrate over complicated regions, so we need to describe the geometry of complicated regions. If you want to cut a region up into simpler pieces, you need to introduce some rudimentary notion of "measure zero", to make sure the boundaries you cut along aren't too large.
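
One way to make the "measure zero" requirement tangible (my own illustration): cover the unit circle with cells of a fine grid on [-1,1]² and watch the total area of the covering cells shrink as the grid is refined. A Python sketch:

```python
def boundary_area(n):
    """Total area of cells of an n x n grid on [-1,1]^2 that straddle
    the unit circle (some corners inside the disk, some outside)."""
    h = 2 / n
    count = 0
    for i in range(n):
        for j in range(n):
            corners = [(-1 + (i + a) * h, -1 + (j + b) * h)
                       for a in (0, 1) for b in (0, 1)]
            inside = {x * x + y * y <= 1 for (x, y) in corners}
            if len(inside) == 2:  # mixed: the cell meets the circle
                count += 1
    return count * h * h

# The circle is covered by O(n) cells of area (2/n)^2, so the covering
# area is O(1/n) -> 0: the boundary curve has measure zero.
print(boundary_area(50), boundary_area(400))
```
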

• Improper integrals: In one variable, we always take limits as the bounds of the integral go somewhere. In many variables, there are uncountably many different limiting processes which could define the improper integral ∫_{R^n} f.
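
To see why this matters (an illustration of mine, not the post's): take f(x, y) = x·e^{-y²}, which is odd in x. Exhausting the plane by symmetric squares gives 0 in the limit, while lopsided rectangles [-R, 2R] × [-R, R] blow up as R grows, so "the" improper integral over R² depends on the limiting process chosen. A Python sketch with midpoint Riemann sums:

```python
import math

def riemann2(f, x0, x1, y0, y1, n=200):
    """Midpoint Riemann sum of f over the rectangle [x0,x1] x [y0,y1]."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * hx
        for j in range(n):
            y = y0 + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

f = lambda x, y: x * math.exp(-y * y)  # odd in x

R = 3.0
sym = riemann2(f, -R, R, -R, R)      # symmetric square: cancels to ~ 0
lop = riemann2(f, -R, 2 * R, -R, R)  # lopsided rectangle: large, grows with R
print(sym, lop)
```
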

And that's not even getting into manifolds, or multilinear algebra…

Thanks for your response! I posted this on tumblr but I’ll put it here too in case anyone else here has any comments.

That actually refocuses me/answers a lot of questions, because it hadn’t really _occurred_ to me that I might cover integrals in the course. But now that I look at the syllabus from this past year, that’s exactly what the last professor did.

The Analysis I course I took in undergrad definitely had linear algebra as a prerequisite. And it covered the definitions of “metric” and “norm”, convergence of sequences in different spaces and norms, and built up at the end to completeness and compactness.

In the second term we mostly spent a lot of time talking about convergence of sequences of functions, but we also talked about Riemann integrals of functions from R to an arbitrary Banach space. (The professor felt there was no reason to talk about Riemann integrals over other domains, since the Lebesgue integral subsumes all of that.) We did the inverse and implicit function theorems, and at the end of the course we defined the Fréchet derivative, which I think of as one of the defining moments of my growth as a mathematician.

I don’t think I had to compute a single integral in the entire year.

(It wouldn’t even have occurred to me to count things like Stokes’ theorem as a possible topic for the course; I associate that with classes on differential geometry or something instead.)

—

And this sort of answers my question about why people would do the single-variable version: it lets you do derivatives and integrals. I was imagining a course that was just about convergence and completeness and compactness, and I don’t know that that course benefits much from a restriction to a single variable—especially since most of the interesting convergence properties happen for sequences of functions.

But if people are thinking of the course as “rigorous calculus” rather than “the background you’d need for functional analysis to make sense” then that choice makes a lot more sense.

Yeah, it’s up to your departmental politics and your own tastes whether it is “rigorous calculus” or “the background you’d need for functional analysis to make sense”. I’ll say that it is a lot of fun to see a student who got a 5 in AP Calc but learned no theory suddenly understand what was going on behind the scenes.

These are exercises I have not done (or have forgotten), but often think about: What is a continuous function [0,1] → (C⁰([0,1]), ||·||_∞)? How about a differentiable function [0,1] → (C¹([0,1]), the Sobolev-style norm ||f||_∞ + ||f′||_∞)? You see, I’m trying to think of ways that a developed understanding of one-variable analysis might be amplified into the multivariable theory, but… I don’t really know if they work.
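
For what it's worth, here is how I'd poke at the first exercise numerically (my sketch, with an assumed example path): t ↦ f_t with f_t(x) = sin(t·x) is continuous into (C⁰([0,1]), ||·||_∞), since |sin(tx) - sin(sx)| ≤ |t - s|·|x| ≤ |t - s| by the mean value theorem. A Python check on a grid:

```python
import math

def sup_dist(t, s, m=1000):
    """Grid approximation of ||f_t - f_s||_inf on [0,1], f_t(x) = sin(t x)."""
    return max(abs(math.sin(t * x) - math.sin(s * x))
               for x in (i / m for i in range(m + 1)))

# |sin(tx) - sin(sx)| <= |t - s| on [0,1], so the path t -> f_t is
# (Lipschitz) continuous into (C^0([0,1]), sup norm):
for eps in (0.1, 0.01, 0.001):
    print(sup_dist(1.0, 1.0 + eps))  # each value is at most eps
```
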