Not Even Blogging

Since we’ve been on a physics kick lately, you may want to scoot over and watch Peter Woit talking to Sabine Hossenfelder at bloggingheads.

Probably the most interesting part (to me, at least) is the discussion of the differences between math and physics culture.  I often have a vague sense that these differences exist (mostly in ways that make me happy that I stayed in mathematics), but often wonder whether I am making them up.  Well, here’s one data point in my favor.

I’d also like to riff a little bit on the issues brought up by Sabine. I hadn’t encountered her previously (and I bet a lot of you haven’t).  For reference, she is a physics postdoc at Perimeter, who blogs at Backreaction.  She has some pretty strong words for the academic community as a whole, and how it directs research.  

I agree with her about some of the problems, and some of our differences stem from the math/physics cultural divide (for example, postdocs being tied to specific projects isn’t a real problem in mathematics, AFAIK, and mathematicians seem to feel less pressure to work on the flavor of the month).  At the same time, I feel like she keeps skirting around a couple of very simple points: money and time.  (I’ll say beforehand that I’m mostly addressing this post; if Sabine made these points somewhere in her archives, that’s fine.  This post is not intended as a criticism, just a few of my own thoughts on the subject.)

First, money. She says academics shouldn’t be under so much pressure to publish (or to produce research by whatever metric) on short time scales, and should be funded for longer periods.  Now, maybe I’m oversimplifying, but I would say the current pressure to publish stems from the mismatch between the supply and demand for physicists and academics in general.  It’s an arms race, because no matter how much research you do, there is always somebody who published more who might take your job.  There’s never going to stop being an arms race of one sort or another until supply and demand are brought back into line.  You can change who has an advantage in this race, by moving the goalposts, but you can’t get rid of it.

The only way out I can see is increasing demand by putting more money into science (probably from government sources, though private funding is starting to make itself visible), or reducing supply by concentrating our current funding on fewer people, earlier on, thus forcing the others out of the field (or hopefully, directing them into some kind of non-academic employment).  Both of these are reasonable suggestions (I really think the US could benefit a lot in the long term by implementing the former. Write your congresscritter!), but neither will be an easy sell.

(An interesting side point: why should it be more expensive to fund people for longer?  I mean, giving 5-year grants shouldn’t be more expensive than giving the same people 1-year ones.  I think the problem is that this ignores people leaving academia.  More concretely, if there are X new Ph.D.’s and Y tenure-track jobs each year, and it takes 5 years to go from one to the other, it costs as much to give (X+Y)/2 of them 5-year postdocs as it does to start by giving them all 1-year postdocs and culling (X-Y)/5 of them each year.  Of course, this is a horrifyingly reductive model, but I think it makes clear where the expense comes in.)
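The parenthetical above can be sanity-checked in a few lines of Python. The numbers here (100 new Ph.D.’s and 20 tenure-track jobs per year) are made up purely for illustration, and I’ve taken the cull rate as (X − Y)/(YEARS − 1) so that exactly Y people survive the pipeline; the post’s (X − Y)/5 gives a slightly different total, consistent with its “horrifyingly reductive” caveat:

```python
# Toy check of the funding-model parenthetical above.
# Illustrative numbers (mine, not from the post):
X = 100      # new Ph.D.'s entering the pipeline each year
Y = 20       # tenure-track jobs opening each year
YEARS = 5    # length of the postdoc pipeline

# Scheme A: give (X + Y) / 2 people a 5-year postdoc each year.
# Steady-state cost, in person-years of funding per year:
cost_long = YEARS * (X + Y) // 2

# Scheme B: fund everyone on 1-year terms and cull linearly,
# so cohort sizes decline from X down to Y across the pipeline.
cohort_sizes = [X - k * (X - Y) // (YEARS - 1) for k in range(YEARS)]
cost_short = sum(cohort_sizes)  # 100 + 80 + 60 + 40 + 20

print(cost_long, cost_short)  # both come out to 300 here
```

Under these assumptions the two schemes cost the same in steady state, which is the point of the parenthetical: longer funding isn’t more expensive per se, it just forces the culling decisions earlier.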

The second is time, specifically the huge amount of time involved in trying to evaluate research.  She says straight up that we should stop using “shortcuts” to evaluate people’s research, and just judge it on its “quality.”  Maybe I’m just being too cynical, but I think that’s too much to expect. Consider the numbers: one colleague of mine at a respectable, but not extremely prestigious, state university (a “Group II” department, according to the AMS) told me that they received 800 applications for a tenure-track position, and another colleague at a similar university told me they received 300 for a postdoc.  No one is assiduous enough to read the research of that many people; it’s simply not practicable. That would be a full-time job in itself.

But there’s a deeper problem here: people cannot be relied on to make unbiased judgements about other people’s research. Repeat after me: no one is objective. If hiring decisions are made based on whose research is “more creative,” or on who seems smarter, with no reference to hard (if unfair) metrics like publications or teaching evaluations (not that I’m endorsing these without reservation; keep reading), this just tilts the playing field more toward the well-connected, the people with good references (and probably away from women and minorities, given people’s habit of making unfair snap judgements against them).  Obviously, I’m no fan of our current publication system, but giving people from the periphery a chance to credential themselves is very important.

So, let’s be honest with ourselves; all hiring and funding decisions are going to be made based on shortcuts of one sort or another, and trying to sweep that under the rug probably just means using weirder and less fair shortcuts.  I think as a community, we need to think hard and have a serious conversation about which metrics will be most fair to people. I don’t have any easy answers, and it won’t be a pleasant conversation, necessarily, but it’ll be a lot more productive than looking for some kind of fake objectivity.

7 thoughts on “Not Even Blogging”

  1. Hi Ben,

    Thanks for your comments. I agree with you – and with Peter – that a cause of the problem in physics is Ph.D. overproduction, which means that people have to be sorted out somehow. But there are ways to do that and ways not to do that; that was my point. If you sort people out according to criteria that let those whose strategies don’t lead to progress survive, you’ll get stuck. Regarding the question of whether it would be more or less expensive to fund people longer: this isn’t a question of total money, it’s a question of commitment. And that commitment is in many cases unfortunately missing (that’s why I wrote: have faith).

    Regarding the time it takes to judge somebody else’s work: the idea that time has to be saved in such a process is a cultural phenomenon, strongly tied to the problem that people consider providing this judgement a waste of their own time (not to mention that time is money). Sitting on a hiring committee, reading through piles of research statements, CVs, and letters, takes a lot of time, and it’s work that is not appreciated in proportion to the importance of the task. That’s why I have suggested (here) that a prime goal should be to ease that time pressure, e.g. by allowing scientists to specialize in tasks (which would also lower the pressure to specialize in field). Division of labor and responsibilities isn’t exactly a new idea, and it has worked pretty well in many fields. I think it could be beneficial if researchers were not forced to do literally everything at once.

    Further, you are of course correct that scientists, like all other humans, will never be objective in their judgement. But one can at least try to get as close to it as possible, and for science to work that’s what we have to do. We will never be perfect at it, but we should try to do the best we can. And that is presently not the case. Best,


  2. this isn’t a question of total money, it’s a question of commitment. And that commitment is in many cases unfortunately missing (that’s why I wrote: have faith).

    It’s definitely a question of the ratio between money and people. “We should fund people for longer periods” is another way of saying “We should give up on the people we can’t fund longer term earlier in their careers,” which doesn’t sound quite so pleasant, especially to all the people out there who struggled a bit in grad school and found their feet as postdocs. It may be an improvement over the present situation, but it will be pretty unpleasant for some people.

  3. Hi Ben,

    Can you expound on what’s suboptimal with the present
    “reference letter + papers + job talk to make sure you’re alright” approach to R1 hiring?

    I agree, nothing’s perfect, but it seems to me that this has turned out reasonably well. For example, letter writers who falsely say someone is the next Gauss will eventually have their letters ignored. Conversely, established writers with accurate track records who say so-and-so is “so creative” ought to be noted. People who are unconnected and prove major results surely will become connected, no?

    I recognize there’ll be counterexamples, but all in all, looking at the mathjobs wiki this year, things seemed to make reasonable sense, if not necessarily at the interviewing level, at least at the final-hire level.

    Departments all want to improve, and no one wants a lemon. I’m pretty dense about these things, but I don’t see how numerical metrics will improve on the common sense of those deciding whom to spend several decades with.

  4. Anon-

    Maybe I shouldn’t pronounce too much on the subject, since I haven’t really seen the innards of the hiring-committee process. I’ll just note that I don’t in any way support hiring purely by formula; obviously, things like the impression made in the interview have to be taken into account. This was in response to a post suggesting that ANY “shortcuts” (e.g. paying attention to which journal a publication appeared in rather than its content) are misguided.

    That said, I think there are some pretty obvious flaws in depending strongly on reference letters. One shouldn’t expect mathematicians to somehow be above bias, which can arise for any number of reasons. In particular, I honestly believe that, without even realizing it, a lot of references will hold different groups, for example men and women, to different standards. Not to mention that it would be terribly surprising if the letter-reader’s feelings about the reference didn’t slip into their judgment of the candidate (again, without malicious intent; what makes these things so pernicious is that they often happen without us even knowing it).

    It’s also known that a lot of bizarre Kremlinology goes into the reading of reference letters, which could easily lead to interpretation errors (since “everyone knows” that reference letters are written more glowingly than the subject really deserves, but no one is sure how much more glowingly).

    I’m just saying that having some hard, easily comparable data could be a useful reality check. Of course, this has dangers of its own, but I think it could still be useful. If nothing else, it would be helpful if junior people (grad students, postdocs, etc.) had some real idea of what would help their chances and what won’t, because my experience is that other than knowing more papers is good, people are pretty in the dark.

  5. Dear Ben,

    “…had some real idea of what would help their chances and what won’t, because my experience is that other than knowing more papers is good, people are pretty in the dark.”

    My opinion is that the thing that really helps get you in the door at the R1 level is a truly exciting result, something that experts (those biased ones!) will agree in recommendation letters is important or noteworthy. Obviously, solving the Riemann hypothesis would help, but I’m talking about more reasonable versions of “exciting”.

    After that, I’d check that they’re publishing a reasonable amount in general, have some sort of coherent research agenda (not just a one-off thing), wonder whether that exciting result was joint work with an advisor, make sure they could give a decent job talk and seem reasonable as a person, ask whether they’ll add in other ways (organization, advising at various levels), and, of course, wonder if they’ll ever actually accept an offer from us.
