One reader was curious if we had anything to say about the recent preprint by Xian-Jin Li entitled “A proof of the Riemann hypothesis”. Unfortunately, analytic number theory seems to be a weak spot of the mathematical blogosphere, so none of us seemed inclined to go through the paper and look for mistakes. Luckily, Terry Tao did and thinks he has found a mistake (which the author may claim to have fixed…things are starting to get a little confusing). Alain Connes also seems to be unconvinced. Oops.

Which leaves the rest of us to wonder what happened. I mean, this paper looked promising precisely because it didn’t look like the work of a crackpot. Li has a Ph.D. from Purdue (in mathematics) and is a mathematics professor at Brigham Young, and analytic number theory is his research area. He has several other unsuspicious articles on the arXiv, and the style of his Riemann hypothesis article is wholly unremarkable (considering that it claims to prove probably the most celebrated open problem still at large in the mathematical world). Why would someone risk the level of embarrassment involved in putting on the arXiv, apparently with no warning, a proof of RH which had not been thoroughly vetted? (Whether it can be fixed or not, if Terry Tao found a problem less than 24 hours after it was placed on the arXiv, it definitely was not vetted thoroughly enough before being released to the world. It’s also on its 4th version on the arXiv in 3 days.) What was the hurry?

I can’t really speak to Li’s situation, since I don’t know the guy. It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it. But if he didn’t, that was a huge mistake on his part, one which definitely makes him look more crackpotty than I expect he wants. If he didn’t give any conference talks on the subject before releasing the preprint, that was a huge mistake. Honestly, I think putting it on the arXiv, where it will remain forever, taunting him, rather than his personal webpage was something of a mistake. After all, you want a chance to get comments from the people who might be able to point out any mistakes you made before you end up on Slashdot. While this goes double, or perhaps n-uple for some large n if trying to prove an important problem like RH, I think it’s a good point in general that you should tell people about your work while it is still in its formative stages. It could save you a lot of pain. Admittedly, some people worry about being scooped, but I feel like this is the sort of thing that people are naturally more paranoid about than they should be. Ultimately, it would be better if we shared all our good ideas. After all, if somebody else does something cool with an idea you had, that just makes you look smarter for having such a good idea.

[*Ed. – I changed the title of this post, since the original one was a bit more inflammatory than I intended*]

I take some issue with your use of the term “crackpot”. What you wrote above is good advice if you want to avoid embarrassing yourself. Advice on how not to be a crackpot would go something like: “If someone explains why your reasoning is incorrect, listen to them rather than to the delusions of grandeur.”

Anyways, Li seems to have made an honest mistake. I doubt anyone will care by next week.

“analytic number theory seems to be a weak spot of the mathematical blogosphere”

LOL! And that’s because the maths blogosphere is n-cat crazy, with evil plans to eventually prove RH using omega cats and non-standard analysis.

I can see why he didn’t tell anyone, I mean if there’s anything someone would want to steal, it’s going to be something like this.

What always gets me is the number of new versions they end up putting on the arXiv. The ultimate example was Susumu Oda, a real mathematician, who has put about 100 versions of his various attempts at the Jacobian Conjecture on the arXiv. You wonder why they don’t quit after n tries. Maybe the thought of having an incorrect proof of a big theorem on the arXiv for all to see is so unbearable they have to replace it as soon as possible. Also, you’d think having witnessed others fall into this trap, no one would repeat this. But I guess the thought of being the next Perelman kills off all rational judgement.

When I was a young assistant professor at a rather unremarkable place, I had a good friend whom we sent on to graduate school at MIT. While there, he “vetted” his thesis progress with young assistant professor then at Princeton, who shall remain nameless. A few weeks before he was going to hand his thesis in, guess what? The young very interested assistant professor at Princeton scooped him. My good friend never received a Ph.D.

I also had an unfortunate experience. The AMS memoir I published was based on my Ph.D. thesis. My advisor and I sent out copies of my thesis. Guess what? The distinguished senior mathematician who did a seminar on my thesis and wanted to publish jointly, and who shall remain unnamed, reworked my stuff a bit and published it himself. He did reference my work, but misrepresented it. Our papers arrived at Acta almost simultaneously. Unfortunately, my theorem is generally attributed to him. And I got to spend the rest of my career at places with 12 to 15 hour teaching loads.

Bottom line: With the amount of money and prestige riding on the Riemann Hypothesis, Li was smart not to talk too much before sending his stuff up to the arXiv, even if it backfired on him. I hope he can pull the coals out of the fire.

Note to the newly initiated: For less important stuff, by all means vet to avoid embarrassment.

AJ- well, maybe that was me sticking my foot in my mouth. I was trying to use the difference between “so-and-so is being a crackpot” and “so-and-so is a crackpot” (much like the difference between “so-and-so is being a jerk,” and “so-and-so is a jerk”). But that’s a subtle point, and wasn’t coming off well in the title. So, I changed it. Hah. Now your comment looks silly.

Kea- you clearly have no inkling of my actual evil plans. Though, I suppose that’s probably a good thing.

Alonzo- That’s why you tell multiple people. Not that even putting things on the arXiv is enough to keep people from trying to take credit for your ideas, as evidenced by the hooplah around Geometrization.

I think that while you’re right that someone in that situation should circulate the manuscript before they put it on the arXiv, there will always be somebody who succumbs to the urge to just put it up there. I’m sure that Li was really, really sure that his proof was right; it takes a lot of self-knowledge to know not to believe that feeling.

The real lesson, I think, is to not get that excited when something like this appears on the arXiv. There have been bunches of these false alarms for famous problems by now, and there has been exactly one true alarm. People find false proofs all the time, and some percentage of them are going to end up on the arXiv. Famous problems are just going to attract their share of false proofs, and if we’re not experts in the area we should probably just ignore any preprints like that until someone tells us they are right.

Ben, there’s enough shady behavior going around I can see someone not wanting to tell a bunch of people… what if one turns around, says he showed Li the result a while ago and says now Li’s trying to scoop him. Maybe I’ve been unlucky but I’ve seen enough to believe just about anything is possible in the lovable world of academia.

The sad thing to me is now Li is going to be known as the guy who said he proved the Riemann Hypothesis. I hadn’t heard of him before personally. This is one disadvantage of the Brave New World of blogs and arXivs (not that there aren’t many advantages). Fifteen years ago if he had done this, he might have at worst submitted it to the Annals, gotten Connes to explain a few things to him, and that’s that. Now he’s been humiliated.

Alonzo, he’s hardly the first guy in this position re RH. His advisor is well known for saying he proved the RH.

Actually, one can blot out one’s record on the arXiv. Do a search for works by that most celebrated academic ‘A.N.Other’ and you’ll see what I mean.

Here’s a serious question. At what point do you put your work on the arXiv? That’s a fairly general question and could be answered either as advice, opinion, or current practice – I’m interested in all (or more if I’ve missed one).

I guess behind this is the question: what is the arXiv for, exactly? In particular, with regards to maths since, as is being discussed elsewhere, physics and maths are not the same.

The arXiv is touted as a “preprint server” and so one would think that it serves the job that used to be done by handing out rough typewritten scripts to passing mathematicians in the hope that one of them might read it and pass comment on it. In this case, putting something up that may not be correct is and should be completely acceptable. However it seems that its status has been somewhat elevated beyond this to the “just about to be submitted server” – that’s how I tend to treat it – but with this scenario putting something up for comment is much less acceptable.

Of course, things like the Riemann Hypothesis are somewhat special; partly due to the Clay prizes. There are conflicting incentives here. I have nothing to say about Li or his paper (I have no intention of even trying to read it).

Who knows? Maybe Xian-Jin Li is the new “Grigori Perelman,” using the internet to present a solution to a very important problem, the RH!

One answer is that all that’s going on is that anybody can make a mistake, and you’re not a crackpot if you can simply admit it. If Li’s paper does turn out to be wrong, then if I were him I would certainly replace the paper with a retraction. By the arXiv’s rules, the old versions are still available, but the withdrawal notice is then the default version.

On that note, here is a partial list of withdrawn math papers in the arXiv. Some of these were withdrawn for other reasons, but in most cases, the proof was bad. Do I think less of good mathematicians such as Wolfson or Dranishnikov because they made mistakes? Do I think that they have egg on their faces? No. They said oops, they took it back, no big deal. If you post a paper to the arXiv, you shouldn’t crawl up the wall because it might have a mistake.

On the other hand, it is advisable to take precautions. Old versions of arXiv papers remain available, and for good reasons. When I feel a little too elated at having proved a new result, I don’t trust myself, and in these cases I look for colleagues (victims) willing to vet my ideas before I post to the arXiv.

I have heard the theory that you shouldn’t because someone might steal your work, but I don’t buy it. Sometimes I just don’t feel ready to tell other people what I’m thinking, but for the most part it’s helped me a lot to just trust the community. It’s not that no one steals ideas; that does happen now and then. What is true is that you’re much more likely to lose credit by being secretive than by being open. The way that it usually happens is that someone else has the same idea, and you have no case that they stole from you because you didn’t tell anybody.

Anyway, another answer is that it’s harder than you might think not to be a crackpot. It’s very tempting to simplify the research community into a clean dichotomy between competent people and crackpots. This dichotomy is sometimes true enough and necessary in practice, as when you’re organizing a conference. But it is also an oversimplification and it can be unfair and hypocritical. It’s like the social dichotomies in high school: “Here is where the cool crowd sits, and over at that table are the retards.” All it takes to be a crackpot is to let strong opinions outrun your real expertise. You can be a crackpot in some areas while you are still competent in others. This is especially likely in difficult, attention-grabbing areas such as quantum computation and string theory. But strong opinions can also be very useful in research, even though they’re dangerous. That’s the breaks.

Andrew, I read “preprint server” as “a place to put pretty-much finished papers so they’re out there staking your claim while they languish in referee hell”.

“It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it.”

Li met with Connes at Vanderbilt in May. I don’t know the extent of their discussions.

“It may well be that he sent his preprint to Tao and Connes and they didn’t get around to reading it.”

Okay, I said before that anyone can make a mistake and it’s not the end of the world if you withdraw an arXiv article. Of course, if the question is as big as the Riemann Hypothesis, it’s more than merely prudent to convince a private audience first. As Ben says, it really does cause problems if you rush forward where angels fear to tread with big conjectures. It’s not nearly as bad if you post a timely retraction, but it still sucks.

What makes it even worse is that it has to be the right private audience. You don’t get to share the blame just because colleague so-and-so merely nodded along at crucial points of the proof. But, lest anyone get the wrong impression from Ben’s remark about Tao and Connes, you aren’t automatically entitled to any time from specific famous people. Getting people to read or hear your proof is like asking someone out on a date: they have every right to say no. It’s fine to be disappointed by inattention, but it’s a mistake to cross the line from disappointment to criticism.

Basically, if you think that you’ve proved a big conjecture, there are traps on all sides. It’s not an enviable position at all, unless by your good fortune you’re actually right.

Andrew Stacey also asked about the purpose of the arXiv. The arXiv is so well established in mathematics now that its a priori purpose is a secondary matter. To be sure, the people who maintain it have certain goals, policies, and responsibilities, which define some part of a purpose. The arXiv is intended for serious research-level communication. But, within that broad scope and the rules, its purpose is whatever you make of it. It certainly isn’t just a preprint server, because its articles, which are also called e-prints, are permanent. Indeed, because it’s so big, a lot of people see the arXiv as a better guarantor of the historical research record than journals. (I’m one of those people.) It is not a better guarantor of mathematical validity than journals, and no one should think that it is; although (a) validity is not the same issue as permanence, and (b) because of self-policing aided by moderators, arXiv articles are almost as likely to be valid as journal articles.

What, no comment about a new short proof of the Poincaré conjecture http://arxiv.org/abs/0807.0577 ??

I agree with Kuperberg. Anyone can make a mistake, not just crackpots. If Li’s work is generally solid but it turns out he overreached here, this particular Arxiv posting should not have any lasting impact on how his future work is received.

In a perfect world, Math would be about ideas and results, not credit and reputation, and the Arxiv serves this ideal very well. But comments about “staking your claim”, “stealing”, “scoop”, “crackpot”, etc indicate that things are not perfect, and so judging the role the Arxiv plays in mathematics gets complicated.

Apart from Li’s paper there is the following interesting issue. There are two extreme ways to practice math (with many alternatives in between.) One way is to work secretly on a big problem, to tell nobody or very few people about it, to discuss with nobody the techniques you are using, and then after many years to astonish the world with a preprint (or a lecture) presenting the solution. The other extreme way is to work while at any time discussing your thoughts and ideas with everybody (perhaps also on blogs), write papers with partial progress and conjectures etc.

The advantage of the first avenue is not just avoiding the fear that somebody will use your ideas, but also that it helps the researcher stay concentrated and avoid outside pressure and distractions of various types. A clear disadvantage of the first avenue is that feedback from others can be useful at intermediate stages of the process towards a mathematical discovery.

The first avenue has had spectacular successes in the last few decades.

(BTW, wrong fantastic results published on the arXiv are usually refuted very quickly.)

“…you aren’t automatically entitled to any time from specific famous people. Getting people to read or hear your proof is like asking someone out on a date:”

True, but even if no one with a Fields medal would bother to read my proof of RH, I could try to eat my way up the food chain, convincing people who could convince people,…., who could convince Tao or Connes. So in that way it’s not like dating… oh, maybe it is.

Incidentally, there is at least one analytic number theorist in the maths blogging community: Emmanuel Kowalski. In adjacent fields, we have for instance Izabella Laba in combinatorial number theory and Jordan Ellenberg in algebraic number theory, among others. (We could always do with more maths bloggers in any field, of course.)

I find that collaboration is an excellent way to cut down on the risk of error or other embarrassment, and is in any case more fun than solo research. One might be concerned that having a joint author on your “best” papers may somehow look bad on your resume, but I have not seen this to be the case (except perhaps if all of one’s papers are joint with a single, and significantly more senior, mathematician). The four papers cited in my laudationes in Madrid, for instance, were all joint, albeit with four different sets of authors.

Finally, withdrawing a paper is embarrassing in the short term, but it is the professional thing to do when an error comes up (as Li has just done), and if done promptly there is not much lasting damage done to one’s reputation, and it can even be a useful learning experience. (I myself have withdrawn two papers, one due to an arithmetic error (!) and the other because the result had already been proven years ago; I know now to check for these things before releasing a paper. Admittedly, these two papers were on much lower-profile problems than RH.) It’s only when one steadfastly refuses to acknowledge errors in one’s manuscript that have been widely pointed out that one begins to come off as a bit odd.

This is an interesting discussion, and I find Greg’s comments particularly interesting.

One facet of my personality is that I prove things by first finding a plan and running it through badly (mistakes, omissions, incorrect assumptions…) and then filling in the holes. So if I would post/submit too early in the project, I would post garbage… Writing joint papers as Prof. Tao recommends is the best insurance against this happening, if only because coauthors make one more careful.

There is one thing I’m concerned about, although I don’t know how common its most extreme form is… somebody can post result X. You need result X, but the proof has significant holes. You have to fill in those holes, but will get zero credit for it because it’s that other dude’s paper. Filling in those holes can take an inordinate amount of time and effort and can be thankless and very annoying… This has happened to me at least 4 times in the last 3 years, in 2 cases resulting in huge amounts of “wasted” time (because the proofs turned out to be wrong and I had to redo them or surrender).

Comments/advice/input???

About comment 19: for me, claiming to have proved a theorem when one knows that one relies on a fact for which the available “proof” has “significant holes” is close to unethical, and is certainly not mathematics as I see it (it is fine to write things as being conditional, and to point out the holes for explaining why the statement is only conditional).

Serre, in his letters to/from Grothendieck, has what I found to be a significant comment concerning similar issues: Grothendieck was stating (something like) that SGA V was “complete”, and Serre was arguing that it was not, because various diagrams had not been checked to commute, and he states: “when something as important (to me) as the Ramanujan conjecture depends on it”, this is not a simple detail (I don’t have the book handy to check the actual words).

There is also a very critical survey by Novikov of some problems created by similar wrong/incomplete proofs in topology in “Classical and modern topology. Topological phenomena in real world physics”, GAFA 2000 (Tel Aviv, 1999), Geom. Funct. Anal. 2000, Special Volume, Part I, 406–424. In one of the worst case, some people “reproved” an important theorem without realizing that the (unpublished) proof of one crucial result they used depended itself on what they were trying to prove.

“somebody can post result X. You need result X, but the proof has significant holes. You have to fill in those holes, but will get zero credit for it because it’s that other dude’s paper.”

It certainly does happen that you have to rely on a result whose published proof is inadequate. And this can open up a can of worms — but not getting credit for bridging gaps is not the problem. The real problem is that even though you certainly should mend any results that you plan to use, other people are very sensitive to any implication that there is anything wrong with their results. I have botched this issue several times, and not even for the noble purpose of needing the results that I slighted. From my point of view I meant no ill will, but the credit is too delicate an issue simply not to intend to offend others; you have to actively intend not to offend.

Whether a published proof is complete is somewhat subjective and circumstantial. For instance, Perelman’s proof of the Poincare conjecture was not complete by any plebeian standard; but given the stakes, it was essentially complete. On the other hand, as long as you don’t pretend that you’re the one proving the Poincare conjecture, it was and is entirely professional to publish better or more complete proofs of steps in his program.

As that example shows, the key question is whether there is more to learn from your written proof of whatever theorem or lemma than whatever is already published. If there is, then you won’t get “zero credit”. You probably will get less credit than if you were first — mathematics, like most research communities, is somewhat overinvested in “firstism”. But you will get credit; moreover, reliability and meticulousness are assets for your long-term reputation. If filling in these holes is time-consuming, then within reason you are entitled to punt. As Emmanuel says, you can publish a paper that says “Here is a proof of X using published theorem Y. And I will publish my own proof of Y later.”

But you have to figure out what to say about the prior theorem Y without stirring up trouble. By far the best method is to contact the author of theorem Y so that you two can agree on a consistent description. Maybe that author would want the proof to be called incomplete — that’s what I wanted when someone found a gap in one of my papers. Or maybe it would be better to call your proof a second proof, or even just a second explanation of the same proof.

It’s relatively rare for other mathematicians to play extreme hardball if you warn them in advance, and if you yourself are reasonable. It does happen, although even then, you usually have a lot of latitude to find some respectful description of a prior proof. Again, it can be easy to walk into a quarrel, but you have every incentive to walk back out of it.

“One might be concerned that having a joint author on your ‘best’ papers may somehow look bad on your resume, but I have not seen this to be the case (except perhaps if all of one’s papers are joint with a single, and significantly more senior, mathematician).”

I would say, on the contrary, that you are more likely to get too much credit for a joint paper than too little. I want to make the point very carefully, because I certainly don’t think that joint work in mathematics is any way less worthy or less legitimate. What is true is that when people look at your research from a distance, it is relatively easy to expand your CV with joint papers. It is a lot of work to write 16 fresh, solely authored papers. It is much less work to have 16 papers with four authors each. The bureaucracy typically treats 16 papers with four authors as at least half as much achievement as writing 16 papers all on your own — certainly not as 1/4 as much achievement. Again, it’s different when people look at your work in detail, but not everyone can.

Yes, you can be a bit overshadowed if you have a lot of papers with much more senior, more famous people. But even that is usually a fair correction that still leaves you with a residual boost from joint credit. Again, I want to make the point carefully, because it is perfectly legitimate to establish yourself by doing joint work with someone more famous. But for every one of those, there is more than one graduate student or postdoc who never truly becomes independent from his or her advisor.

Isn’t it pretty harsh to throw around the word “crackpot” in this case? This guy is a good, serious mathematician who has devoted his life to advancing mathematical knowledge, and has made good contributions, and he deserves major props for that. So he tried to prove the Riemann Hypothesis, and had a mistake. Well good for him! Aren’t you guys a bit too caught up in this world of genius math where probably the most advanced mathematical results are declared “trivial” and making a serious attempt to prove the Riemann hypothesis can somehow be embarrassing? Just because Terence Tao caught a mistake in 24 hours doesn’t mean Li didn’t get enough other people to check his paper first. I mean, come on, this is Terence Tao we’re talking about. Terence Tao can do things in 24 hours that a team of good mathematicians could not do in their entire lives.

Terry Tao said “I myself have withdrawn two papers, one due to an arithmetic error (!) and the other because the result had already been proven years ago”

From that I infer that the worst-case scenario is proving RH correctly and then realizing that in fact someone else proved it years ago.

“Aren’t you guys a bit too caught up in this world of genius math where … making a serious attempt to prove the Riemann hypothesis can somehow be embarrassing?”

The attempt is not itself embarrassing. What’s embarrassing is how Li handled the announcement and distribution of the proof. He ended up causing a lot of fuss, attracting considerable attention, and wasting many people’s time. There’s no evidence that he had shown the proof to any expert before posting it to the arXiv, and probably this could all have been avoided if he had. I agree that the term “crackpot” sounds a little unfair, since it suggests incompetence, but in at least one aspect it’s not so far off. Specifically, crackpots typically operate in ignorance or disregard of norms for professional behavior. I find it really mystifying why Li would think “Gee, I’ve proved the Riemann hypothesis. Time to write it up and, before showing it to anyone, post it on the arXiv!” This is not such a terrible sin, and it would easily have been forgiven if he had been right (like Perelman was). However, since he was wrong, it leaves him responsible for the fuss and wasted time.

In this case though, it didn’t seem like it exactly took Connes a long time to find the error. Since it was based on his own stuff he probably just spent a few minutes, at most an hour. He mainly made himself look like a crackpot. One sign he’s not is the fact he admitted his error within three days. Usually they never give up, and you get to see their dozens of versions over several months.

Z: “The worst case is scenario is proving correctly RH and then realizing that in fact someone else proved it years ago.”

Something like this actually happened with the Stark-Heegner theorem — that the only imaginary quadratic number fields Q(√a) with unique factorization are those where a is one of −1, −2, −3, −7, −11, −19, −43, −67 and −163.

Heegner offered a proof, which was believed to be flawed. Stark then gave a proof, shortly followed by Baker. Later, Stark realized that there was only a very minor hole in Heegner’s proof, and filled it!

The class number one story is a bit tragic, since Heegner died a few years before he was vindicated. I have heard a few potential explanations why no one gave his proof more consideration. One of them was that he (as an electrical engineer) did not know the right mathematicians to whom he could advertise/explain his techniques. Another possible reason was that adelic techniques were rather fashionable in number theory at the time, and he didn’t use them. Apparently, some people get offended if you use old techniques to prove new theorems.

Li did the right thing by promptly withdrawing his paper when a hole was found in it. From what I understand, he had a hard time finding a job after he published a refutation of de Branges’ proof of the RH, so being looked down on for this latest act probably won’t faze him much. He is certainly not a crackpot, has contributed serious research into the RH, and if he is guilty of a great crime by putting on the arXiv a paper which wasn’t looked at by enough other experts, so are (I would guess) a majority of those of us who use the arXiv.

I’ve learned by sad experience that whenever I feel adrenaline pumping through my veins, the result I’ve just proven is almost certainly wrong.

There are at least two other fairly recent significant results in number theory where there was serious skepticism, based in part on the style of presentation (from what I understand), but where the ideas turned out to be correct:

(1) the proof that zeta(3) is irrational, by Apéry. I’ve heard first-hand accounts of his early lectures on this, and it seems they were pretty odd, and it took some work for people to accept that the ideas were correct;

(2) Mihailescu’s proof of the Catalan Conjecture; there it was Yuri Bilu who was the first to publicly validate the proof.

I’m curious about comment #4 by Chip Neville. Why wasn’t the student granted his Ph.D.? If he was about to submit, then he had done the research, and surely deserved to receive the Ph.D. Being “scooped” by a few weeks shouldn’t change that. There just has to be more to this story. It sounds like some people really set out to squash this guy. What really happened?