
An editable database tracking freely accessible mathematics literature. January 3, 2014

Posted by Scott Morrison in papers, publishing, Uncategorized, websites.

(This post continues a discussion started by Tim Gowers on google+. [1] [2])

(For the impatient, go visit http://tqft.net/mlp, or for the really impatient http://tqft.net/mlp/wiki/Adv._Math./232_(2013).)

It would be nice to know how much of the mathematical literature is freely accessible. Here by ‘freely accessible’ I mean “there is a URL which, in any browser anywhere in the world, resolves to the contents of the article”. (And my intention throughout is that this article is legitimately hosted, either on the arxiv, on an institutional repository, or on an author’s webpage, but I don’t care how the article is actually licensed.) I think it’s going to be okay to not worry too much about discrepancies between the published version and a freely accessible version — we’re all grown ups and understand that these things happen. Perhaps a short comment field, containing for example “minor differences from the published version” could be provided when necessary.

This post outlines an idea to achieve this, via a human editable database containing the tables of contents of journals, and links, where available, to a freely accessible copy of the articles.

It’s important to realize that the goal is *not* to laboriously create a bad search engine. Google Scholar already does a very good job of identifying freely accessible copies of particular mathematics articles. The goal is to be able to definitively answer questions such as “which journals are primarily, or even entirely, freely accessible?”, to track progress towards making the mathematical literature more accessible, and finally to draw attention to, and focus enthusiasm for, such progress.

I think it’s essential, although this is not obvious, that at first the database is primarily created “by hand”. Certainly there is scope for computer programs to help a lot! (For example, by populating tables of contents, or querying Google Scholar or other sources to find freely accessible versions.) Nevertheless, curation at the per-article level will certainly be necessary, and so whichever route one takes it must be possible for humans to edit the database. I think that starting off with the goal of primarily human contributions achieves two purposes: one, it provides an immediate means to recruit and organize interested participants, and two, it allows much more flexibility in the design and organization of the collected data — hopefully many eyes will reveal bad decisions early, while they’re easy to fix.

That said, we had better remember that eventually computers may be very helpful, and avoid design decisions that make computer interaction with the database difficult.

What should this database look like? I’m imagining a website containing a list of journals (at first perhaps just one), and for each journal a list of issues, and for each issue a table of contents.

The table of contents might be very simple, having as few as four columns: the title, the authors, the link to the publisher’s webpage, and a freely accessible link, if known. All these lists and table-of-contents entries must be editable by a user — if, for example, no freely accessible link is known, this fact should be displayed along with a prominent link or button which allows a reader to contribute one.

At this point I think it’s time to consider what software might drive this website. One option is to build something specifically tailored to the purpose. Another is to use an essentially off-the-shelf wiki, for example tiddlywiki, which Tim Gowers used when analyzing an issue of Discrete Math.

Custom software is of course great, but it takes programming experience and resources. (That said, perhaps not much — I’m confident I could make something usable myself, and I know people who could do it in a more reasonable timespan!) I want to essentially ignore this possibility, and instead use mediawiki (the wiki software driving wikipedia) to build a very simple database that is readable and editable by both humans and computers. If you’re impatient, jump to http://tqft.net/mlp and start editing! I’ve previously used it to develop the Knot Atlas at http://katlas.org/ with Dror Bar-Natan (and subsequently many wiki editors). There we solved a very similar set of problems, achieving human readable and editable pages, with “under the hood” a very simple database maintained directly in the wiki.
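To make the “simple database readable and editable by both humans and computers” idea concrete, here is a minimal sketch in Python of reading one table-of-contents entry with the four columns described above. The pipe-separated row format here is a hypothetical convention for illustration, not the actual markup used on the MLP wiki:

```python
# A minimal sketch (not the actual MLP wiki format) of parsing one
# table-of-contents row into the four fields described above.
# The pipe-separated layout is a hypothetical convention.

def parse_toc_row(row: str) -> dict:
    """Split a 'title | authors | publisher link | free link' row.

    An empty fourth field means no freely accessible copy is known yet.
    """
    parts = [p.strip() for p in row.split("|")]
    title, authors, publisher_url = parts[0], parts[1], parts[2]
    free_url = parts[3] if len(parts) > 3 and parts[3] else None
    return {
        "title": title,
        "authors": authors,
        "publisher_url": publisher_url,
        "free_url": free_url,  # None = prompt readers to contribute one
    }

# A row with no known freely accessible link (trailing field left empty):
record = parse_toc_row(
    "On a theorem | A. Author, B. Author | https://doi.org/10.1000/x |")
```

The point of keeping the format this rigid is exactly the one made above: a human can edit a row in the wiki, and a program can still read it back out without ambiguity.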

Man and machine thinking about SPC4 June 29, 2009

Posted by Scott Morrison in crazy ideas, link homology, low-dimensional topology, papers, Uncategorized.

I’ve just uploaded a paper to the arXiv, Man and machine thinking about the smooth 4-dimensional Poincaré conjecture, joint with Michael Freedman, Robert Gompf, and Kevin Walker.

The smooth 4-dimensional Poincaré conjecture (SPC4) is the “last man standing in geometric topology”: the last open problem immediately recognizable to a topologist from the 1950s. It says, of course:

A smooth four dimensional manifold \Sigma homeomorphic to the 4-sphere S^4 is actually diffeomorphic to it, \Sigma = S^4.

We try to have it both ways in this paper, hoping to both prove and disprove the conjecture! Unsurprisingly we’re not particularly successful in either direction, but we think there are some interesting things to say regardless. When I say we “hope to prove the conjecture”, really I mean that we suggest a conjecture equivalent to SPC4, but perhaps friendlier looking to 3-manifold topologists. When I say we “hope to disprove the conjecture”, really I mean that we explain a potentially computable obstruction, which might suffice to establish a counterexample. We also get to draw some amazingly complicated links:

[Image: SPC4 link]


Have someone else write your bibliography June 14, 2009

Posted by Scott Morrison in papers, the arXiv, websites.

Whenever I’m finishing off a paper, at some point I have to sit down and clean up all the references, which generally look something like \cite{Popa?} or \cite{that paper by Marco and co}. Wouldn’t it be nice if someone else could do the rest?

If you don’t already know about it, one great resource is mathscinet, which will produce nicely formatted BIBTEX entries for you (example). If you want to be even more efficient, you can wander around mathscinet, saving articles to your “clipboard”, and then ask mathscinet to give you the BIBTEX entries for everything at once. (After you have articles on the clipboard, follow the “clipboard” link in the top right of the page, then select BIBTEX from the drop-down box and click “SaveClip”.)

If you’re even lazier, you could use the two command-line scripts that I use (download find-missing-bibitems and get-mathscinet-bibtex and put them on your path; you’ll need linux/OSX/cygwin to run them). When you cite items in LaTeX, cite them via their mathscinet identifiers, e.g. \cite{MR1278111} instead of \cite{Popa?}. Then, if you usually type latex article to compile, and bibtex article to generate the bibliography, you can also type find-missing-bibitems article, and all the missing BIBTEX entries will appear! For example, after adding \cite{MR1278111} somewhere in my text, the output of find-missing-bibitems article is

@article {MR1278111,
    AUTHOR = {Popa, Sorin},
     TITLE = {Classification of amenable subfactors of type {II}},
   JOURNAL = {Acta Math.},
  FJOURNAL = {Acta Mathematica},
    VOLUME = {172},
      YEAR = {1994},
    NUMBER = {2},
     PAGES = {163--255},
      ISSN = {0001-5962},
     CODEN = {ACMAA8},
   MRCLASS = {46L37 (46L10 46L40)},
  MRNUMBER = {MR1278111 (95f:46105)},
MRREVIEWER = {V. S. Sunder},
}

If you’re brave, you could run something like

find-missing-bibitems article >> bibliography.bib

to automatically append any missing entries to your BIBTEX file. The really enthusiastic could incorporate this script into the standard latex-latex-bibtex-latex cycle.
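For the curious, the “find what’s missing” half of this workflow is easy to sketch. The following is a rough Python approximation, not the actual shell script: it collects the MR-prefixed keys cited in a LaTeX source and reports those with no entry in the BIBTEX file. The step the real script then performs, fetching the missing entries from mathscinet, is omitted here.

```python
# Rough sketch of the "find missing entries" step of find-missing-bibitems,
# in Python rather than the original shell. Fetching the actual BibTeX from
# MathSciNet is deliberately left out.
import re

def missing_citations(tex_source: str, bib_source: str) -> set:
    # Collect keys from \cite{MR1278111} or \cite{MR1,MR2} commands.
    cited = set()
    for group in re.findall(r"\\cite\{([^}]*)\}", tex_source):
        cited.update(key.strip() for key in group.split(","))
    # Keys already defined in the .bib file, e.g. "@article {MR1278111,".
    defined = set(re.findall(r"@\w+\s*\{\s*([^,\s]+)", bib_source))
    # Only MR-prefixed keys can be looked up on MathSciNet.
    return {k for k in cited if k.startswith("MR")} - defined

tex = r"Popa's theorem \cite{MR1278111} and \cite{MR1239440} imply..."
bib = "@article {MR1278111,\n AUTHOR = {Popa, Sorin},\n}\n"
# missing_citations(tex, bib) reports the one key with no BibTeX entry.
```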

Really, I like to have more in my BIBTEX file: I generally use the note field to include a link to the mathscinet review, and a link to the DOI for the paper on the publisher’s webpage. If available, I want a link to the arxiv version of the paper too, for people without institutional access to the published version. Currently, the scripts can’t do this automatically, but it might not be much more work. Maybe next time.
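As a sketch of what that extra step might look like, here is a hypothetical helper, not part of the current scripts, that splices a note field into an existing BIBTEX entry; the mathscinet, DOI, and arxiv links would go in the note text:

```python
# Hypothetical sketch (not part of the actual scripts) of adding a NOTE
# field to an existing BibTeX entry, just before its closing brace.

def add_note(entry: str, note_text: str) -> str:
    """Insert a NOTE field before the final '}' of a BibTeX entry."""
    head, _, _ = entry.rpartition("}")
    # Drop any trailing comma on the last existing field, then re-add
    # the field separator, the new NOTE field, and the closing brace.
    head = head.rstrip().rstrip(",")
    return head + ",\n     NOTE = {" + note_text + "},\n}\n"

entry = "@article {MR1278111,\n AUTHOR = {Popa, Sorin},\n}\n"
enriched = add_note(entry, "review at MathSciNet; also on the arXiv")
```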

This paper was written for our blog August 21, 2008

Posted by David Speyer in Algebraic Geometry, mathematical physics, Number theory, Paper Advertisement, papers, representation theory.

I’ve recently been reading a paper which ties together a number of this blog’s themes: Canonical Quantization of Symplectic Vector Spaces over Finite Fields by Gurevich and Hadani. I’m going to try to write an introduction to this paper, in order to motivate you all to look at it. It really has something for everyone: symplectic vector spaces, analogies to physics, Fourier transforms, representation theory of finite groups, Gauss sums, perverse sheaves and, yes, \theta functions. In a later paper, together with Roger Howe, the authors use these methods to prove the law of quadratic reciprocity and to compute the sign of the Gauss sum. For the experts, Gurevich and Hadani’s result can be summarized as follows: they provide a conceptual explanation of why there is no analogue of the metaplectic group over a finite field. Not an expert? Keep reading!


Gale and Koszul duality, together at last July 14, 2008

Posted by Ben Webster in category O, combinatorics, hyperplanes, mathematical physics, papers, the arXiv.

So, in past posts, I’ve attempted to explain a bit about Gale duality and about Koszul duality, so now I feel like I should try to explain what they have to do with each other, since I (and some other people) just posted a preprint called “Gale duality and Koszul duality” to the arXiv.

The short version is this: we describe a way of getting a category \mathcal{C}(\mathcal{V}) (or equivalently, an algebra) from a linear program \mathcal{V} (or as we call it, a polarized hyperplane arrangement).

Before describing the construction of this category, let me tell you some of the properties that make it appealing.

Theorem. \mathcal{C}(\mathcal{V}) is Koszul (that is, it can be given a grading for which the induced grading on the Ext-algebra of the simples matches the homological grading).

In fact, this category satisfies a somewhat stronger property: it is standard Koszul (as defined by Ágoston, Dlab and Lukács; those of you with Springer access can get the paper here). In short, the category has a special set of objects called “standard modules” (which you should think of as analogous to Verma modules) which make it a “highest weight category,” such that these modules are sent by Koszul duality to a set of standards for the Koszul dual.

Of course, whenever confronted with a Koszul category, we immediately ask ourselves what its Koszul dual is.  In our case, there is a rather nice answer.

Theorem. The Koszul dual to \mathcal{C}(\mathcal{V}) is \mathcal{C}(\mathcal{V}^\vee), the category associated to the Gale dual \mathcal{V}^\vee of \mathcal{V}.

Now, part of the data of a linear program is an “objective function” (which we’ll denote by \xi) and a set of bounds for the constraints (which will be encoded by a vector \eta).  Stripping these away, we end up with a vector arrangement: simply a choice of a set of vectors in a vector space, which will specify the constraints.

Theorem. If two linear programs have the same underlying vector arrangement, the categories \mathcal{C}(\mathcal{V}) may not be equivalent, but they will be derived equivalent; that is, their bounded derived categories will be equivalent.

Interestingly, these equivalences are far from being canonical. In the course of their construction, one actually obtains a large group of auto-equivalences acting on the derived category of \mathcal{C}(\mathcal{V}), which we conjecture to include the fundamental group of the space of generic choices of objective function.


New Photograph March 25, 2008

Posted by A.J. Tolland in low-dimensional topology, papers, quantum algebra, talks, tqft.

Last Friday, we had a seminar at Berkeley — or rather, at Noah’s house — featuring Mike Freedman and some quantity of beer. Mike spoke about some of the hurdles he had to overcome in writing his recent paper with Danny Calegari and Kevin Walker. One of the main results of this paper is that there is a “complexity function” c, which maps from the set of closed 3-manifolds to an ordered set, and that this function satisfies the “topological” Cauchy-Schwarz inequality.

c(A \cup_S B) \leq \max\{c(A \cup_S A), c(B \cup_S B)\}

Here, A and B are 3-manifolds with boundary S. [EDIT: and equality is only achieved if A = B] This inequality looks like the sort of thing you might derive from topological field theory, using the fact that Z(A \cup_S B) = \langle Z(A), Z(B) \rangle_{Z(S)}. Unfortunately, it’s difficult to actually derive this sort of theorem from any well-understood TQFT, thanks to an old theorem of Vafa’s, which states, roughly, that there are always two 3-manifolds related by a Dehn twist that a given rational TQFT can’t distinguish. Mike speculated that a non-rational TQFT might be able to do the trick, but what he and his collaborators actually did was an end run around the TQFT problem. They simply proved that the function c exists.

I tell you all this, not because I’m about to explain what c is, but to explain our new banner picture. We realized after the talk that there were a fair number of us Secret Blogging Seminarians in one place, and that we ought to take a photo.

oldmathpapers.org, version 0 March 6, 2008

Posted by Scott Morrison in evil journals, papers, the arXiv, Uncategorized.

A while ago we discussed the idea of “oldmathpapers.org”, a public repository for maths papers that aren’t readily available online. Many people quickly pointed out that this was a dangerous idea, getting very quickly into the deep waters of copyright violation.

Nevertheless, here’s version 0, ready for your consumption! It neatly sidesteps the whole copyright issue by not keeping copies of (or even looking at) the actual papers — it’s simply intended to keep track of links to old maths papers, hosted elsewhere. That elsewhere, of course, is meant to be your and my web pages!

Functionality is extremely limited; you can add a paper, and you can list everything there so far, but there’s no searching, no sorting, no deleting, no correcting. On the other hand, I think that won’t be too hard to add. The most important thing to note about the design of “oldmathpapers.org” is that it relies on MathSciNet identifiers to keep track of things. These exist for pretty much every published maths paper, and they’re a ready source of high quality metadata — and it’s this that will hopefully make the searching and sorting easy.
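In Python terms, the underlying data model is about as simple as it gets: something like the following sketch, where the field names and the MR number are illustrative rather than the actual oldmathpapers.org internals.

```python
# A hypothetical sketch of the design described above: a registry of links
# to old papers, keyed by MathSciNet identifier. The schema and the MR
# number below are illustrative, not the real oldmathpapers.org internals.
papers = {}

def add_paper(mr_id, url, hosted_by):
    """Record a link to a copy of the paper with MathSciNet id mr_id.

    Only links are stored; the site never hosts (or even looks at) the
    papers themselves, which is how the copyright question is sidestepped.
    """
    papers.setdefault(mr_id, {"links": []})
    papers[mr_id]["links"].append({"url": url, "hosted_by": hosted_by})

add_paper("MR0000000", "https://example.edu/~author/old-paper.pdf",
          "author's homepage")
```

Because every record hangs off an MR identifier, searching and sorting later reduces to joining against MathSciNet’s metadata rather than storing (and cleaning) titles and author lists by hand.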

Below, I’ll walk you through adding a paper: “Canonical bases in tensor products and graphical calculus for U_q(sl_2)”, by I. Frenkel and M. Khovanov. After that, please take a moment to contribute some old math papers!


Proofreading October 29, 2007

Posted by Ben Webster in papers.

It strikes me as a real structural problem of the mathematics community that math papers are, on the whole, simply not very well proofread. Many papers, sometimes very important ones, are full of errors, in large part because it is simply a very time-consuming and difficult task to check a math paper for errors. On a more general level, I think most math papers could use a good once-over by a talented editor who also understands the mathematics. Too bad such people are in short supply, and generally have better things to do.

I’ll confess that proofreading is sometimes a bit of a problem for me, probably more so than for some more detail-oriented people. While there’s been quite a range in the number of errors found by referees, I’ve gotten a couple of reports on papers I’ve submitted that found quite a few errors; in fact, I received just such a report (for my paper with Geordie Williamson) just yesterday.

What’s most dispiriting about such a report is that it’s not as though I didn’t proofread the paper, more than once (as did Geordie, and at least one other mathematician). I just went right over a number of errors that became glaring when the referee pointed them out. And while it’s generally pretty easy to fix whatever the referee actually points out, one knows that there are yet more errors hiding in there (especially in this case, where the referee indicates that they lost patience, and didn’t carefully proofread the whole paper), and it seems hopeless to think you will catch all of them.

Does anyone have recommendations other than Ritalin? Of course, it would be best if one could dispatch a horde of flying monkeys blog readers to proofread one’s papers for one, but that doesn’t seem like a sustainable plan (not that I wouldn’t appreciate any comments readers have on the paper. It’s on my webpage here. Don’t look at the arXived version. That’s out of date).

Zotero! September 26, 2007

Posted by Ben Webster in papers, the arXiv.

Has anyone else tried Zotero yet? It’s a Firefox add-on intended to help you organize the papers you use in your research (organize notes, make bibliographies [it seems to have bibtex support], etc.). It still seems to have some kinks that need working out (for example, it seems to work better with the arXiv’s website than the Front’s), but it does look promising. And if it actually allows me to keep track of which arxiv articles I have downloaded on my computer, it will be invaluable for that alone (at the moment, it’s much easier for me to download the PDF from the web again rather than locate it in my downloads folder, a ridiculous state of affairs).

Zeta function relations and linearly equivalent group actions August 29, 2007

Posted by Ben Webster in Number theory, papers, representation theory.

Since we’ve already had one post on the relationship between group theory and algebraic number theory, I don’t see any reason to stop, so I thought I would write about some old research I did as an undergraduate (6 years ago! yeesh).

Now, every number field has a zeta function (often called the Dedekind zeta function), generalizing the Riemann zeta function. I hope you will all remember that the Riemann zeta function (which is the zeta function of the rationals) is defined by

\displaystyle{\zeta(z)=\sum_{n\in \mathbb{Z}_{>0}}n^{-z}=\prod_{p \text{ prime}}(1-p^{-z})^{-1}}.

If you take a more general number field K, then the Dedekind zeta function of that field is a similar sum or product over the ideals \mathcal{I}_K or prime ideals \mathcal{P}_K of K:

\displaystyle{\zeta_K(z)=\sum_{\mathfrak n\in \mathcal{I}_K}N(\mathfrak n)^{-z}=\prod_{\mathfrak p \in \mathcal{P}_K}(1-N(\mathfrak p)^{-z})^{-1}}.

Here N(\mathfrak n) is the norm of \mathfrak n.
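As a quick numerical sanity check that the sum and the product really do agree, one can truncate both in the Riemann case K = \mathbb{Q} at z = 2, where \zeta(2) = \pi^2/6. This little Python sketch is an illustration added here, not part of the original post:

```python
# Numerically compare the Dirichlet series and Euler product forms of the
# Riemann zeta function at z = 2, where the exact value is pi^2/6.
import math

def zeta_sum(z, terms=100_000):
    # Dirichlet series: sum over n >= 1 of n^{-z}, truncated.
    return sum(n ** -z for n in range(1, terms + 1))

def zeta_euler(z, bound=100_000):
    # Euler product over primes p <= bound of (1 - p^{-z})^{-1},
    # with the primes found by a simple sieve of Eratosthenes.
    sieve = [True] * (bound + 1)
    prod = 1.0
    for p in range(2, bound + 1):
        if sieve[p]:
            prod /= 1 - p ** -z
            for multiple in range(p * p, bound + 1, p):
                sieve[multiple] = False
    return prod

# Both truncations approach pi^2/6 = 1.6449..., and each other.
```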

So how can we understand this function?
