I’m here at the 2009 Georgia Topology Conference and Mike Freedman is about to start talking about the current proposal for building a topological quantum computer. I’ll try liveblogging his talk; there’s a copy of the slides at http://stationq.ucsb.edu/docs/Georgia-20090518.pptx (PowerPoint only, sorry!) if you want to see the real thing. I think he gave a version of this talk in Berkeley recently, so some of you may have already heard it. I’ll fail miserably at explaining everything he talked about, but ask questions in the comments!
Mike says that the point of the talk will be to explain how it is that there’s a “topological” approach to building a computer, and try to give an idea of the mathematics, physics and engineering problems involved.
Mostly the group at Microsoft Station Q thinks about the fractional quantum Hall effect. This is a “2DEG”, a 2-dimensional electron gas. Building such a thing in a lab is hard work; we live in a 3-dimensional world after all! It all happens at the interface between two slabs of gallium arsenide, at huge magnetic fields (~10T) and very cold temperatures (mK). At certain magnetic fields, “gapped” or “incompressible” states appear. These states are characterised by the “filling fraction”, the ratio of electrons present to the number that will fit in the “lowest Landau level”. As you vary the magnetic field, you see plateaus in the resistance of the sample, at integer values (phenomenally accurate!) of $\nu$.
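(ed: an aside of mine, not from the slides: the quantity that plateaus is the Hall resistance, $R_{xy} = h/(\nu e^2)$, where the filling fraction $\nu = n_e/(eB/h)$ compares the electron density $n_e$ to the density of states $eB/h$ available in the lowest Landau level.)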
You also see, at very high fields, plateaus at rational values as well. There’s a good plateau at 5/2, and others at 7/2 and 12/5. Just as the BCS theory of superconductors describes “quasiparticles” (effective particles, in the BCS case composed of a pair of electrons), the FQHE has excitations described by quasiparticles with very strange properties. To begin with, they have fractional electric charges! Even better, they have interesting statistics. In our usual 3d world, the spin-statistics theorem ensures that all particles are either fermions or bosons, that is, that they have integer or half-integer spins. Roughly, you should think that $\pi_1(SO(3))$ is just $\mathbb{Z}/2$, while $\pi_1(SO(2)) = \mathbb{Z}$; this means that in two dimensions you can have “anyons” with arbitrary spins (i.e., particles that pick up arbitrary phases when you rotate them by $2\pi$). Stranger than that, you can have non-abelian anyons. Any time you have particles in a 2-dimensional system, the braid group acts on them (actually, on their quantum mechanical Hilbert space) in the usual way. Saying that the particles are either bosons or fermions simply means that this is the trivial or sign representation. They’re non-abelian anyons if any commutator in the braid group survives. Conjecturally, there are “complete” non-abelian anyons, when the image of the braid group in the unitary group of the Hilbert space is dense.
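To make “non-abelian” concrete (this little example is mine, not Mike’s), here is a numerical sketch assuming the usual F and R matrices for Fibonacci anyons, the $SU(2)_3$ theory that reappears below for the 12/5 state: three such anyons have a 2-dimensional fusion space, and the two elementary braid generators act on it by matrices that visibly fail to commute.

    import numpy as np

    phi = (1 + np.sqrt(5)) / 2   # golden ratio

    # F-matrix: change of basis between the two fusion trees of three Fibonacci anyons
    F = np.array([[1 / phi,           1 / np.sqrt(phi)],
                  [1 / np.sqrt(phi), -1 / phi        ]])

    # R-matrix: phases picked up when exchanging two anyons, one for each fusion channel
    R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

    sigma1 = R           # braid the first pair of anyons
    sigma2 = F @ R @ F   # braid the second pair (F is its own inverse)

    # The generators do not commute, so the braid group representation is non-abelian
    commutator = sigma1 @ sigma2 - sigma2 @ sigma1
    print(np.linalg.norm(commutator))   # nonzero

Up to overall phases these two matrices are known to generate a dense subgroup of the unitary group, which is the “complete” case.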
Mike then gave a brief history of the physics behind the FQHE: the classical Hall effect in 1879, the prediction of the (integer) QHE in 1975, and the 1980 observation of the first few plateaus, resulting in a Nobel Prize for von Klitzing, as well as incredibly accurate calibration techniques. (In fact the ohm is now defined in terms of the integer QHE.) Shortly after, in 1982, and resulting in further Nobel Prizes, the fractional QHE was observed (at $\nu = 1/3$, among other odd-denominator fractions), and partially explained by Laughlin. This explanation only worked with an odd denominator, and so the observation of a $\nu = 5/2$ state in 1987 by Willett and Pfeiffer caused some problems! In 1991, Moore and Read proposed an amazing answer: an explanation of the 5/2 state in terms of the conformal field theory associated to $SU(2)$ at level 2. There’s now lots of work to do to observe the many predictions coming from this description. In the other direction, several experimental groups visit Santa Barbara twice a year to present their amazing experimental data (look at Mike’s slides for examples). Mike then showed a few slides of data from these groups, especially a striking one called “Reproducibility” that shows that the quantum mechanical states in FQHE systems are stable over timescales of up to a week! This is especially promising, as we expect to be able to do the elementary operations required for quantum computing in something like a microsecond.
Now it’s time for Mike’s “briefest history of numbers”. This begins with futures contracts for sheep in 10000 BC Anatolia, with unary notation (5=11111), which is appropriate enough for everyday objects. Somewhere around 1000BC, place notation was invented in various places, which allows you to write exponentially large numbers, appropriate for combinatorial objects and statistical physics. Even more recently, with the discovery of linear algebra and Hilbert spaces, we’ve realised we need “even bigger” numbers to describe quantum mechanical linear superpositions of states. (ed: huh?)
The idea of topological phases of matter goes all the way back to Lord Kelvin in around 1867. Tait had built a machine that produced smoke rings, and even knotted smoke rings, and this attracted the attention of Kelvin, who began thinking about “knots in the aether” as a basis for chemistry; the classification of small knots perhaps corresponded to the nascent classification of elements, and the ability of knots to link was perhaps a basis for chemical bonding. This idea didn’t pan out, and it took another century for knots to return to physics.
It’s somewhat surprising that topology can enter into fundamental physics. The Hamiltonian describing interacting electrons is far from topologically invariant. On the other hand, the Jones polynomial for knots was explained by Witten in terms of Chern-Simons theory, which has a Lagrangian that is easily seen to be topologically invariant. How are we going to tweak an actual physical system in order to get topological invariance? The answer is a bit of a cop-out, but seems roughly plausible. The basic idea is just that at very low temperatures, terms in a Hamiltonian that involve fewer derivatives matter much more than those with more derivatives. The Chern-Simons action only has one derivative, while kinetic energy has two, and so even if you don’t (or can’t!) know where a Chern-Simons term arises, you expect that if it’s there at all it’s going to dominate at low temperatures.
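(ed: a rough way to see the derivative counting, my gloss rather than Mike’s: in 2+1 dimensions the Chern-Simons term $\frac{k}{4\pi}\int a\wedge da$ has a dimensionless coefficient, whereas a two-derivative kinetic term like $\frac{1}{g^2}\int f\wedge\star f$ needs a coefficient with dimensions of length, so at distances much longer than that length, i.e. at low energies, the one-derivative term dominates.)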
What is a topological state, exactly? It’s a system with a stable degenerate ground state. Degenerate ground states in quantum mechanical systems aren’t unusual, but they nearly always arise because of symmetries of the system, and as soon as you break that symmetry the ground state splits. Stable degeneracies are harder to engineer! Further, the different states in this ground state subspace must be separated from each other as far as local operators are concerned. This means that any “compactly supported” Hamiltonian acts trivially on the ground state subspace.
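(ed: one standard way to formalise this, my gloss rather than anything Mike wrote down: if $P$ is the projector onto the ground state subspace and $O$ is any operator supported on a region much smaller than the whole system, then $POP = c(O)\,P$ for some scalar $c(O)$, so no local operator can distinguish the ground states or induce transitions between them.)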
The precision of the degeneracy, and the precision with which nonlocal operations can be implemented, is controlled by tunnelling amplitudes, which can be incredibly small. These tunnelling amplitudes are exponentially suppressed by the length scale of your system (think about the simplest QM problem of tunnelling through a wall). The hope is that this will do away with the need for quantum error correction algorithms, which are currently the focus of much work on the theoretical computer science side of quantum computing.
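(ed: the textbook estimate, added by me: a particle of mass $m$ tunnelling through a barrier of width $L$ and height $V$ above its energy $E$ picks up an amplitude $\sim e^{-L\sqrt{2m(V-E)}/\hbar}$, so the splitting of the would-be degenerate ground states falls off like $e^{-L/\xi}$ for some microscopic length $\xi$ as the system is made larger.)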
(a short technical interruption, when Mike’s laptop turned itself off… oops!)
The most important experimental tool for observing the properties of the topological phases is the interferometer. There are a bunch of pictures, including electron micrographs, in the slides. These let you measure the electric charge of the quasiparticles (including a recent measurement of charge $e/4$ for the $5/2$ state). They also let you measure things like the specialisations of the Jones polynomial of the Hopf link at the root of unity corresponding to the appropriate level. The original idea for quantum computation was to use the braiding in Chern-Simons theory to approximate an arbitrary unitary (the “quantum algorithm”). Unfortunately, while there’s been recent progress in building interferometers and measuring topological quantities using them, no one has had much idea how to actually braid quasiparticles around each other! The new approach uses “forced measurement” to simulate a physical braiding.
I’m a little behind, so I’ll write a description of forced measurement later. You can also look up the Bonderson-Freedman-Nayak paper on the arXiv.
By now, we’re 90% sure that the 5/2 state corresponds to $SU(2)_2$, while we think that 12/5 corresponds to $SU(2)_3$. There are various advantages to each from the practical point of view of building a computer. From a mathematical point of view, 12/5 is great, in particular because the braiding is universal. On the other hand there are many engineering issues which look much better for 5/2; in particular, the 5/2 state really does uncontroversially exist in the lab!
Mike finished by encouraging all quantum topologists to think just as much about quantum physics as about mathematics. (Maybe it’s a pity I haven’t really followed that advice myself!)
It was very nice meeting you Scott. I am reading my way through Zee as all this Chern-Simons stuff depends on it. Have you met Zee?
Cool stuff!
In a paper on categorification I claimed in a half-joking way that decategorification began when people were trying to count sheep. So, it’s very charming to hear that counting “began with futures contracts for sheep in 10000 BC Anatolia, with unary notation (5=11111).” I’ll have to check out the history!
Somewhere around 1000BC, place notation was invented in various places…
Quite appropriately named, then!
Indeed great stuff. Question: when it is said “By now, we’re 90% sure that the 5/2 state corresponds to $SU(2)_2$”, what does it mean? What is the other 10% possibility?
Hmm… As far as I know there’s no great alternative for the 5/2 state. For the 12/5 state, there’s the “Bonderson-Slingerland hierarchy”, described here and with numerical evidence presented here.
Sadly, I don’t know how to translate these “hierarchy” ideas into fusion category language.
Hi Scott,
That’s a great summary! It makes me wish I could have seen the talk.
Let me just add one small but important clarification. This is something Mike (and many others) are well aware of but is often overlooked in summary talks that don’t have time to delve into too many details. You mention that the topological approach to quantum computing will hopefully “…do away with the need for quantum error correction algorithms…”. This would be great if it were true, but there is currently no conclusive evidence that we will be able to get rid of so-called “active” error correction and use only the “passive” topological protection from our errors. In fact, there are even some no-go theorems that suggest that a self-correcting quantum computer (one that requires no active error correction) may not be possible. There are still plenty of loopholes to exploit, however.
The bottom line is that the jury is still out on whether one can describe a Hamiltonian for a topological quantum computer that is intrinsically error-free. But it is certainly one of the more promising routes to building a large-scale quantum computer, and contains a wealth of wonderful mathematics and physics.
This would be great if it were true, but there is currently no conclusive evidence that we will be able to get rid of so-called “active” error correction and use only the “passive” topological protection from our errors.
I think that Mike Freedman’s work is great and I certainly agree that quantum topological invariants lead to important quantum fault tolerance methods. However, I don’t draw the same distinction between “active” and “passive” fault tolerance, and I don’t know that topology truly is a separate approach to building a quantum computer. The question of whether you can get rid of “active” error correction could simply be a non-question.
Consider the situation with classical error correction. “Active” error correction is error correction accomplished in software. “Passive” error correction is accomplished in hardware. But what’s software and what’s hardware? Which of these words describes ROM? Which describes patterned media? One way to make a fault-tolerant classical bit is with the Ising model below its phase transition. If you simulate the Ising model with a C program, then sure, that’s software. If you make a small lump of iron, which is very similar to the Ising model, then sure, that’s hardware. But what if you make an Ising model, or something similar, with coupled spin memory cells?
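A minimal sketch of what I have in mind, in Python rather than C and with parameters chosen purely for illustration: store a bit as the all-up spin configuration, run single-spin-flip Metropolis dynamics well below the critical temperature, and read the bit back off the sign of the magnetization.

    import numpy as np

    rng = np.random.default_rng(0)
    L, T, steps = 32, 1.5, 200_000        # T well below T_c ~ 2.27 (units J = k_B = 1)
    spins = np.ones((L, L), dtype=int)    # "write" the bit: all spins up

    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        # energy cost of flipping spin (i, j), with periodic boundaries
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

    # "read" the bit: below T_c the magnetization stays pinned near +1,
    # so the stored bit survives the thermal noise
    print(spins.mean())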
It is true that fault tolerance in existing classical computers is extremely hardware-dominated, to the point that hardware engineers are almost the only computer engineers who have to think much about faulty gates. That is, their main goal is to fabricate gates that are almost never faulty. But already in the case of hard drive storage rather than gates, that is beginning to change.
You could conjecture that fault tolerance in good quantum computers will similarly be hardware-dominated or materials-science dominated. Or you could conjecture the opposite. Either way, this is a fuzzy conjecture, not a rigorous yes-no conjecture.
Even if quantum fault tolerance proves to be software-dominated, the TQFT methods could still be useful for getting a good fault tolerance constant with low overhead. I think that there is a lot left to figure out even at the mathematically clean level of software, and not just at the level of messy condensed-matter physics.
The bottom line is that the jury is still out on whether one can describe a Hamiltonian for a topological quantum computer that is intrinsically error-free.
It seems to me very likely that you could make an artificial 2D Hamiltonian, with a fairly complicated local interaction, that would accomplish this. I think that the jury is still out as to whether it’s realistic to create such a Hamiltonian with materials science.
On the last point, there’s Mike’s 2000 paper that says you can construct a somewhat unrealistic but nevertheless ‘local’ 2d Hamiltonian whose gapped ground state space realises the SU(2)_3 Hilbert spaces.
Oops, the gap is just a conjecture there.
Just having a gap is not the problem if you don’t care about realism. You can make a local Hamiltonian with an energy gap from any finite unitary spherical category using the Turaev-Viro model. Mike is doing more work to obtain a gap because he does care about realism.
The harder part of this question is to make a dynamical system that puts the system in its ground state (or approximately its ground state) at low temperatures. In other words, the model has to have not only a gap, but also a low-temperature phase transition. That is the part that hasn’t been done for the Fibonacci category, for instance. But I would conjecture that there is a way to do it.
Hi Greg,
Let me clarify what I was trying to say. We don’t know of any gapped local Hamiltonian on a lattice (in 3 or fewer spatial dimensions) with constant strength interactions that exhibits topological order at finite temperature. Finding such a model would be the first step (in my opinion) to finding what I would call a self-correcting quantum memory.
You are absolutely right that I’m thinking of a quantum analog of a classical bit encoded in the Ising model below the critical temperature. Regardless of the fuzzy line between hardware and software error correction, we don’t have explicit constructions of such a model. In the absence of such a model it seems likely that error correction via a combination of hardware and software techniques will be necessary.
Let me clarify what I was trying to say. We don’t know of any gapped local Hamiltonian on a lattice (in 3 or fewer spatial dimensions) with constant strength interactions that exhibits topological order at finite temperature. Finding such a model would be the first step (in my opinion) to finding what I would call a self-correcting quantum memory.
Here are my thoughts on that point. I believe that it is similar, maybe even equivalent, to proposing a quantum fault tolerance scheme which is implemented locally and autonomously, in the sense of a quantum cellular automaton. Although, in order to remove entropy, it would be a QCA with quantum operations and not just unitary operators.
There is the well-known construction from Caltech of such a QCA in four dimensions that is not QC-universal, but does store a qubit. It uses a homological additive quantum code on a four-dimensional lattice, using middle homology. The error syndrome of the code consists of closed loops, and the QCA shrinks the loops, on average, so that they tend to disappear.
In a 2D quantum code, or even a 3D code as far as anyone knows, the error syndrome can be 0-dimensional, i.e., a quasiparticle. There is no obvious way using a local dynamic to make the quasiparticles clump and cancel.
However, this is local in the sense of hard-local. Although I have only heard about it as a rank amateur, I have heard that there is something called a “Kosterlitz-Thouless transition” that is based on a soft-local attraction of quasiparticles in two dimensions. The attraction has a 1/r or 1/r^2 law. I would suppose that you can make a soft-local QCA in two dimensions that protects a qubit and that is based on a surface code; in fact I think that it would be similar to multiscale error correction procedures that have already been simulated. And I would optimistically conjecture that there is even a version of this for any finite fusion category, for instance the Fibonacci category, which is known to be universal for quantum computation.