## Quantum mechanics and geometry

*November 16, 2009*

*Posted by Scott Morrison in crazy ideas, differential geometry, quantum mechanics.*


Here’s a nice little story about quantum mechanics, which surprisingly few mathematicians seem to know about. The essential idea is “quantum mechanics on the projective space looks remarkably like classical mechanics”! Everything I say here comes from two papers: **Geometrical Formulation of Quantum Mechanics** (gr-qc/9706069), by Ashtekar and Schilling, and **Geometry of stochastic state vector reduction**, by Hughston. If you’re interested in more details, I’d encourage you to read these papers — they’re well written and contain many further references.

As you’ll recall, quantum mechanics says that systems are described by Hilbert spaces, with states given by vectors. I’ll stick with finite-dimensional systems (e.g. particles with spin) for simplicity, but this isn’t essential for what follows. A particular self-adjoint operator $H$, called the *Hamiltonian*, governs the dynamics of the system via the *Schrodinger equation* $i\hbar \frac{d\psi}{dt} = H\psi$. Quantum mechanics also says something about measurement, which we’ll come to in a moment.

Now the Schrodinger equation defines a one-parameter flow via $\psi(t) = e^{-iHt/\hbar}\psi(0)$. This preserves the unit sphere in our Hilbert space, and descends to a flow on the projective space. The projective space is naturally a Kahler manifold, and in particular a symplectic manifold, so we immediately ask if this flow is Hamiltonian. The answer is unsurprising but underappreciated: yes, the flow is Hamiltonian, and the Hamiltonian function is just the expectation value of the Hamiltonian operator, $h(\psi) = \langle \psi | H | \psi \rangle$.
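As a quick numerical sanity check (a minimal NumPy sketch; the operator and state are arbitrary choices of mine, and I take $\hbar = 1$), the flow $e^{-iHt}$ preserves both the unit sphere and the expectation value that serves as the Hamiltonian function:

```python
import numpy as np

# Check numerically that the Schrodinger flow e^{-iHt} preserves the
# expectation value <psi|H|psi> -- the "Hamiltonian function" on projective
# space is conserved along its own symplectic flow.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (A + A.conj().T) / 2                     # a random self-adjoint operator

psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi /= np.linalg.norm(psi)                   # unit vector in the Hilbert space

vals, vecs = np.linalg.eigh(H)
def evolve(psi, t):
    """Apply the unitary e^{-iHt} via the spectral decomposition of H."""
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    return U @ psi

h0 = (psi.conj() @ H @ psi).real             # expectation value at t = 0
h1 = (evolve(psi, 1.7).conj() @ H @ evolve(psi, 1.7)).real
print(abs(h1 - h0) < 1e-10)                  # True: <H> is conserved
```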

The example you should have in mind at this point is a simple spin 1/2 system in a magnetic field, whose Hilbert space is $\mathbb{C}^2$, with Hamiltonian $H = \sigma_z$ (say). The projective space is $\mathbb{CP}^1 \cong S^2$, and the Hamiltonian function we get as the expectation value is just the usual $z$ coordinate of the standard embedding of $S^2$ in $\mathbb{R}^3$. The Hamiltonian flow rotates points along lines of latitude, completing each orbit in $\pi$ units of time (go calculate the unitary).
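Here is a minimal numerical version of that example, taking $H = \sigma_z$ (my particular normalisation of the field): the flow fixes the latitude, and returns every point of the sphere to itself after time $\pi$:

```python
import numpy as np

# Spin 1/2 in a field: H = sigma_z.  The unitary e^{-iHt} rotates the Bloch
# sphere rigidly about the z-axis; the latitude (= the Hamiltonian function)
# is fixed, and projectively the orbit closes at t = pi.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(psi):
    """Bloch-sphere coordinates of a unit vector in C^2."""
    return np.array([(psi.conj() @ s @ psi).real for s in (sx, sy, sz)])

def U(t):
    """e^{-i sigma_z t}, written out explicitly."""
    return np.diag(np.exp(-1j * np.array([1.0, -1.0]) * t))

theta = 1.1                                  # an arbitrary latitude
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

z0 = bloch(psi)[2]
z1 = bloch(U(0.6) @ psi)[2]
print(abs(z1 - z0) < 1e-12)                  # True: latitude is preserved
print(np.allclose(bloch(U(np.pi) @ psi), bloch(psi)))  # True: orbit closes
```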

Eigenvectors for the Hamiltonian operator correspond to critical points for the Hamiltonian function, and in particular fixed points of the flow. (That’s the north and south poles in the example above.) The flow described above is just a rigid rotation of the sphere, and in fact this is generally true: the flow on projective space coming from a self-adjoint operator is *Killing*, that is, it preserves the metric. This is the first appearance of the metric, but it’s really essential, because the converse of this statement is true — Hamiltonian functions whose corresponding flows preserve the metric are **precisely** those which arise as expectation values of self-adjoint operators on the Hilbert space.

That’s not all the metric is good for! Quantum mechanics also tells us something about what happens during “measurement”. This is that when a “measurement” (yes, I’m going to keep using scare quotes, so you’re not allowed to argue with me about what measurement means) occurs, the system jumps discontinuously to one of the eigenvectors of the Hamiltonian, and the probabilities of reaching the various different eigenvectors are given by the absolute value squared of the inner product of the current state and the eigenvector. This probability is exactly $\cos^2(\theta/2)$, where $\theta$ is the metric distance between the current state and the corresponding fixed point. (In the spin 1/2 example, let’s normalise this metric so it just measures angles between points on $S^2$.)
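A one-line check of this in the spin 1/2 example (the state and angle are arbitrary choices):

```python
import numpy as np

# For a qubit, the Born probability |<e|psi>|^2 of jumping to an eigenstate
# equals cos^2(theta/2), where theta is the angular (Bloch-sphere) distance
# from the state to the corresponding fixed point.
theta = 2.0                                  # angle from the north pole
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
north = np.array([1.0, 0.0], dtype=complex)  # the "spin up" eigenstate

born = abs(north.conj() @ psi) ** 2
print(np.isclose(born, np.cos(theta / 2) ** 2))  # True
```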

It gets even better, but at this point I’m going to stop talking about the conventional description of quantum mechanics, and begin describing a *proposed modification* of quantum mechanics. Physicists have already thought a lot about whether modifications like this are reasonable, but I’ll postpone that for now. At this point, if you’re reading the actual articles, we’re switching from the Ashtekar/Schilling paper to the Hughston one.

So what is this proposed modification? Well, let’s imagine the symplectic flow as some differential equations describing the trajectory of our state. We now want to add in a stochastic term, in particular an isotropic *Brownian motion* term with an amplitude that depends on the position in the projective space. This amplitude will be (some simple function of?) the *energy uncertainty*, namely the quantity $\Delta H^2 = \langle H^2 \rangle - \langle H \rangle^2$. In fact, this energy uncertainty is exactly the squared velocity of the symplectic flow with respect to the metric. In our spin 1/2 example this velocity is $\sin \theta$ (remember we have rigid rotation), and since $\langle H \rangle = \cos \theta$ and $\langle H^2 \rangle = 1$, we get $\Delta H^2 = \sin^2 \theta$. What happens? Well, at the fixed points it’s easy to see that the energy uncertainty is zero, so we might expect that the Brownian motion term drives the state away from areas with high energy uncertainty, towards the eigenstates — just like what is supposed to happen during “measurement”. This is precisely what happens: Hughston does a lot of financial mathematics, and he knows his stochastic calculus. His Proposition 5 says the energy uncertainty in this model is a supermartingale, that is, a function that decreases on average. As time passes, you expect to end up at one of the fixed points, each with various probabilities. Note that these are honest, stochastic probabilities, not just numbers we’ve declared to be interpreted as probabilities as in the naive set up. (ED: see below for Greg’s comment on this.) His next result, of course, is that these probabilities match up with what we want, namely that they are given simply by metric distances on the projective space.
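To see the flavour of the stochastic mechanism without reproducing Hughston’s actual equations, here is a deliberately crude toy model (my own sketch, not his SDE): for $H = \sigma_z$, ignore the rotation (which doesn’t change the latitude) and evolve $z = \langle H \rangle$ by $dz = \sigma (1 - z^2)\,dW$, whose amplitude vanishes at the eigenstates. Since $z$ is then a bounded martingale, it converges to $\pm 1$ with the Born probabilities $(1 \pm z_0)/2$:

```python
import numpy as np

rng = np.random.default_rng(1)
z0, sigma, dt, steps, paths = 0.3, 1.0, 2e-3, 10000, 4000

# Euler-Maruyama simulation of dz = sigma * (1 - z^2) dW for many paths.
# The noise amplitude (1 - z^2) vanishes at the eigenstates z = +/-1,
# so those are the absorbing states of the dynamics.
z = np.full(paths, z0)
for _ in range(steps):
    dW = rng.standard_normal(paths) * np.sqrt(dt)
    z = np.clip(z + sigma * (1.0 - z**2) * dW, -1.0, 1.0)

p_up = np.mean(z > 0.99)       # empirical probability of collapsing "up"
print(p_up)                    # close to the Born value (1 + z0)/2 = 0.65
```

Of course this one-dimensional caricature only captures the martingale structure; Hughston works on the whole projective space and treats general Hamiltonians.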

I think this is a beautiful picture. The measurement process is now something more concrete, a stochastic term in the governing equation, and we can resume thinking probabilistically about quantum mechanical probabilities. Very roughly, you’re meant to think that in an “isolated quantum system” the stochastic term is extremely small, and the symplectic flow dominates. On the other hand, during a “measurement”, presumably when the system is coupled with the macroscopic world, the scale of energy uncertainties becomes extremely large and the stochastic term dominates, and the system is quickly driven to a fixed point of the symplectic flow.

You have to think hard, however, about where this stochastic term comes from, and what it means. Hughston has some ideas about quantum gravity, but I’m not so sure I like them! There are also lots of no-go theorems ruling out stochastic variations on quantum mechanics, and I have to admit to not being clear about whether these results affect Hughston’s model.

A final idea for further thought, from the Ashtekar/Schilling paper: we can fully describe quantum mechanics solely in terms of the Kahler manifold structure of the projective space, so why not drop the requirement that it’s a projective space? That is, can we imagine systems on other Kahler manifolds? It seems that all we lose is the fact that in a projective space any two points have a canonical projective line through them — i.e. that we’re allowed to form linear superpositions of states. Is this really essential? Where might we look for finite-dimensional systems described by “exotic” Kahler manifolds? And all you quantum topologist gallium-arsenide engineers out there — how might we try to make one?

## Comments


> Note that these are honest, stochastic probabilities, not just numbers we’ve declared to be interpreted as probabilities as in the usual set up.

I really don’t agree with the distinction that you are making here. My model of quantum probability is that it is the tensor category of von Neumann algebras. What you call “honest” probabilities are already defined by expectations in commutative von Neumann algebras. So what is “dishonest” about the immediate noncommutative generalization?

@Greg: I’ve modified the above “usual set up” to “naive set up”. I agree that there’s nothing dishonest about quantum probability, and was mostly going for effect. There is still a distinction between the probabilities arising here from the stochastic term and usual quantum probabilities. Setting aside the “correctness” of this model, it is more readily understandable starting from people’s usual ideas about probabilities.

Superselection rules seem to put your states into a disjoint union of projective spaces (e.g., you can’t have a superposition of states with odd and even fermion numbers). This isn’t really an exotic Kahler manifold, but maybe it’s progress. Any unbroken symmetry should cut out some kind of subvariety, but as far as I can tell, they tend to be rather low degree.

Nice entry.

I don’t know if you know, but the list of repeated and rediscovered attempts to find perspectives on quantum mechanics that make it look like classical mechanics, possibly with a stochastic component, is long. They are called Bohmian mechanics, or hidden variable theories, or Nelson stochastic mechanics, or Adler’s quantum mechanics, or whatnot.

Every now and then some people get very excited about one of these. When I was studying quantum mechanics, I was at one point getting very excited about Edward Nelson’s approach, which is a rather sophisticated and impressive insight showing that the complex Schroedinger equation secretly models a certain real stochastic process.

It should be fair to say that, to date and for the foreseeable future, nobody has a clue what these reformulations of quantum mechanics really imply about reality. It is striking that they exist, though. What is also striking is that they are all mathematically more complicated than the simple Schroedinger equation taken at face value. But the one you describe is certainly among the nicer ones, even though it is also maybe not as close to the “physical realism” that some people are hoping to see in classical models of QM, in that it still uses that funny projective Hilbert space.

Personally, I have come to the suspicion that all these pseudo-classical formulations of QM are like a trap trying to make theoreticians not follow the Dao of mathematics and follow the mathematically most elegant formulation of QM to see where that leads, but instead fall back to believing that they can realize with their more modern tools their early 20th century colleagues dreams and explain QM in terms of classical mechanics after all.

Usually at this point Feynman is quoted with his “blind alley”.

But who knows.

Is my impression correct that stochastic classical models exist only for nonrelativistic QM? Have there been tries to do it with relativistic QFT too?

> I agree that there’s nothing dishonest about quantum probability, and was mostly going for effect. There is still a distinction between the probabilities arising here from the stochastic term and usual quantum probabilities.

It’s fine to develop the geometry of the projective space of pure states of a finite-dimensional quantum system, just as it’s fine to develop the geometry of a simplex in a finite-dimensional classical system. However, I argue that these authors’ motivation is problematic, regardless of your noble effort to describe everything in favorable and credible terms. They’re trying to restore something that isn’t really missing, using a degree of geometry that is interesting but not necessary.

Clearly these authors think of probability in a commutative von Neumann algebra as better than probability in a non-commutative von Neumann algebra. They’re willing to replace the 4-dimensional von Neumann algebra of a qubit, $M_2(\mathbb{C})$, by the infinite-dimensional commutative von Neumann algebra $L^\infty(S^2)$. Again, this could be interesting, but it is really pretty drastic.

I don’t think of probability in a commutative von Neumann algebra as better than probability in a non-commutative von Neumann algebra. I think of them as the same. This is an ultra-Copenhagen viewpoint that I find extremely useful in QCQI. It is useful even though the only von Neumann algebras that usually arise in QCQI are finite-dimensional matrix algebras. Sometimes you see direct sums of matrix algebras, which of course are the only other finite-dimensional ones.

In particular there isn’t any measurement “problem” in the foundational sense. Measurement is conditional state; it wasn’t a problem commutatively and it still isn’t. It is easy to argue that this interpretation of measurement is internally consistent: a quantum computer that accesses the qubits of another quantum computer would conclude that conditional state is the correct model for what has happened.

There is an operational measurement problem, which is fine but is not the same thing. If you were asked about the classical stochastic measurement problem of measuring Brownian motion, then that would be a problem too, for instance a problem about modelling microscopes or vibration equipment. It would not be a foundational problem of classical probability.

There are stochastic models for relativistic QM, too, yes. Try googling “Feynman checkerboard” for instance and then chase references from there.

Greg, I like your discussion a lot.

Maybe one can point out that authors talking about classical models for QM (whether or not precisely the ones we are talking about here) usually in the end want to do away with any fundamental probability. They (or some of them) are aiming for a description where the probabilistic picture arises by doing ensemble averages over a non-probabilistic system. I think this is their main point for ordinary “commutative” probability: when you have that, you can imagine that it is the coarse-grained average over a non-probabilistic underlying system.

This is not possible with non-commutative probability, I suppose. Or is it? Do you know?

@Urs: thanks. Could those models be taken as consistency proofs for QFT?

Urs,

An answer in category theory terms: A reasonable category to describe classical probability is commutative von Neumann algebras with stochastic maps as the morphisms. The non-commutative version is von Neumann algebras with completely positive maps as the morphisms. They are both tensor categories with the standard tensor structure. (Free probability is another tensor structure which is interesting, but not primarily meant to be realistic.)

You can trivially embed any category in Set, and thus do away with any kind of non-determinism. Also, commutative probability can be interpreted by determinists as an approximation to Set. However, Bell’s theorem says that quantum probability does not even approximately embed in classical probability as a tensor category. More precisely, even the tensor product of two qubits, $M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$, does not even approximately embed in the tensor product of two commutative von Neumann algebras of any size. This is true even if you allow arbitrary stochastic arrows in the target category and make no direct appeal to measurement. Coarse graining is one fashion of approximation, and it does not save you.

To answer Thomas’ question, relativity and quantum field theory are not directly the point. Any commutative model of quantum probability is entirely non-local. Even Newton was astute enough to be bothered by non-local physics. What is true about relativity is that non-local physics becomes empirically impossible and theoretically unreasonable, not just bothersome.
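The obstruction from Bell’s theorem can be witnessed numerically; here is a minimal CHSH computation for the singlet state of two qubits, reaching $2\sqrt{2}$ where any commutative (local hidden variable) model is bounded by $2$:

```python
import numpy as np

# CHSH correlations for the singlet state.  For the singlet, the
# correlation of spin measurements at angles a and b is -cos(a - b),
# and the standard choice of angles gives |CHSH| = 2*sqrt(2) > 2.
def spin(theta):
    """Spin observable cos(theta)*sigma_z + sin(theta)*sigma_x."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# (|01> - |10>)/sqrt(2) in the kron ordering |i> (x) |j>.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a, b):
    """E[A B] = <psi| A (x) B |psi> for the singlet state."""
    M = np.kron(spin(a), spin(b))
    return (singlet.conj() @ M @ singlet).real

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
chsh = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print(np.isclose(abs(chsh), 2 * np.sqrt(2)))  # True: beats the classical bound 2
```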

Quantum field theory raises a completely different issue from the truth and interpretation of quantum probability: it uses integrals that are non-rigorous without new ideas. Some of these non-rigorous integrals already arise in classical stochastic field theory.

Thanks, Greg.

Right, so noncommutative probability cannot result from ensemble averages in Set. But can it result from ensemble averages in a more exotic world? Maybe in something like “noncommutative classical mechanics”?

Not sure what I mean by that, but this is what your previous comment made me think of.

Urs: Your question is reasonable, but the only candidate that comes to mind is one that I feel leads to trouble. Hilbert spaces form a tensor category whose morphisms are unitary operators. This is like the Bohm-Schrodinger view of quantum mechanics in which only vector states exist, and in which vector states exist empirically.

Besides the fact that all morphisms in this category are isomorphisms, the category is missing a coproduct-like property of the tensor product. In both classical and quantum probability, there is a map from $A$ to $A \otimes B$. If you transpose it to states, this map is the marginal or partial trace of a joint state. The category of unitary operators is also contravariant with respect to the category of von Neumann algebras. The point is then that there is no unitary operator from $H_1 \otimes H_2$ to $H_1$. In the unitaries-and-vector-states category, joint physical systems are joint forever; we are all codependent.
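The transposed map on states is concretely the partial trace; a minimal NumPy sketch (the joint state is an arbitrary choice):

```python
import numpy as np

# The coproduct-like map A -> A (x) B, transposed to states, is the partial
# trace: from a joint state on C^2 (x) C^2 we recover the marginal state of
# the first factor.
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho).real                # a random joint density matrix

# Reshape to indices (i, j, k, l) and trace out the second factor (j = l).
rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.isclose(np.trace(rho1).real, 1.0))   # True: the marginal is a state
# The marginal reproduces expectations of observables of the form A (x) I:
A = np.array([[1, 0], [0, -1]], dtype=complex)
lhs = np.trace(rho @ np.kron(A, np.eye(2))).real
rhs = np.trace(rho1 @ A).real
print(np.isclose(lhs, rhs))                   # True
```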

Birkhoff and von Neumann attempted another answer, quantum logic. This answer does have the coproduct-like morphisms, but it seems to miss the point in other ways.

Still, I can also respond by co-opting the question. An ensemble average is itself a classical probability distribution. So maybe the right question is how to make noncommutative probability using noncommutative ensembles of something else. Maybe you could use ordinary classical mechanics as the input to that?

I’ve only thought about this stuff a little, and mainly coming from the interpretation of probability side (since that’s my job, and not any of this Hamiltonian or Hilbert space stuff or whatever), but this does sound interesting. This reminds me a lot of what I’ve heard about the GRW version of quantum mechanics, in that it gets around the measurement problem by saying that there’s nothing qualitatively different about measurements, just quantitatively different. Of course, the GRW theory says that there’s always some probability of collapse, with the probability being very large when coupled with a large system and very small otherwise. This instead says there is never a collapse, but there’s a stochastic element to the continuous evolution of the system, with this term being large enough to quickly pull things towards the “collapsed” states when coupled with a large system, and very small otherwise.

So both of these versions are different from the Bohm, Everett, etc. versions of quantum mechanics where there are no collapses and also no probabilities. This picture and the GRW picture still have the problem of saying what these probabilities mean. (Frequentists will still be unhappy because the universe is a non-repeatable system, so you can’t just count up the number of times the universe behaves one way and the number of times it behaves the other. Bayesians say these still aren’t anyone’s degrees of belief. So these probabilities still have to be some sort of uninterpreted physical probabilities, whatever that might mean.)

Bohm and Everett instead have the problem of saying why it ever even made sense to think of quantum mechanics in terms of probabilities, if the theory is completely deterministic. There’s actually a booming industry in philosophy of physics discussing the proper way to get Bayesian probabilities out of Everettian quantum mechanics, and whether that undercuts any of the probabilistic evidence we have for the general correctness of quantum mechanics.

So it’s interesting to see the outlines of a “no-collapse” theory that still keeps probabilities involved, just to separate these issues in the philosophy of quantum mechanics.

At least if you are happy with explaining classical measurement in terms of sharply-peaked probability densities (as opposed to genuine classical outcomes), this requires nothing outside of standard QM, just a careful analysis of what standard QM does predict about quantum systems that are coupled to “large” systems (to “baths”).

This is the theory of quantum decoherence.

Anyone interested in these questions I’d strongly encourage to start by looking at the literature on this. Because this is all a consequence of standard QM, hence _known to be true_.

After one has grasped the mechanism of decoherence, one can still come back and ask oneself whether one still feels the need to modify standard quantum mechanics. Chances are that this feeling will have disappeared by then.

I have to admit my attempts to understand decoherence have always left me feeling unsatisfied. Perhaps I should try harder.

I’m by now ruing having written about the stochastic version. I’m actually way more interested in hearing what people think of the idea of finite-dimensional quantum mechanics on non-projective space Kahler manifolds. Is this totally ridiculous?

> I have to admit my attempts to understand decoherence have always left me feeling unsatisfied. Perhaps I should try harder.

Scott, you study operator algebras. Draw on your background, and half of Nielsen and Chuang becomes really easy. Maybe easier than they take it to be themselves.

Classical probability is the category theory of sigma-algebras, or commutative von Neumann algebras. In idealized form, a realistic process from the states of a von Neumann algebra $A$ to the states of a von Neumann algebra $B$ is a positive linear map that conserves the expectation $\rho(1)$ (since that is total probability). If $A$ and $B$ are finite-dimensional, then this is just exactly a stochastic matrix.
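A minimal sketch of the finite-dimensional commutative case (the matrix and state are arbitrary choices): a stochastic matrix preserves positivity and total probability:

```python
import numpy as np

# In the finite-dimensional commutative case, a realistic map of states is
# a (column-)stochastic matrix: non-negative entries, columns summing to 1,
# so the total probability rho(1) is conserved.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])               # an arbitrary stochastic matrix
p = np.array([0.25, 0.75])               # a classical state (probability vector)

q = S @ p
print(np.isclose(q.sum(), 1.0))          # True: total probability conserved
print(np.all(q >= 0))                    # True: positivity preserved
```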

Quantum probability is the same thing with non-commutative von Neumann algebras, including as an important case study finite-dimensional matrix algebras. In that case study, you can identify states with elements of the algebra using the trace. A realistic map of states is a completely positive map that again conserves the expectation $\rho(1)$.

As an example, a classical bit, $\mathbb{C}^2$ (the diagonal matrices), is a unital subalgebra of a qubit, $M_2(\mathbb{C})$. There is a map of states that takes every matrix to just its diagonal terms. This is complete decoherence: it takes every pure state of a qubit to a classical distribution on two of the states.

You can also have partial decoherence. This could be a continuous semigroup of TPCP maps on $M_2(\mathbb{C})$ whose limit is the total decoherence map in the previous paragraph.

Qubits exist in nature. You can take it on faith that all of these maps that I have described are realistic. Or don’t take it on faith: for instance, it is easy to make them by classically averaging unitary operators; those operators can be entirely tangible operations on realistic qubits. The trick is that in order to interpret unitary operators as maps on states, you have to write them quadratically, $\rho \mapsto U \rho U^*$. That is the right thing to do if you plan to take classical averages.
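Here is a minimal sketch of that construction: classically averaging the conjugations $\rho \mapsto U \rho U^*$ over a uniform random phase produces exactly the total decoherence map from the paragraph above:

```python
import numpy as np

# Averaging rho -> U rho U* over a uniform random phase rotation
# U = diag(1, e^{i phi}) kills the off-diagonal terms -- the total
# decoherence map onto the classical bit inside the qubit.
rho = np.array([[0.7, 0.3 - 0.1j],
                [0.3 + 0.1j, 0.3]])      # a state of a qubit

phis = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
avg = np.zeros((2, 2), dtype=complex)
for phi in phis:
    U = np.diag([1.0, np.exp(1j * phi)])
    avg += U @ rho @ U.conj().T
avg /= len(phis)

print(np.allclose(avg, np.diag(np.diag(rho))))  # True: only the diagonal survives
```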

> I’m actually way more interested in hearing what people think of the idea of finite-dimensional quantum mechanics on non-projective space Kahler manifolds. Is this totally ridiculous?

Ridiculous, no. Unnecessary and unlikely, quite possibly.

Regarding Scott Morrison’s questions, we engineers think that Kahler manifolds are an arena that plenty of mathematical “hep cats” are exploring nowadays … an arena whose practical applications include both classical simulations of (e.g.) molecular biology *and* large-scale quantum simulations.

Our UW quantum systems engineering (QSE) seminar discusses simulation techniques exclusively from the Kahlerian geometric point of view (seminar notes here). Although our QSE seminar is mainly concerned with practical issues relating to numerical simulation efficiency, we find that fundamental issues in math and physics definitely *do* arise.

For example, Stephen Adler and Angelo Bassi have a recent commentary in *Science*, whose title asks “Is Quantum Theory Exact?”, that (IMHO) is very enjoyable and thought-provoking.

Adler and Bassi argue that gravitational noise induces decoherent dynamical terms into the fundamental quantum equations; this noise (in effect) ensures that quantum trajectories in Nature are always pulled back onto lower-dimension sub-manifolds of Hilbert space … but if Nature never explores them, do these (unvisited) Hilbert dimensions really exist? Or do they represent what Charles Bennett and colleagues call “a computational extravagance” and Ashtekar and Schilling call “a technical convenience”?

There is also a triad of researchers at Imperial College, London—Dorje Brody, Anna Gustavsson, and Lane Hughston—whose recent articles (very enjoyably) cover the symplectic geometry of both quantum systems *and* financial systems.

Our seminar finds that financial dynamics and quantum dynamics are more alike than one might think; both are naturally stochastic and symplectic … both quantum entanglement and financial securities are NP-hard to evaluate … and both quantum entanglement and financial security are subject to abrupt disappearance!

The bottom line for engineers of every kind—quantum, classical, and financial—is that pretty much *any* dynamical model that induces a natural pullback of symplectic and metric structures finds a natural expression in the geometric language of Kahlerian mechanics. :)

As a followup to the above, here are three grand challenges of Kahlerian quantum mechanics … suggested mainly to assure math students that easy-to-state, hard-to-meet challenges at the intersection of science, math, and engineering *do* exist in Kahlerian geometry.

**The Fundamental Scientific Challenge:** Predict the Riemannian curvature of quantum state-space, then measure it.

Let’s see … from flat Newtonian space to curved Riemann/Einstein space took about 200 years … but there are more scientists working nowadays … and so we can expect the scientific transition from flat Hilbert space to curved Kahler/(___ your name here___) space is due … any year now!

How hard can it be? :)

**The Fundamental Mathematics Challenge:** Prove that noisy quantum systems belong to the same simulation complexity class as classical dynamical systems.

Here we have in mind a rigorous proof (along the concentration-theoretic lines outlined in the seminar notes) that noisy quantum systems generically belong to the same simulation complexity class as (for example) simulating the Navier-Stokes equations.

Of course, no-one has yet proved that the Navier-Stokes equations even *have* smooth solutions in general … but a tremendous amount of good mathematics has come from trying. Quantum concentration theorems have a role similar to quantum ergodic theorems … hard to prove (except in idealized special cases) … but engineering calculations will proceed (with good success) as though quantum concentration theorems apply in general, just as they do at present with regard to ergodic theorems.

**The Fundamental Engineering Challenge:** Directly observe (by quantum spin imaging) all molecular structures of a size larger than 0.75 nanometer, and reliably simulate the classical and quantum dynamics of every molecular system smaller than 0.75 nanometer.

For this third challenge the crossover length scale of 0.75 nanometers is just a guess … but it is an *informed* guess, based on present rapid progress in both imaging and simulation science, that the crossover between direct imaging and reliable simulation is coming soon.

Cool! This description of quantum mechanics reminds me of a classic paper by André Heslot:

“Quantum mechanics as a classical theory.” Phys. Rev. D (1985).

Aaron, there are few mathematical ideas that have been (re)discovered as many times as the idea that the Schroedinger equation is a particular case of symplectic mechanics!

Here is a 1975 article by Berezin that is the earliest citation I have that gives (obscurely) the symplectic form of the Bloch equations … if anyone has an earlier citation please post it here!

@article{***,
  Author  = {Berezin, F. A.},
  Title   = {General concept of quantization},
  Journal = {Commun. Math. Phys.},
  Volume  = {40},
  Number  = {2},
  Pages   = {153--174},
  Year    = {1975}}

I must admit to not having “grasped the mechanism of decoherence” at a detailed mathematical level, but I’ve never really been motivated to do so, because I don’t understand from high-level explanations of it how it can possibly claim to “solve the measurement problem.” My understanding is that decoherence changes a “quantum-funny entangled system,” after interaction with a macroscopic measuring device, into a classical probability distribution over “system in state A and measurement device measured A” and “system in state B and measurement device measured B”. But in reality we actually only observe either A or B, not both, so it seems to me that the universe still has to “make a choice” at some point between the two possibilities, which is exactly the measurement problem.

Mike, the much-used phrase “the measurement problem” comes into sharper focus if we add some qualifiers — a particularly good qualifier is “the simulation of measurement problem.”

In other words, what is the computational complexity of simulating a measurement process?

Thanks to quantum computing research, we know two things: (1) simulating measurement and simulating noise are two faces of the same computational problem, and (2) simulating measurement processes (which are concentrative metric flows) is computationally easier than simulating Hamiltonian dynamics (which are ergodic symplectic flows).

Pursuing these ideas leads to the Ashtekar-Schilling resolution of the quantum measurement problem: there is no measurement problem because Hilbert-space quantum mechanics is not true. Instead, as Ashtekar and Schilling put it (in arXiv:gr-qc/9706069): “the linear structure that is at the forefront in text-book treatments of quantum mechanics is, primarily, only a technical convenience.”

Now, the physics community has not (so far) embraced the Ashtekar-Schilling approach with any great enthusiasm, but we engineers find it to be very useful. This is because, for simulation purposes, the linear structure of Hilbert space is a technical *inconvenience* (because it has too many dimensions).

On the other hand, for the fundamental physicists, a major limitation of the Ashtekar-Schilling approach is that no one (AFAICT) has proposed a causally separable, relativistically invariant, field-theoretic description of measurement processes on nonlinear (Kählerian) state-spaces … heck, it’s taken many decades for physicists to carry through this program with linear Hilbert spaces.

However, this limitation is not of great concern to biologists, chemists, or engineers, because in practice we simulate physical systems that are quantum (like molecules), and we simulate systems that are relativistic (like GPS satellites), but seldom do we simulate systems that are both quantum and relativistic at the same time (like LHC collisions).

That’s the short story for why biologists, chemists, and engineers are nowadays (generally speaking) reasonably accepting of the Ashtekar-Schilling approach to low-dimension quantum mechanics and simulation.

I was using the phrase “measurement problem” to refer to the more philosophical issue of exactly what a measurement is and when it occurs, rather than any computational problem of simulating it.

It’s not clear to me why it’s the linear structure that produces this problem. Scott’s original post switched to a stochastic modification of quantum mechanics when he started talking about measurement.

Mike says:

> Scott’s original post switched to a stochastic modification of quantum mechanics when he started talking about measurement.

This reflects the modern formalism that von Neumann-style projective descriptions of quantum measurement are coarse-grained, macroscopic, discrete approximations to processes that (in practice) are fine-grained, microscopic, and continuous.

Nowadays a more common point of view is that measurement devices are machines—designed by either engineers or by evolution—whose purpose is to induce dynamical processes whose coarse-grained description looks like collapse.

The reason that quantum measurement is taught the coarse-grained von Neumann way—as contrasted with the fine-grained process way—is mainly expedience: if undergraduates were first taught the necessary baseline skills in stochastic processes and differential geometry, then quantum mechanics would have to be delayed until the first year of graduate school.

And *that* would be unthinkable! :)

I will add as a coda to the above, that IMHO undergraduate students should *definitely* learn both classical and quantum mechanics from the differential geometry/stochastic process point of view … and should learn them at the same time.

A good example of this approach is Terry Tao’s blog topic this week:

From Bose-Einstein condensates to the nonlinear Schrodinger equation. Note that Tao first describes the symplectic and stochastic elements of classical dynamics, and only *then* describes the symplectic and stochastic elements of quantum dynamics; the latter case does not really require all that much additional mathematical hardware. Furthermore, when classical and quantum dynamics are pulled back onto lower-dimension manifolds (as simulationists commonly do), the similarity between classical and quantum dynamics becomes even more striking.

The bottom line: the hours that undergraduate students spend contemplating the mysteries of quantum measurement might perhaps be better spent learning how to pull back and push forward dynamical processes.

I must be dense, but I am still failing to extract from what you’re saying an argument for why nonlinearity avoids the measurement problem.

I am also unable to extract much from this description of Kahlerian dynamics.

Just to clarify: are you suggesting (cf. comment 18, paragraph 4) that the experiments we do nowadays are measurably influenced by quantum gravitational effects? If so, then this is definitely news to me.

I have heard people advertise the merits of the Kahler approach, but I’ve never got a simple explanation of how it avoids the complications of quantum simulation.

Suppose I have 100 spin 1/2 particles. According to orthodox quantum mechanics, specifying a state of this system requires specifying a vector in a Hilbert space of dimension 2^100. If you think that density matrices are more fundamental than state vectors, then you have to specify a density matrix, which lives in a space of dimension 2^{200}.

In the Kahler perspective, what sort of object specifies a state of this system? I skimmed the Ashtekar-Schilling paper, but I couldn’t figure out how they handle this sort of entangled state.
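For concreteness, the dimension counts in David's question can be checked with a few lines of Python (a trivial sketch; the function names are my own):

```python
# Back-of-envelope state-space sizes for n spin-1/2 particles.
# A pure state is a unit vector in a Hilbert space of dimension 2^n;
# a density matrix is a 2^n x 2^n Hermitian matrix of trace 1,
# i.e. it carries 4^n - 1 real parameters.

def hilbert_dim(n):
    return 2 ** n

def density_matrix_real_params(n):
    d = hilbert_dim(n)
    return d * d - 1  # Hermitian, trace-one

print(hilbert_dim(100))                 # 2^100
print(density_matrix_real_params(100))  # on the order of 2^200
```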

Well, there are several questions raised above, and since there are no good textbooks, these questions mostly have to be answered by carefully reading the literature, and working problems oneself (in my own case this is a highly imperfect process).

With regard to Mike Shulman’s

“I am still failing to extract from what you’re saying an argument for why nonlinearity avoids the measurement problem.”

and Scott Carnahan’s

“Are you suggesting that the experiments we do nowadays are measurably influenced by quantum gravitational effects?”

these two questions can be given pretty definite answers by a careful reading of Lane Hughston’s article *Geometry of stochastic state vector reduction* (referenced in the original post) and a recent short review in Science by Stephen Adler and Angelo Bassi titled *Is Quantum Theory Exact?*. We will also draw some ideas from a recent preprint by Charles Bennett and colleagues titled *Can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems?* (BibTeX entries for all three articles are appended.)

The short story is that Hughston’s 1996 article describes a very specific model of nonlinear quantum dynamics that involves both nonlinear drift terms and stochastic terms. Adler and Bassi’s 2009 review describes work suggesting that this kind of nonlinear quantum theory does not create any causal violations. The work of Bennett et al. further suggests (AFAICT) that this kind of nonlinear quantum theory does not create any “computational extravagances” (and to my mind the phrase “computational extravagances” is a very happy one).

Now, how is it that we have this freedom to (seemingly) modify the sacred tenets of linear quantum mechanics, without creating causality violations or computational extravagances? A general strategy for answering this (tough) question is suggested by Terry Tao’s recent blog post

From Bose-Einstein condensates to the nonlinear Schrodinger equation, which develops the idea that nonlinear quantum mechanical equations can arise (in a mathematically natural way) solely from linear quantum mechanical equations.

Is there any way that we could similarly obtain Hughston’s nonlinear quantum dynamics, in a mathematically natural way, solely from orthodox quantum mechanics? The short answer is “definitely yes”, if we take “orthodox” to mean “the quantum mechanics of Nielsen and Chuang, Chapters 2 and 8.” In particular, if we adopt the symplectic methods of Ashtekar and Schilling, then the derivation is simple and natural; it is only necessary to assume that physical objects scatter gravity waves, in such a fashion that the local Hamiltonian $H$ is continuously measured with a (one-sided) spectral density $S_H = (\hbar^3 c^5/G)^{1/2}/8$ (the details of the derivation are given in our concentration-and-pullback seminar notes).

At this stage we have *not* solved any problems in fundamental physics … and in particular, we have *not* solved “the measurement problem” … however that may be defined! But on the other hand, we *have* made substantial progress toward solving a major problem in engineering, namely, how can we efficiently simulate (noisy) quantum systems? The answer is simply to model the noise as an equivalent measurement process, unravel the measurement process as a (stochastic) concentration process, and pullback the concentrated quantum trajectories onto a low-dimension (Kählerian) manifold, where they can be efficiently integrated.
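The recipe in the preceding paragraph — model noise as an equivalent measurement process, unravel it as a stochastic concentration process, then watch individual trajectories concentrate — can be illustrated with a minimal numpy sketch. This is my own generic weak-measurement toy model, not Hughston's specific stochastic equation: repeatedly applying weak Kraus measurements of $\sigma_z$ drives a qubit onto one of the two eigenstates, with statistics given by the Born rule.

```python
# Toy "unraveling" of a continuous sigma_z measurement as a stochastic
# concentration process (a sketch; not the Hughston model itself).
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
# Kraus operators for one weak sigma_z measurement: M+^2 + M-^2 = I
M_plus = np.diag([np.sqrt((1 + eps) / 2), np.sqrt((1 - eps) / 2)])
M_minus = np.diag([np.sqrt((1 - eps) / 2), np.sqrt((1 + eps) / 2)])

def trajectory(psi, steps=2000):
    """One stochastic trajectory: repeated weak measurements concentrate psi."""
    for _ in range(steps):
        p_plus = np.linalg.norm(M_plus @ psi) ** 2
        M = M_plus if rng.random() < p_plus else M_minus
        psi = M @ psi
        psi = psi / np.linalg.norm(psi)
    return psi

psi0 = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # Born probabilities (0.3, 0.7)
finals = [trajectory(psi0.copy()) for _ in range(200)]
frac_up = np.mean([abs(f[0]) ** 2 > 0.5 for f in finals])
print(frac_up)   # close to the Born probability 0.3
```

Each trajectory ends (essentially) at a single point of the state-space, which is the "concentration" that makes low-dimension pullbacks plausible.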

To link-up with fundamental physics, we have to tackle David Speyer’s tough question: “[How do we] specify a density matrix, which lives in a space of dimension 2^{200}?”. This is IMHO a wonderfully challenging question, no matter whether we live in a large-dimension Hilbert quantum state-space, or in a small dimension pullback Kählerian quantum state-space. Scott Aaronson and Alex Arkhipov have conceived a way to frame this question operationally that is (IMHO) truly delightful: “Can we conceive practical experiments that sample from a probability distribution that is infeasible to simulate?” And to put some muscle into this question, they suggest an ingenious class of experiments in which sampling the probability distribution (seemingly) requires computing the permanent of a scattering matrix (which is thought to be a hard computation).

So far the Aaronson/Arkhipov protocols are discussed (AFAICT) only in seminar talks by Scott and in the blogosphere (see the discussion in Dave Bacon’s blog

*Quantum Pontiff* under the heading “QIP 2010 Speakers”). But it is pretty clear already (IMHO) that the Aaronson/Arkhipov approach to thinking about density matrices is important, and that (in the long run) these ideas will substantially impact the way we think about Hilbert versus Kählerian quantum mechanics.

So one bottom line (IMHO) is that when Scott Morrison asked

“Is [Kählerian QM] totally ridiculous?” and Greg Kuperberg answered (in #17) “Ridiculous, no. Unnecessary and unlikely, quite possibly.”, that was a very reasonable question to ask, and a very reasonable answer to give. With equal justification, we can ask and answer that question from precisely the opposite point of view:

“Is Hilbert space quantum mechanics totally ridiculous? Ridiculous, no. Unnecessary and unlikely, quite possibly.” In either case, it seems pretty clear that the coming decade is going to see radical changes in the way we think about quantum mechanics, in the way we teach it, in the way we simulate quantum systems, and in the practical technologies we build using quantum physics principles. GOOD! :)

@article{*, Author = {Lane P. Hughston}, Journal = {Proc. R. Soc. Lond. A}, Number = {1947}, Pages = {953--979}, Title = {Geometry of stochastic state vector reduction}, Volume = {452}, Year = {1996}}

@article{*, Author = {Stephen L. Adler and Angelo Bassi}, Journal = {Science}, Pages = {275--276}, Title = {Is Quantum Theory Exact?}, Volume = {325}, Year = {2009}}

@unpublished{*, Author = {Charles H. Bennett and Debbie Leung and Graeme Smith and John A. Smolin}, Note = {arXiv:0908.3023v1 [quant-ph]}, Title = {Can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems?}, Year = {2009}}

Okay. So you’re saying that nonlinearity, alone, *doesn’t* solve the measurement problem; you need to introduce something extra like a stochastic modification. Maybe the extra modification is more natural and easier starting from the nonlinear formulation than it is starting from the linear one, but the nonlinearity itself does nothing to solve the measurement problem.

Going back to my original question, it seems as though Urs was saying (in comment 15) that quantum decoherence solves the measurement problem in some sense, which I also don’t understand; I was hoping someone would explain *that* to me.

In a somewhat light-hearted way, there appear to be four very different (but wholly compatible) attitudes regarding “the quantum measurement problem.”

Philosophers are 100% sure there is a quantum measurement problem … but on the other hand, no two philosophers wholly agree on how to resolve that problem (or, arguably, any other philosophical problem).

Most physicists are pretty sure that there is a quantum measurement problem … but it is tough to think of experiments to demonstrate it … and even tougher to *do* those experiments. Perhaps M-theory (or some other yet-to-be-conceived successor to the Standard Model) will help?

For engineers, the main quantum measurement problem is pragmatic: how can we simulate quantum measurement processes efficiently and reliably, with a view toward (what else!) designing efficient and reliable measurement devices?

And for mathematicians, the main quantum measurement problem is (simply): how do we get ideas for proving good theorems by thinking about quantum measurement?

My own opinion is that the above four goals are wholly compatible … and that there is ample scope for progress on all four fronts … and consequently, that opportunities for cross-fertilization among these four communities have never been richer than they are right now.

“So you’re saying that nonlinearity, alone, *doesn’t* solve the measurement problem”

Mike: A big part of the measurement “problem” is figuring out whether there really is any problem. Again, in the von Neumann algebra formalism, the effect of measurement is exactly the posterior state after you witness a boolean random variable — just as it was in commutative probability. I assume that you agree that in classical probability, there is measurement but there is no measurement problem. If there wasn’t a problem before, why is there a problem now?

David: Quantum computation exists within quantum probability or quantum mechanics as written down on paper. There is strong evidence that the quantum polynomial time class, BQP, is not contained within any deterministic or randomized complexity class short of EXP. Corollary: Tractable deterministic or randomized simulation is not possible. People should decide what they are trying to do with the simulation question. They either want to truncate quantum probability (say with some kind of statistical censorship law), or they want to prove that BQP = BPP.

I can elaborate a little bit on my answer to Mike:

Theorem (von Neumann): Classical probability embeds in quantum probability as a tensor category. The embedding preserves standard apparatus such as measurement (= the posterior state formula), marginal states, convexity, weak convergence, etc.

Theorem (Bell): There isn’t any interesting tensor functor from quantum probability to classical probability, nor even any approximate tensor functor. The tensor product of two qubits does not even approximately embed in a tensor product of two commutative systems of any size.

The measurement/simulation/realism problem: If we interpret Bell’s theorem as a negative result and von Neumann’s theorem as a side construction, then we can look for a functor in the other direction. It cannot be a tensor functor, but maybe it can be something else interesting.
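Von Neumann's embedding has a minimal numpy sketch in its smallest nontrivial instance (my own illustration, not taken from a proof of either theorem): a classical distribution becomes a diagonal density matrix, and the quantum posterior-state formula reduces to ordinary Bayesian conditioning.

```python
# Classical probability embedded in quantum probability: a distribution
# on n outcomes becomes a diagonal density matrix, and the quantum
# posterior rule  rho -> b rho b / tr(b rho b)  reduces to Bayes.
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])           # a classical distribution
rho = np.diag(p)                             # its image: a diagonal density matrix

b = np.diag([1.0, 1.0, 0.0, 0.0])            # boolean element = projection onto an event

post = b @ rho @ b / np.trace(b @ rho @ b)   # quantum posterior state

bayes = p * np.diag(b) / (p @ np.diag(b))    # classical conditioning P(i | event)
print(np.diag(post))                         # equals bayes: [1/3, 2/3, 0, 0]
```

The Bell obstruction is exactly that no comparable dictionary exists in the other direction.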

Again, I don’t want to dismiss work on this problem out of hand. Kahler geometry or some other such approach certainly could be both interesting and useful. My point is that in approaching this question, you shouldn’t simply assume that it is well-motivated.

Everybody seems to have their own favorite version of quantum mechanics which they’re convinced doesn’t suffer from the measurement problem. (-: Greg, can you explain what you’re saying in a way that will make sense to someone (like me) who is unfamiliar with the von Neumann algebra formalism of measurement?

My understanding is that in classical probability, measurement “occurs” but we don’t have to worry about figuring out what a measurement is or when it occurs, because classical probability is epistemic: there is a “true” state of the world, and nontrivial probabilities simply represent our lack of knowledge about it. Perhaps this has to do with the fact that a commutative von Neumann algebra is always an algebra of functions? By contrast, in quantum probability one has to identify what a measurement is and when it happens in order to have a complete description of the world, because measurements cause the quantum state to collapse. No doubt everyone will say this is a hopelessly naive and out-of-date viewpoint on quantum mechanics that shouldn’t be taught in schools any more, but please *explain* to me how your versions solve the problem.


In both classical and quantum probability, there is an algebra of bounded complex-valued random variables M. M should be a von Neumann algebra, but for the moment don’t worry too much about the axioms. M has boolean elements; they are just the self-adjoint idempotents. A state (or non-commutative measure) is a linear expectation functional which is non-negative and real on boolean elements.

If $b$ is boolean and is witnessed to be true, then the posterior state is $E_b(x) = E(bxb)/E(b)$. This formula is the answer that represents what you said, a change in knowledge of the state.

What is different in the quantum case is that when you learn something about a state, you indeed may change it for other observers. There is a hidden measurement operator on states, whereby I measure a state and don’t tell you the answer. Because of non-commutativity, this operator is not the identity. The concept of knowledge changes. You have to accept that there does not exist a consistent underlying knowledge base for all observers.
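The hidden-measurement operator just described can be written out for a single qubit. In this numpy sketch (my own; the booleans are taken, for illustration, to be the computational-basis projections) the operator acts as the identity on diagonal states but erases the off-diagonal terms of a superposition:

```python
# Hidden measurement: measure in a basis, don't reveal the outcome.
# On states this acts as  rho -> sum_i b_i rho b_i,  which is not the
# identity precisely because of non-commutativity.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)                 # pure superposition |+><+|

b0 = np.diag([1.0, 0.0])                   # booleans of a computational-basis
b1 = np.diag([0.0, 1.0])                   # measurement

hidden = b0 @ rho @ b0 + b1 @ rho @ b1     # measure, but don't tell anyone
print(hidden)                              # diag(1/2, 1/2): off-diagonals gone
```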

Classical probability is epistemic in one particular interpretation, the interpretation in which it is an approximation to determinism. However, even in classical probability, the Bayesians for their own reasons do not see it that way. Quantum probability is inherently Bayesian.

Note by the way that I don’t think that this version that I like (which is the operator algebraists’ version) “solves” the measurement problem. The measurement problem is not a rigorous question. Evidence that a non-rigorous question is ill-founded is not a solution.

Greg already replied to this with more technical details, but maybe I can still make this vague statement:

decoherence doesn’t make quantum mechanics a non-probabilistic theory.

It does turn pure states that have no interpretation as classical probability distributions into mixed states that do look like an ordinary probability distribution.

That at least explains why highly entangled states are not perceived macroscopically.

It doesn’t, however, provide an interpretation of the classical probabilities that one does get as ensemble averages over a more fundamental dynamics.
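The point that highly entangled pure states are not perceived macroscopically can be seen in the smallest example. This numpy sketch (my own illustration of the standard partial-trace computation) traces out one half of a Bell state and is left with an ordinary coin flip:

```python
# Tracing out the "environment" half of a maximally entangled pure state
# leaves a maximally mixed state: to a local observer it looks classical.
import numpy as np

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)        # (|00> + |11>) / sqrt(2)
rho = np.outer(bell, bell)                # pure, but maximally entangled

rho4 = rho.reshape(2, 2, 2, 2)            # indices (i, j; i', j')
rho_A = np.einsum('ijkj->ik', rho4)       # partial trace over the second qubit
print(rho_A)                              # diag(1/2, 1/2): an ordinary mixture
```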

I’m not hearing evidence that the measurement problem is ill-founded. Of course it isn’t a *rigorous* question, but it is a meaningful one. It seems to me that when you say “What is different in the quantum case is that when you learn something about a state, you indeed may change it for other observers” and “You have to accept that there does not exist a consistent underlying knowledge base for all observers”, that means that your description of the physical world has to involve “knowledge” and “observers” as fundamental notions. But can you give a definition of what constitutes an “observer,” and at what point an observer accumulates “knowledge” which causes the state to change for other observers? I don’t mean a mathematical definition, I mean an operational one. Is Schrodinger’s cat an observer? That seems to me to be the crux of the measurement problem.

I want to echo what Greg Kuperberg said in Post 37, and express (what I take to be) pretty much the same ideas in the language of Kolmogorov/Chaitin algorithmic compressibility.

Alternatively, you can read the following paragraphs as a non-rigorous rationalization whose practical objective is to indoctrinate quantum systems engineering students “Don’t worry about quantum randomness.” :)

“Quantum systems engineers needn’t worry about observers or about measurements, because quantum state-spaces contain no such privileged elements. Instead, Alice and Bob share dynamical processes by which Alice learns about Bob if and only if Bob learns about Alice.”

“Alice and Bob both are changed by the knowledge they acquire, and that is why there is no such thing in quantum dynamical systems as measurement-without-backaction.”

“As for randomness, in the physical world it arises generically, whenever Alice does not know Bob’s starting state (or even her own starting state). In consequence, the data that Alice acquires is to a greater or lesser extent (usually greater) algorithmically incompressible; hence Alice’s data typically appears random (to Alice) in the algorithmic sense of Kolmogorov/Chaitin.”

“Furthermore, for reasons of algorithmic efficiency, quantum simulations throw away so much information about Alice’s state, or about Bob’s state (or about both of their states) that quantum simulations look just as random as quantum reality does.”

Obviously, the above picture is intended to minimize the role of various quantum mysteries … which for quantum systems engineering students (but perhaps not for physics students) is an entirely reasonable attitude.

Well, I’m a mathematician, which is I guess a lot closer to a physicist than to a quantum systems engineer. In particular, I’m specifically interested in what you call “quantum mysteries.”

Oh … pretty much everyone who has ever learned quantum mechanics ends up contemplating quantum mysteries … and it’s always enjoyable to discover how other folks’ mysteries (usually) differ strikingly from one’s own.

Over on Dick Lipton’s blog, when he asked for posts regarding what kinds of results would feature in future FOCS conferences, I divulged my own favorite quantum mystery, in the form of a fantasy FOCS article …

*M-Theory is the unique causally separable, relativistically invariant quantum field theory that can be simulated with classical computational resources* … and I added “But heck, please don’t ask me to envision what that M-theory might look like.” This particular approach to quantum mysteries was inspired by turning upside-down the above-referenced article by Bennett, Leung, Smith and Smolin,

Can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems? Here the general idea is to solve mysteries the Wittgenstein way: by *dissolving* them … this to my mind is the best approach (when it works, that is). :)

“But can you give a definition of what constitutes an “observer,” and at what point an observer accumulates “knowledge” which causes the state to change for other observers?”

I agree that this is a good question. Suppose that you have an idealized quantum system A with some states that you would like to have in quantum superposition; in mathematical language its algebra of random variables might be some matrix algebra $M_n(\mathbb{C})$. Then any other system whose state becomes correlated with A effectively acts as an observer. Observers don’t have to be sentient. If two states of an atom are used as a qubit, then any of the neighboring atoms could observe it.

In fact, one of the very interesting constructions in quantum information theory is an operational test for whether quantum data has been observed. This test can be used to establish a protocol for secret communications between Alice and Bob, so that they can use quantum probability rather than cryptography to know that there does not exist an eavesdropper Eve.

A cat is a facetious model for any of this. A cat which is alive is in a thermal state with a lot of entropy. It cannot make quantum superpositions because its state is hopelessly far away from a pure state. A living cat consists of zillions of tiny pieces that separately observe features of the cat’s state. Another way of saying it is that everything but a small commutative (or classical) subalgebra of the cat’s random variables is washed away by thermal averaging.

A dead cat could be okay, if it is at zero temperature. You could theoretically make a quantum superposition of a frozen cat in two different states, if you then did the extreme amount of work to freeze it really well, and isolate it from all other observers.

It seems to me that the real problem of “measurement” is not the introduction of correlation, but the collapse of that correlation into one state or another.

Now you’re changing the question. You asked me, operationally, who or what counts as an observer. Suppose that I, a human being, observe a quantum system and make its “wave function collapse”. Suppose further that I do not tell you the values of my measurements. Then my effect on your quantum measurements is exactly the same as if I were a wisp of air that just established a correlation with the same quantum system.

After all, we’re all quantum systems that play by the same rules. There is no rigorous distinction between a mindless mass of atoms and a small computer with a brain.

To really understand what I’m saying here, you should learn what a mixed state is in quantum probability. That is what allows you to make a classical superposition (or mixtures) of two quantum states, so that you don’t just have quantum superpositions. Without mixtures, it’s not a full probability theory, and you also can’t write down the effect of hidden measurement.

I don’t believe that I am changing the question, but if it looks to you like I am, then that means my original question was poorly phrased. I assumed that this was “the question” about the interpretation of quantum mechanics that everyone understood, but apparently not. Here’s another way to ask it: how do you account for the fact that I only experience one reality? My state can be as correlated as you like with a quantum system, say the combined state is (system in state A)(I observe state A) + (system in state B)(I observe state B), but what causes me to only observe either state A or B, not both? (I do know a little bit about mixed states, but it’s not clear to me what they have to do with the question.)

This conversation is moving fast! I’d like to address some things in comments 30-33.

I understand that BQP is a mathematical model which assumes that the standard quantum formalism is correct. In this model, the state of 100 particles is described by a density matrix. And I understand that there is mounting evidence that BQP is not in P. (Including the new result of Aaronson and Arkhipov that I have learned about from this discussion.) Whether BQP is in P is a mathematical question.

On the internet I frequently hear people saying that Kahler quantum mechanics could make quantum simulation tractable, by reducing us to problems on lower dimensional spaces. In this discussion, for example, John says that we can “pullback the concentrated quantum trajectories onto a low-dimension (Kählerian) manifold, where they can be efficiently integrated.” My understanding is that the claim here is a physical claim, not a mathematical one. The claim is that quantum mechanics is not precisely correct (or, at least, its modeling of noise is not realistic) and that a realistic model would not require working on such high dimensional spaces.

So, this is why I asked my question: What mathematical object, in the Kahler model, describes the state of 100 particles? Is it a probability distribution on $\mathbb{CP}^{2^{100}-1}$? Is it 100 probability distributions, each on a different $\mathbb{CP}^1$? Is it, to make up an arbitrary guess, a density matrix which is within $\epsilon$ of a matrix of rank $N$? I don’t think I’ve gotten an answer to this.

These questions are probably orthogonal to the discussion of the philosophy of measurement. I’m just trying to understand what the Kahler model of quantum computation says.

“I assumed that this was “the question” about the interpretation of quantum mechanics that everyone understood, but apparently not.”

Part of the problem is that what a lot of people understand as one question splinters into various questions, ranging from very interesting to boring, in quantum information theory. When I said that the question was ill-founded, part of what I meant was that balling all the questions together into One Big Question isn’t a great way to approach any important topic in science.

“Here’s another way to ask it: how do you account for the fact that I only experience one reality?”

Again, I don’t consider that the same question as which entities, operationally, count as observers. Even so, I can try to answer it. I don’t account for it, and I don’t think that I have to. All that I have to account for is your perception that you only experience one reality. To an extent, it has to do with the fact that for thermal reasons, people are classical computers and not quantum computers. But to an extent, even if you were a quantum computer, you’d still perceive only one reality, although it would depend on what part of your thinking you chose to measure that day.

If that sounds mystical, again, an accessible and important analogy is with classical Bayesianism. In the Bayesian viewpoint, I would describe your state as a probability distribution rather than as one reality. If I sampled that distribution by measuring you, you’d be very likely to tell me that you only experience one reality. I again emphasize that the Bayesians have non-quantum reasons for their interpretation. Also, it may sound like “many worlds”, but that is actually a fairly crude description of the point; probability distributions are not really the same thing as time bifurcation.

“My state can be as correlated as you like with a quantum system, say the combined state is (system in state A)(I observe state A) + (system in state B)(I observe state B), but what causes me to only observe either state A or B, not both? (I do know a little bit about mixed states, but it’s not clear to me what they have to do with the question.)”

Mixed states are essential for a different part of the story. Suppose that you observe the system without telling me that you did. Then for me, the system has changed from a quantum superposition of A and B to a classical mixture. That is what the hidden measurement operation does. When you said “wave function collapse” before, I had in mind a triangular model of the question: one system with two observers.

@David,

the state of a single particle (with no properties besides spin) lives on $\mathbb{CP}^1$. If it’s a pure state, that’s just a point on $\mathbb{CP}^1$, while a general mixed state is a probability distribution on $\mathbb{CP}^1$.

The state of two particles lives on a $\mathbb{CP}^3$. The “unentangled” states are the image of the Segre embedding $\mathbb{CP}^1 \times \mathbb{CP}^1 \hookrightarrow \mathbb{CP}^3$.

The state of 100 particles is a probability distribution (as usual, just a Dirac measure for a pure state) on $\mathbb{CP}^{2^{100}-1}$.

At least when you’re talking about things “exactly”, the Kahler model is nothing but taking the projective spaces of the usual Hilbert spaces, and these are just tensor products of the Hilbert spaces for constituent parts. Simulating or modelling these things, well, perhaps something different is going on…
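The Segre-embedding description above can be made concrete: on homogeneous coordinates the embedding is just the Kronecker product, and membership in its image is a rank-one condition on the reshaped amplitude matrix. A small numpy sketch (function names and the tolerance are my own):

```python
# Segre embedding CP^1 x CP^1 -> CP^3 as a Kronecker product, plus a
# rank test that detects whether a two-qubit pure state is in its image.
import numpy as np

def segre(psi, phi):
    """Segre embedding on homogeneous coordinates: (psi, phi) -> psi (x) phi."""
    return np.kron(psi, phi)

def is_unentangled(state, tol=1e-12):
    """A two-qubit pure state is a product state iff its 2x2 amplitude
    matrix has rank 1, i.e. its second singular value vanishes."""
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return bool(s[1] < tol)

product = segre(np.array([1.0, 2.0]), np.array([3.0, 1.0]))
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(is_unentangled(product), is_unentangled(bell))  # True False
```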

Okay, I’m sorry that I phrased my question poorly. Thanks for trying again to answer my new question.

“All that I have to account for is your perception that you only experience one reality. To an extent, it has to do with the fact that for thermal reasons, people are classical computers and not quantum computers. But to an extent, even if you were a quantum computer, you’d still perceive only one reality, although it would depend on what part of your thinking you chose to measure that day.”

Is that supposed to be an accounting for my perception? I don’t see an explanation of it in there.

“In the Bayesian viewpoint, I would describe your state as a probability distribution rather than as one reality.”

What I have learned to call “Bayesian” is the viewpoint that probability measures our degree of belief, rather than being merely a frequency ratio over a large number of trials. Wikipedia gives me no indication that this is wrong. I’ve never heard that “Bayesian” entails a disbelief in an objective reality. But if it does, your argument is more likely to make me disbelieve Bayesianism than to make me believe that the quantum measurement problem has gone away.

“Wikipedia gives me no indication that this is wrong. I’ve never heard that “Bayesian” entails a disbelief in an objective reality.”

I concede that Bayesianism does not require you to reject the tenet of deterministic reality. However, it certainly does allow that. If you’re supposed to model your belief by a distribution, then does anything beyond that have to exist?

(Note that “objective” is somewhat different from “deterministic”. In other interpretations of the word “objective”, I do believe in objective reality.)

Certainly the Bayesian viewpoint is that thinking of underlying determinism is a distraction, when it isn’t outright misleading. Quantum probability is Bayesian in the sense that determinism proves to be a much more severe distraction, for both radical new reasons and for the same old reasons. Sure, there could always be some superior non-Copenhagen alternative; it’s fine to look for one. But the existing alternatives not only don’t look superior, they don’t even look well-motivated.

In one respect I may be overstating the point. I’m trying to describe the way that I see it and many quantum information theorists see it. It’s a certain viewpoint that is helpful for doing research in this area. Beyond that, I don’t mean to proselytize; you can believe what you like.

Also, I want to give a slightly different answer from Scott to David’s question:

It is true that a pure quantum state is an element of $\mathbb{CP}^{n-1}$ for an $n$-state quantum system. The standard model for a mixed state of the same system is a point (a density matrix) in the convex hull of $\mathbb{CP}^{n-1}$ in a Hermitian-Veronese embedding of $\mathbb{CP}^{n-1}$ in the $n \times n$ Hermitian matrices. Of course, a distribution on $\mathbb{CP}^{n-1}$ determines a point in this convex hull. However, in the usual interpretation, the distribution is a redundant description of the density matrix, which is the actual state. (Obviously, Ashtekar and Schilling want to suggest that the distribution on projective space is not redundant.) The density matrix is meant as a quantum replacement for the concept of a distribution.

Notice, by the way, that if you have a classical distribution on $n$ bits, then that distribution is also a point in a simplex whose dimension is exponential in $n$. In this sense, the density matrix is not all that different and not all that much bigger.
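The redundancy claim above has a two-line demonstration in the smallest case (a numpy sketch of my own): two visibly different ensembles of pure states, i.e. two different measures on $\mathbb{CP}^1$, produce the identical density matrix, so no measurement can distinguish them.

```python
# Two distinct ensembles, one density matrix: the mixture {|0>, |1>} and
# the mixture {|+>, |->}, each with probability 1/2, both give I/2.
import numpy as np

def ensemble_to_density(states, probs):
    return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

rho_z = ensemble_to_density(z_basis, [0.5, 0.5])
rho_x = ensemble_to_density(x_basis, [0.5, 0.5])
print(np.allclose(rho_z, rho_x))   # True: the distribution was redundant
```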

“Certainly the Bayesian viewpoint is that thinking of underlying determinism is a distraction, when it isn’t outright misleading. Quantum probability is Bayesian in the sense that determinism proves to be a much more severe distraction, for both radical new reasons and for the same old reasons.”

This seems to be a different view on Bayesianism than I tend to take. (For reference: I’m a philosopher who tends to sympathize with Bayesianism.)

It seems to me that the idea of Bayesianism is just that there’s a very useful way to use probabilities to represent epistemic features of a situation, which neither precludes nor requires a use of probability to talk about objective features of the situation. It’s fully compatible with determinism and indeterminism, but if you go for indeterminism, then you’re likely to have some sort of non-Bayesian probability to deal with as well. In some cases talking about the underlying dynamics (whether deterministic or indeterministic) might be distracting, but I think in general it’s probably going to be an important complication to bring into the picture.

However, I suspect this isn’t the notion of determinism that you’re talking about. Instead, you’re talking about something I would call realism vs. anti-realism, in that it’s a question about whether or not there’s an objective fact of the matter as to whether some claim about the past or present is true, and not just for claims about the future (which is where I would tend to locate disputes about determinism). And in general, I would think that standard Bayesianism almost requires some sort of realism – after all, if some claim might have a status other than true or false, then why should my degrees of belief in it being true or false add up to 1? I should have degrees of belief in each status that it could have.

Anyway, as I see it, quantum mechanics is basically the *only* reason I can see for anyone to be interested in non-Bayesian probabilities – on at least some interpretations, the probabilities really reflect objective features of the world, and not just the epistemic status of any agent, and this means that they are non-Bayesian (in the sense I’m most familiar with). Of course, some people, like David Deutsch and followers, suggest that even quantum probabilities are Bayesian. And some people think that we need non-Bayesian probabilities to make sense of statistical testing. I suspect that the latter are misguided, but I’m definitely not certain of that.

This is one of the best scientific blog topics that I have ever seen! The following little essay will address three points that have been raised, with a view toward agreeing with everyone (since IMHO everyone has been asking sensible questions) while focussing upon the overall question:

“Where’s the [mathematical] beef?”

————————–

From 46. (David Speyer):

> What mathematical object, in the Kähler model, describes the state of 100 particles?

The too-simple answer: curves on joins of multilinear complex algebraic varieties.

Or as these state-spaces are often called nowadays, tensor network states. Or as they are called when they are anti-symmetrized, Grassmannian varieties (via the Plücker embedding) … which are the same as Slater determinants of quantum chemistry … which when algebraically joined become Hartree-Fock state-spaces … Or as they are called when symmetrized (uhhh … boson state-spaces? … permanent state-spaces?). Or as they are generically called when they are constrained by slicing-and-dicing in a generic multilinear way, matrix product states.

The point is that Kählerian quantum state-spaces (aka joins of multilinear complex algebraic varieties) are nothing new … heck … way back in 1928 physicists were using them under the name Hartree product states! That is why Charles van Loan calls them (in an article of the same title)

> The Ubiquitous Kronecker Product.

So these Kählerian algebraic varieties are not arcane objects that are going to be used “someday” as quantum state-spaces — they have been the bread-and-butter of practical quantum calculations for more than eighty years.
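As a concrete (if trivial) illustration of this Kronecker-product structure — a sketch of my own, with arbitrary illustrative numbers — a Hartree product state of k qubits is the iterated Kronecker product of k single-qubit vectors, so it takes only 2k complex parameters to specify, while a generic vector in the full Hilbert space takes 2^k:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

def normalized(v):
    return v / np.linalg.norm(v)

# k single-qubit states, each a unit vector in C^2.
k = 8
factors = [normalized(rng.normal(size=2) + 1j * rng.normal(size=2))
           for _ in range(k)]

# The Hartree product state: the iterated Kronecker product of the factors.
psi = reduce(np.kron, factors)

print(psi.shape)    # → (256,): the full Hilbert space has dimension 2**8
print(2 * k, 2**k)  # → 16 256: product-state vs. generic parameter counts
```

The product states form a low-dimensional multilinear variety inside the exponentially large Hilbert space; tensor network states and matrix product states enlarge this variety in controlled ways.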

————————–

46: David Speyer:

> Does a realistic model require working on such high-dimensional [Hilbert] spaces?

The too-simple answer: In principle always, in practice never.

The key question is, how noisy is the system? If it has zero noise (or the noise is error-corrected), then we can program that computation to visit any portion of the (exponentially large) Hilbert state-space that we want.

But if the system has finite noise (e.g., by being in contact with a thermal reservoir, including a zero-temperature reservoir), then the dynamics are described by drift-and-diffusion processes, and (with a proper choice of gauge) these processes are generically concentrative. In practice this means that there are (exponentially large) regions of the state-space that you can’t visit.

Is there a generic geometry of the manifolds onto which trajectories are dynamically compressed? Yeah baby … it’s those Kählerian manifolds of the previous question!

Of course … this remains to be proved … but heck … every engineer, chemist, and biologist “knows” that it’s true.
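For whatever it’s worth, the simplest toy model of such a “concentrative” noisy flow is easy to write down. This is my own illustration, not from the comment above, with an assumed damping strength: repeated amplitude damping drives every single-qubit state toward the ground state, compressing all trajectories onto a single point of the state-space.

```python
import numpy as np

# Kraus operators for single-qubit amplitude damping (standard form).
gamma = 0.1  # damping strength per step (an assumed illustrative value)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

# Start from |+><+| and apply the channel repeatedly.
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
for _ in range(400):
    rho = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# The trajectory has been compressed onto the ground state |0><0|.
ground = np.diag([1.0, 0.0]).astype(complex)
assert np.allclose(rho, ground, atol=1e-6)
print(np.round(rho.real, 6))
```

Here the attracting set is zero-dimensional; the conjecture above is that for realistic many-body noise the attracting sets are low-dimensional Kählerian varieties rather than single points.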

To speak seriously, there is a very great practical need among engineers, chemists, and biologists for rigorous proofs that concentrative flows onto low-dimensional multilinear complex varieties are generically present in noisy quantum systems — these would be a concentrative analog, for metric quantum flows, of von Neumann’s ergodic theorems for symplectic quantum flows.

————————–

From 47. (Greg Kuperberg):

> What a lot of people understand as one question splinters into various questions, ranging from very interesting to boring, in quantum information theory.

Here I agree with Greg 100% … except that so far, QSE Group has yet to encounter *any* of the splinters that are boring, because it is a very interesting study (in itself) to analyze how each splinter dovetails with all the other splinters (which is how systems engineers have to think).

From a systems engineering point of view, here is the too-simple explanation of how quantum physics works … which turns out to be pretty much the same way that classical physics works (and Terry Tao’s recent blog post “From Bose-Einstein condensates to the nonlinear Schrödinger equation” conveys many of these same ideas).

We’ll begin at the least abstract level with these three principles: (1) an apparatus is a quantum device that has an input BNC cable and an output BNC cable; (2) a sensor is a quantum device that has an input BNC cable and an output BNC cable; (3) a simulation is a quantum device that has an input BNC cable and an output BNC cable.

Of course, from this point of view a quantum system, a sensor, and a simulation are all really the same thing … yeah, that is what is intended!

So uhhh … what would happen if we connected these three devices in a loop? (which, topologically, is all that one can do). And while we’re at it, let’s install a Hamiltonian interaction term that couples the apparatus state-space flow to the sensor state-space flow (we’ll keep the simulation state-space decoupled because, heck, otherwise our computer isn’t portable!). What have we accomplished? In the immortal words of Monty Python’s Brave Sir Robin: “That’s easy! We’ve invented sensing-and-control technology”!

And just to mention, the simulation element is ubiquitously present in modern technologies (systems engineers call this element HWIL == hardware in the loop) because optimal controllers *necessarily* incorporate optimal models of the system being simulated … and these simulations have to run in real-time (which adds to the challenge); this pragmatically accounts for why quantum systems engineers are obsessively interested in efficient quantum simulation.
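A deliberately classical cartoon of this apparatus/sensor/simulation loop might look like the following. This is entirely my own sketch; all the names, gains, and noise levels are invented for illustration.

```python
import random

random.seed(0)

x = 5.0        # "apparatus": a scalar degree of freedom to be regulated
model = 0.0    # "simulation": the controller's running model of x (HWIL)
setpoint = 0.0
gain, blend = 0.2, 0.5

for _ in range(200):
    reading = x + random.gauss(0, 0.1)             # "sensor": noisy readout
    model = (1 - blend) * model + blend * reading  # update the in-loop model
    u = -gain * (model - setpoint)                 # control using the model
    x += u                                         # actuate the apparatus

# The loop regulates the apparatus toward the setpoint despite sensor noise.
assert abs(x - setpoint) < 0.5
```

The point of the cartoon is only topological: the simulation never touches the apparatus directly; it enters the loop through the controller, which is why it must run in real time.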

Of course, we invariably have very different expectations for the performance of each of these three boxes … and for this reason their contents typically are designed very carefully to meet those expectations … and I think folks on this forum can write out for themselves what some of those expectations are.

E.g., one common situation is that the simulation box is designed to be hot with irreversible, digital, Lindbladian internal dynamics (it’s a computer CPU or an FPGA, for example); the apparatus box is cold with reversible, Hamiltonian, analog internal dynamics (it’s a nanoscale cantilever and/or ions in a trap, for example), and the sensor box might be an optical linear amplifier that is as low-noise as feasible (which nowadays approaches Carlton Caves’ 3dB quantum noise limit mighty closely) that is coupled to the apparatus by a fiber optic network that is as near to unitary as feasible (which nowadays is mighty close to perfect). And these would constitute the building-blocks of a quantum spin microscope (for example), or even of an Aaronson/Arkhipov apparatus.

Now let’s describe this apparatus as compactly as possible. One empirical approach is simply to compute a lot of simulated trajectories, and store them in a huge database. Then we can answer any query about the apparatus just by replaying these simulations.

To be candid, that is *exactly* what engineers and biologists usually do … often it is *all* that they do … and usually it is *enough* for them to adopt this simple approach.

It’s true that this approach works well only for noisy systems, or low-temperature systems, or coarse-grained systems, or highly symmetric systems; all of which dynamically compress trajectories onto low-dimensional manifolds … but these are precisely the systems that are of greatest interest in engineering and biology.

In consequence, at any given time a real-world molecular dynamics group will have on-hand (typically) 10-100 terabytes of simulated molecular dynamical trajectories—which in practice is enough data to answer any “reasonable” question about the molecules being simulated (via the same mathematical reasoning as the first half of Terry Tao’s “From Bose-Einstein condensates to the nonlinear Schrödinger equation”).

Of course, mathematicians (and physicists) love to ask-and-answer even *unreasonable* questions, of the sort that biologists and engineers are seldom motivated to ask. This desire leads to the idea of a density matrix, which is a mathematical object that (by mathematical design) encodes the answers to *all* questions, both reasonable and unreasonable.

And herein are the seeds of a cultural clash. For reasons of efficiency, engineers, chemists, and biologists love to simulate trajectories on low-dimensional Kählerian state-spaces—which don’t naturally support density matrix descriptions. For reasons of completeness, mathematicians and physicists love to describe quantum trajectory datasets in terms of density matrices—which are defined only on Hilbert spaces whose dimension is too high to permit efficient simulation.

History has shown (IMHO) that these two approaches are wholly compatible … indeed the “Kählerians” and the “Hilbertians” have been coexisting peaceably for more than eighty years.

To say it another way, Hilbert’s 1930 maxim

> Wir müssen wissen, wir werden wissen

(“We must know, we will know”; hear Hilbert’s voice here) has for a century been the ideal of (most) mathematicians and (many) physicists; for this goal Hilbert-space methods work particularly well. In subsequent decades Hilbert’s ideal has been evolving (as all ideals evolve over time) … and in the hands of biologists, chemists, and spin microscopists it is evolving into

> “In biology and engineering there is no *ignorabimus*! We must see, we will see; and all that we cannot directly observe, we will simulate.”

For this goal Kählerian methods are well-adapted. That is why (IMHO) the twenty-first century is going to witness tremendous progress for both the Kählerians *and* the Hilbertians. There may be some arguments and squabbles along the way, but heck, moderate levels of arguments and squabbles are always present in creative mathematical and scientific endeavors … and now that the blogosphere exists, this necessary engagement can proceed with wonderful celerity! :)

————————–

A final remark is that there are many quotes by Feynman to much the same effect as the above comments. These quotes are surprisingly long-winded and opaque (for Feynman) … Feynman struggles to find a clear way to express these ideas (which is reassuring evidence that they are tough) …

“It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don’t know why this is – it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what the reason for this is. I think it is somehow a representation of the simplicity of nature. I don’t know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.”

“We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore, appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression?”

“Nevertheless, a very great deal more truth can become known than can be proven.” … “I have proven to myself so many things that aren’t true”

—————————

@article{*, Author = {C. F. Van Loan}, Journal = {J. Comput. Appl. Math.}, Number = {1--2}, Pages = {85--100}, Title = {The ubiquitous {K}ronecker product}, Volume = {123}, Year = {2000}}

The point of many worlds is to reduce the problems of consciousness in quantum mechanics so that they are *no worse* than the problems of consciousness in classical mechanics. Mike Shulman’s question of why he experiences only one world if, allegedly, there are many parallel copies of him, is pretty similar to the question of what happens to experience when you sever the connection between the two hemispheres of the brain – does this change the number of observers?

In the Copenhagen interpretation, observers cause collapses, so it is important what counts as an observer. The “measurement problem” that other theories solve is to get the same high-level effects from a theory in which observers are not physically fundamental, either because something else causes collapse or because collapse doesn’t happen. Either way, it doesn’t matter whether we call the cat an observer, because observers don’t cause collapse.

What is the analog of the partial trace operation on density matrices on H_1 \otimes H_2 in terms of probability distributions on the corresponding phase space?
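For reference, here is the partial trace itself, the operation whose phase-space analog is being asked about (a sketch of my own; the dimensions are arbitrary). Its classical counterpart on an ordinary joint distribution is marginalization over the second factor.

```python
import numpy as np

d1, d2 = 2, 3

def partial_trace_2(rho, d1, d2):
    """Trace out the second subsystem of a (d1*d2) x (d1*d2) density matrix."""
    r = rho.reshape(d1, d2, d1, d2)
    return np.einsum('ajbj->ab', r)

# Example: a product state rho1 (x) rho2 should trace down to rho1.
rho1 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
rho2 = np.eye(3, dtype=complex) / 3
rho = np.kron(rho1, rho2)
print(np.allclose(partial_trace_2(rho, d1, d2), rho1))  # → True
```

The reshape splits each index of the big matrix into a (subsystem 1, subsystem 2) pair, and the einsum contracts the two subsystem-2 indices against each other, exactly as the trace sum Tr_2 demands.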