I know a few regular readers of this blog have views about the many worlds interpretation of quantum mechanics. I want to ask a question about what is supposed to be the response to a basic worry about a whole family of approaches. (David Wallace's recent book would be an obvious case, but so would Sean Carroll's recent contributions.)

 

The many worlds interpretation basically says that whenever you make a "measurement" in QM (say you have a particle that is spin up in the y direction and you measure spin in the x direction), the world continues to evolve according to the Schrödinger equation, and the only thing that makes it look like the measurement has a determinate outcome is that the world splits into two emergent worlds, with an emergent observer in each one. The trick of all this, of course, is to somehow explain why there is probability when all of the outcomes are occurring. One problem I have with all of these attempts to get probability out of the theory is that they all go like this.
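A minimal numerical sketch of the example above (my own illustration, not from any particular Everettian text): prepare spin-up along y, expand it in the x-spin basis, and note that unitary evolution by itself only hands you two equal-amplitude components — the would-be branches — without saying anything yet about probability.

```python
import numpy as np

# Sketch of the post's example: a particle prepared spin-up along y,
# then "measured" along x. Unitary evolution alone never picks one
# outcome; it just rewrites the state in the x basis with two
# equal-weight components.

sy = np.array([[0, -1j], [1j, 0]])  # Pauli y
sx = np.array([[0, 1], [1, 0]])     # Pauli x

# spin-up-y state: the +1 eigenvector of sigma_y
vals, vecs = np.linalg.eigh(sy)
up_y = vecs[:, np.argmax(vals)]

# amplitudes of that state in the sigma_x eigenbasis
xvals, xvecs = np.linalg.eigh(sx)
amps = xvecs.conj().T @ up_y
print(np.abs(amps) ** 2)  # [0.5, 0.5] -- the Born-rule weights
```

The mod-squared amplitudes come out 1/2 and 1/2; the whole question of the post is what entitles us to call those numbers probabilities.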

1. Assume decoherence gets you branches in some preferred basis.
2. Give an argument that the Born rule applied to the amplitudes of these branches yields something worthy of the name ‘probability.’

The problem is that these steps happen in the reverse of the order in which one would like them to happen.

Look at step one. Decoherence arguments involve two steps:

1.a) showing that as the system+detector gets entangled with the environment, the reduced density matrix of this entangled pair evolves such that all the off-diagonal elements get very close to zero,

and

1.b) reasoning that, therefore, each diagonal element corresponds to an emergent, causally inert "branch."
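To make step 1.a concrete, here is a toy model (my own illustration; the states and parameters are assumptions): a system qubit entangles with n environment qubits, each of which ends up in |e0⟩ or |e1⟩ depending on the system state, with per-qubit overlap ⟨e0|e1⟩ = cos θ. Tracing out the environment multiplies the off-diagonal of the reduced density matrix by that overlap once per qubit, so it decays exponentially in n.

```python
import numpy as np

# Toy decoherence model. System: a|0> + b|1>. Each environment qubit
# "records" the system state, ending in |e0> or |e1>; the full state
# after n qubits is a|0>|e0>^n + b|1>|e1>^n.

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)  # system amplitudes (assumed)
theta = 0.5                            # per-qubit entangling angle (assumed)
e0 = np.array([1.0, 0.0])
e1 = np.array([np.cos(theta), np.sin(theta)])  # <e0|e1> = cos(theta)

def reduced_density_matrix(n):
    """System's reduced density matrix after entangling with n
    environment qubits (explicit partial trace over the environment)."""
    env0, env1 = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        env0 = np.kron(env0, e0)
        env1 = np.kron(env1, e1)
    psi = np.concatenate([a * env0, b * env1])  # basis: |system> ⊗ |env>
    psi_mat = psi.reshape(2, -1)                # rows index the system
    return psi_mat @ psi_mat.conj().T           # trace out the environment

for n in [0, 5, 20]:
    rho = reduced_density_matrix(n)
    # off-diagonal = a*b*cos(theta)**n: small, but what does it *mean*?
    print(n, rho[0, 0].real, abs(rho[0, 1]))
```

The diagonal entries stay fixed at |a|² and |b|² while the off-diagonal shrinks like cosⁿθ. The worry in the post is precisely that nothing in this calculation, by itself, tells you how to interpret that shrinking number.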

But step 1.b is fishy insofar as it happens before step 2. Who cares if the little numbers on the off-diagonals are very close to zero, until I know what their physical interpretation is? Not all very small numbers in physics can be interpreted as standing in front of unimportant things. Now, if we could accomplish step 2, then we could discard the off-diagonal elements, because we know that very small _probabilities_ are unimportant. But the cart has been put in front of the horse. We can't conclude that the "branches" are real and causally inert and have independent "observers" in them _until_ we have a physical interpretation of the off-diagonal elements being small. But all of these Everettian moves do 1.b first, and only afterwards do 2.

 

Now it's true that the fact that the off-diagonal elements are small tells us that the different branches don't interfere with each other very much in terms of their future evolution. I.e., I could evolve a branch forward in time, and the result is almost completely independent of the existence of the other branches. But the notions of "not very much" and "almost" here are still expressed in terms of small, but physically uninterpreted, numbers.

I think what often drives the intuition that it is OK to interpret the small off-diagonal terms as telling you that the branches are independent is that we understand the off-diagonal terms as the "interference terms." But I think this still smuggles in a probabilistic notion. "Interference" is a probabilistic notion, one we get from, e.g., thinking about "how often" we expect interference to show up in the statistics.

 

OK. So, this worry is out there in the literature. What's the response?

 

 


12 responses to “Probability in Many Worlds: Is the cart in front of the horse?”

  1. David Wallace

    Sorry! – but there isn't. Really, I don't think there's any profound difference here between the role of the Hilbert space metric in quantum physics and, say, the spatial metric in classical physics. Instantiation is always approximate, and we measure tha

    The objection was originally made by Dave Baker, I think (Stud.Hist.Phil.Mod.Phys 38 (2007) pp.153-169); Adrian Kent also discusses it in Saunders et al, “Many Worlds?” (OUP, 2010).
    The only response I’m aware of in the literature is in my book (pp.253-4):
    SKEPTIC: Isn’t there something a bit circular about your whole position here? First you appeal to decoherence theory to argue that a branching structure is approximately realised — which is to say, is realised to within errors of small mod-squared amplitude. Then you appeal to decision theory, or symmetry, or whatever, to explain why the mod-squared amplitudes in that branching structure are probabilities. But surely the only thing that justifies regarding an error as small if it’s of small mod-squared amplitude is the interpretation of mod-squared amplitude as probability.
    SKEPTIC: But that’s just the same thing again. What makes perturbations that are small in Hilbert-space norm “slight”, if it’s not the probability interpretation of them?
    SKEPTIC: It sounds awfully inconclusive and hand-waving, frankly.

    SKEPTIC: Please don’t tell me that you’re going to claim again that there’s nothing Everett-specific about the problem.

    The only thing I’d add to this, with hindsight, is that I think the usual 2-step explanation (first emergence, then probability) is partly pedagogical. You really want to think of the probabilistic dynamical structure emerging, not of something non-probabilistic emerging and then subsequently getting a probabilistic gloss. (This is something I’ve realised in the last few years thinking about dynamical emergence in quantum stat mech.)


  2. Eric Winsberg

    Thanks David! I don't know how I missed this in the book. (I confess to having skipped over lots of technical parts, but I was pretty sure I read all the interlude/dialogs.)
    But I guess I don't quite understand the last part. (And this is from someone who is not entirely unsympathetic to the "you have that problem too" move when it comes to probability. Probability is a mysterious thing on all interpretations.)
    In classical physics: I measure distance with rods and such, and I try to build an observable and measurable feature of the world into my theory from the ground up. If some effect in a classical theory is said to be "small," this claim is usually (always?) made with some physical interpretation of the small number in mind. If I throw out some nth-order effect because I calculate it to be small, I do this with an idea in mind of what the small thing is and why I don't care about it.
    But suppose the world really is just one unitarily evolving state vector. What reason do I have to believe that in such a world, observers in their own little worlds would emerge that were causally isolated?
    I'm also not fully getting the comment made in hindsight. I thought probability, on your view, was something to be explained in terms of the rationality of the emergent agents. So, I think you do first need the emergent agents, and only then can you get the probabilities.


  3. Sean Carroll

    Hi Eric– I think this is mixing up a real (i.e., not-solved-in-my-mind) problem with a less-real one. The less real problem is “How can I ignore small numbers in a density matrix without a probability interpretation?” I don’t think EQM is that different than classical mechanics in the way you suggest. The formulation of the theory certainly requires a norm on Hilbert space. Therefore, I think it makes perfect sense to say that the evolution of a branch is insensitive to certain small numbers, because we have quantitative ways of characterizing that smallness. But those ways have nothing to do with a probability interpretation; I think it’s fair that that comes later.
    But in your last comment I think you hit on something quite interesting and challenging: if the world is just an evolving state vector, how do we factorize it into subsystems? E.g., why can’t we just write the whole wave function in an energy eigenbasis, in which different basis states never evolve into each other, and basically nothing happens?
    I’m not completely settled on an answer to that myself, but I think it will depend on the specific state of the universe as well as the specific Hamiltonian, as well as some clever approximations. I.e. you will want to say “Given this state and this Hamiltonian, it is well-approximated as a set of causally disconnected systems with the following weak interactions…” Clearly that would be work, starting from scratch. But it’s not so different from what we actually do when describing the real world.


  4. Eric Winsberg

    Ok. I think I can accept the idea that the Hilbert space has a natural notion of insignificance built into it. This passage of David’s helps me the most: ” Small changes in the energy eigenvalues of the Hamiltonian, in particular, lead to small changes in quantum state after some period of evolution. Sufficiently small displacements of a wavepacket lead to small changes in quantum state too.”
    But I'm not getting the "classical mechanics has this too" point. I come to classical mechanics with a basic understanding of the important measurable quantities that I want the theory to represent for me. I don't read them off the theory in the same way I do from the Hilbert space representation.
    Finally, Sean: let me see if I understand your last point in connection with the passage I quoted from David. Is the idea that if there is an eigenbasis in which the basis states never evolve into each other, i.e., in which "nothing happens," then all of David's claims become moot? Small changes don't lead to small changes because nothing leads to anything. Or was that only part of the worry, or none of the worry? Or, reading again, is the worry that decoherence, with its system+detector vs. environment, requires the world to evolve in some way?


  5. David Wallace

    Eric, re the classical/quantum issue: I think my sense of classical mechanics is a bit more abstract than yours might be. If I think about, say, classical field theory, or the dynamics of general relativity, I don’t have any pre-theoretic sense of what should be large or small: I expect to learn it from the dynamics.
    Whether that applies to even “mundane” classical mechanics turns (I think) on a quite deep question of metaphysics. My take in general is that we start with some baseline theory understood just in terms of structure and dynamics, and we have to recover macroscopic properties, insofar as we can, emergently and approximately from that theory. Our epistemology then has to be derivative on our own status as emergent, approximate things, and so is itself going to have to rely on whatever sense of significance/insignificance the dynamics gives us.
    If you start instead with (say) Allori et al’s/Maudlin’s conception of primitive ontology, where we’re just assumed to have epistemic access to the locations of (some) matter, I think things would work out quite differently. I have problems with that approach conceptually, but more fundamentally I just don’t have any confidence that physics is going to give us anything like that at the ultimate bedrock level (even putting Everett aside, insofar as I can separate Everett from the ontology of modern physics).
    (Most of this is blog-level thinking-aloud – don’t take it as desperately well thought out. Incidentally, Bacciagaluppi and Ismael have a very nice (in every sense!) review of my book coming out soon in Phil.Sci. that’s good on this “deep metaphysical question” thought.)


  6. Eric Winsberg

    Thanks, David. That helps. I certainly agree with this: "I just don't have any confidence that physics is going to give us anything like that at the ultimate bedrock level (even putting Everett aside, insofar as I can separate Everett from the ontology of modern physics)."
    I guess I did make it sound like I was vying for primitive ontology in my post above. Maybe what I should have said is: it seems like in classical mechanics, we can calculate what variables, if they have a small value in the theory, will result in small observable effects to us as observers. We can do this without requiring the theory to, as you say, recover for us "our own status as emergent, approximate things." In anything other than MWQM, I know exactly what sort of thing I am, independent of the theory. I know what my observational capacities are, etc. I don't need the theory to recover that for me. But in MWQM, all that is up for grabs, because I am no longer secure in knowing what sort of thing I am according to the theory.
    In any case, I think I'm reasonably happy with the previous reply in the dialog, so maybe this doesn't matter that much to me.


  7. David Wallace

    “In anything other than MWQM, I know exactly what sort of thing I am, independent of the theory.”
    That’s not obvious to me. I can understand “what sort of thing I am” in two ways:
    (I) what sort of thing I am, stated in ordinary pre-theoretic language (or the language of higher-level theories, I guess).
    (II) what sort of thing I am, stated in terms of baseline physics.
    Everett gives the same answer to (I) as any other theory – precisely because (I) is independent of lower-level theories. There is then a question of how to translate pre-theoretic or higher-level language into baseline physics, but that's not specific to Everett so far as I can see (at least, unless we make primitive-ontology moves).
    (II) doesn’t actually have an answer in any physics earlier than QED, because in no previous theory can entities anything like us actually be represented. (No stable matter or working chemistry in classical physics, no light in NRQM…)
    The reason that “in classical mechanics, we can calculate what variables, if they have a small value in the theory, will result in small observable effects to us as observers” is, so far as I can see, just because we’re helping ourselves to certain relations between observations and the physics. It’s not because the physics is adequate to actually model the process of observation.
    (There’s obviously more here to work out, and plenty I’m shaky on – clearly “modelling ourselves in the theory” is not in fact how any scientific theory has made contact with experiment in practice. At a first guess, I think there might be a bootstrap going on – make a posit about what counts as a small disturbance or a good approximation, build your model of observers with respect to that posit, and then confirm that their actual observational capabilities are such that the original posit is confirmed.)


  8. Eric Winsberg

    Ok. You are extremely good at playing the "there's nothing Everett-specific about the problem" card. I'm probably going to lose at this (which is totally fine – I'm just trying to sort out my views, not to dig in), but here goes:
    I guess what I had in mind has three parts.
    (1) An understanding of what sort of thing I am, stated in ordinary pre-theoretic language and/or the language of higher-level theories.
    (2) The absence of a principled reason, from the point of view of the baseline physics, for thinking that the understanding I had in (1) is going to be radically revised.
    (3) This allows me to calculate, using some kludged, cobbled-together, but not in principle obviously misleading, combination of the baseline physics and the stuff in (1), the following thing:
    I can figure out which variables, if they have a small value in the theory, will result in small observable effects to us as observers.
    What seems different about EQM is that it undercuts this strategy, because it violates (2). It tells me that my pre-theoretic (or higher-level theoretic) understanding of myself is wildly wrong. And moreover it tells me that before I can figure out what to replace that with, I have to settle the question about what is emergent, and hence I have to settle the question of which values in the baseline theory can be ignored because they are small. So, the circularity problem re-emerges in a way that it doesn't in any other baseline physical theory.


  9. David Wallace

    That's a helpful way of putting it. In that framework, I'd say that EQM is fine with (2): that is, I don't take the Everett interpretation as requiring any radical revision of my (1)-type understanding of myself. Ordinary-language, and ordinary-higher-level-theory, claims basically come out as true on my reading of the Everett interpretation. What I'm wrong about is (i) various beliefs about the wider universe (that it's much smaller, and has many fewer versions of me in it, than in reality); and (ii) various baseline theories as to why the account in (1) is true. So a statement like "according to quantum theory, we're constantly splitting into myriad copies" is analogous to "according to chemistry, we're made of myriad impossibly tiny pieces" or "according to astrophysics, we're really whizzing through the solar system at incredible speed."
    Here's another analogy. Plausibly, part of the "understanding" required by (1) is an understanding of ourselves as rational, cogitating, freely-acting agents. According to neuroscience and microbiology, we're really an alliance of vast numbers of individual replicating systems bound together by unfathomably complex biochemical and neural processes, and the apparent "decisions," "actions" and "thoughts" we take are really just a playing out of blind laws of physics and chemistry among various neural and endocrinal pathways. People could think – people have thought – that this means that we do have principled reasons for thinking our understanding of ourselves has to be radically revised. But I take it the mainstream naturalistic position is that our antecedent understanding was basically right – it's just that our antecedent guesses as to the supervenience base and emergence relations turned out to be too unimaginative.


  10. Eric Winsberg

    But aren't we back into circularity? You wrote "according to quantum theory, we're constantly splitting into myriad copies," but you don't get that UNTIL you've got emergence out of decoherence. But you don't get emergence out of decoherence until you explain why the small numbers don't matter, and you don't get that (at least not in the same way you get it in non-EQM and other theories) until you have 1, 2 and 3.
    In short, my worry is that prior to an emergence result you don't have 1, 2 and 3. Prior to an emergence result, I don't split into myriad copies; I blur into something that looks nothing like a self or any number of copies of one.


  11. David Wallace

    Sorry, only just saw you replied.
    Let me put it this way: what is the “principled reason” to think that Everett undermines (1)? I’m worried that it’s some pretheoretic intuition about Everett rather than something that genuinely distinguishes it from other theories.
    (If you’d rather leave this for another time/discussion that’s fine – it’s become a rather slow motion discussion and the blog has moved on!)


  12. Eric Winsberg

    Here's what I had in mind. Remember the topic is whether there's anything Everett-specific about the problem of justifying a norm of "ignorability" on one's configuration space, or whatever.
    Here's what I claim a 19th-century scientist could do that an Everettian can't, in principle, do, in the logical order in which it appears below.
    (1) Write down what sort of thing I am, stated in ordinary pre-theoretic language and/or the language of higher-level theories.
    (2) Calculate, using some kludged, cobbled-together, but not in principle obviously misleading, combination of the baseline physics and the stuff in (1), which variables, if they have a small value in the theory, will result in small observable effects to us as observers.
    (3) Use what I get in (2) to justify a norm of ignorability.
    The reason is that the Everettian can't do (2) prior to (3). Prior to having a norm of ignorability, the Everettian has no emergent observers, and hence can't kludge her pre-theoretic/higher-theoretic understanding of what an observer is with the QM understanding of what an observer is, because in pre-norm-justified EQM there is only a blurred-out wave function with nothing resembling observers in it.
    I will grant you two things:
    (1) There is some intuition-mongering there, but it's not about Everett; it's about what I can kludge to ordinary classical theories. But it did certainly always seem that this was the case before Everett came along.
    (2) None of this affects your simpler argument that the norm according to which small off-diagonal elements can be ignored is "natural." It only affects the "there's nothing Everett-specific about the problem" argument.

