## Uncertainty in the Many Worlds Theory

September 2, 2008

I’ve been thinking a bit more about probability in the Everettian picture, and I’m tentatively beginning to settle on a view. Obviously, there is a lot of literature on this topic, and, as always, I have read very little of it.

There are roughly two problems. The *incoherence problem* is: how do we make sense of probability at all in the Everettian picture? Firstly, there is no uncertainty, because we know all there is to know about the physical state of the world – roughly, everything happens in some branch. Secondly, how can there be probability if there is no uncertainty? The second problem is the *quantitative problem*: how do we get the probabilities of a branch happening to accord with the Born rule? A lot of progress has been made on the second problem (e.g. Deutsch, Wallace) by factoring probabilities out of the expected utility equation from decision theory, but work is still needed to answer the incoherence problem (after all, decision theory only tells us how to act when we are *uncertain* about the future.)

There are several ways to respond to the incoherence problem. We can deny the connection between uncertainty and probability (Greaves), or we can try to make sense of subjective uncertainty even when we know the complete physical state of the world. Here are some options:

- **Self-locating uncertainty**: even if you know everything there is to know about the physical state of the universe, you can still be uncertain about where you are located in that world (for example, in Lewis’s example of the two omniscient gods.) (Saunders & Wallace.)
- **Caring measure**: there is no subjective uncertainty when you know the complete state of the world; however, you can make sense of decision theory by interpreting probabilities as degrees to which you care about your future selves. (Greaves.)
- **Branches as possible worlds**: if the Everettian treats branches as possible worlds, they’d be in a similar situation to a modal realist. You know everything there is to know about the state of the possibility space, yet you can still be uncertain which world actually obtains. On this view precisely one of the branches is actual, and our uncertainty is about which one this is. (From the comments of my last post, it seems that Alastair holds the branches-as-worlds view – but I don’t know what he’d make of this interpretation of probability, or the idea of a single actual branch.)
- **Uncertainty due to indeterminateness**: this is the view I’m toying with at the moment. Here’s the analogy: we may know everything there is to know about the physical state of the world, including how many hairs Fred has and other relevant facts, but we may still be uncertain about whether Fred is bald. This is because it may be *vague* whether Fred is bald.

**Self-locating uncertainty.** The rough idea for option (1) is to treat people, and other objects, as linear (non-branching) four-dimensional worms. The branching structure of the universe ensures that, if people can survive branching at all, they overlap each other frequently, in such a way that, before a branching, there will be many colocated people who share a common temporal part until the branching. To see how self-locating uncertainty arises out of this, consider Lefty and Righty – two colocated people who will shortly split along two different branches. Since Lefty is in an epistemically indistinguishable situation from Righty, he should be uncertain as to *which* person he is, even though he knows every de dicto fact about the world. These cases are familiar. Take Perry’s example of two people lost in a library. They both have a map with a cross where each person is, and they know all the physical facts about the library. But they may still be uncertain as to which cross represents *them* (provided they are both in rooms that are indiscernible from each other from the inside.)

This certainly provides a solution to the incoherence problem, but I can’t see how it will extend to an answer to the quantitative problem. My worry is based on a principle due to Adam Elga: you should assign equal credence to subjectively indistinguishable predicaments within the same world (see my earlier post.) The temporal slices of Lefty and Righty at a time just before the branch are identical, so they must be in the same (narrow) mental states, and thus must represent subjectively indistinguishable predicaments within the same world. So according to Elga’s principle, Lefty must be 50% sure he’s Lefty, and 50% sure he’s Righty, and similarly for Righty. After all, Lefty and Righty are receiving exactly the same evidence – even if God told Lefty he was Lefty, he should remain uncertain, because he knows that Righty would have received exactly the same evidence, and it could easily have been him that is wrong. In essence, the problem with self-locating uncertainty is that Lefty should proportion his credences not to the Born rule but to the principle of indifference – for that is the principle according to which you should proportion your self-locating beliefs. (Note also that the Born rule and the principle of indifference appear, at first glance, to be incompatible.)
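To see the clash in miniature, here’s a toy numerical sketch (the two-branch state and its amplitudes are invented purely for illustration): the Born rule weights the branches by squared amplitude, while indifference over the colocated persons gives a half each.

```python
import math

# Toy two-branch superposition a|left> + b|right>; the amplitudes are
# made up for illustration, with |a|^2 + |b|^2 = 1.
a, b = math.sqrt(0.9), math.sqrt(0.1)

# Born-rule credences: weight each branch by its squared amplitude modulus.
born = {"Lefty": a**2, "Righty": b**2}

# Elga-style indifference: equal credence in each of the two subjectively
# indistinguishable predicaments, regardless of amplitude.
indifference = {"Lefty": 0.5, "Righty": 0.5}

print(born["Lefty"], indifference["Lefty"])
# Whenever |a|^2 != 1/2 the two rules disagree, so Lefty cannot follow both.
print(born["Lefty"] != indifference["Lefty"])  # True
```

Any unequal superposition makes the disagreement explicit.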

**Caring measure.** On this view there is no subjective uncertainty to be made sense of. Rather than many colocated worms, you can just have one person stage with multiple temporal counterparts. However, one can still make sense of probability in terms of the degrees to which you care about each of your branches. Expected utility is really just the sum of the utilities of your branches, weighted by how much you care about each of those branches. Unfortunately this requires taking it as a primitive principle of rationality that your caring degrees be set according to the Born rule.

This aside, my main problem with this approach is that I can’t see how it gets us the statistical predictions that quantum mechanics makes. The relation between frequencies, chances and credences is clear to me, but I can’t see how the caring measure will explain the statistical data. (You might think there is also a problem with indifference, because you should care just as much about all your branches – I’m not so convinced by this version of the principle though (anyone seen ‘The Prestige‘?))

**Branches as worlds.** Although this initially looks like probability will be just like probability for the modal realist, it is not so simple. For the modal realist there is exactly one actual world, and uncertainty is just uncertainty about which world is actual, even though no-one is uncertain about what the *whole* possibility space looks like. However, the Everettian case is not analogous – for the modal realist the actual world is specified indexically, as the maximally connected region of space-time we are a part of. For the Everettian, no such specification is possible, since all the worlds are connected and we overlap multiple worlds. The only way would be to take being the actual branch as metaphysically primitive – which does not sound attractive to me at all.

**Uncertainty due to indeterminateness.** Like the self-locating belief proposal, this proposal tries to make sense of uncertainty even when the agent has a complete physical description of the world at hand. Here’s the analogy: I may have a complete description of the world, down to the finest details, including the number of hairs on Fred’s head and the way we use English, and still be uncertain as to whether Fred is bald, if Fred is a borderline case of being bald.

Will this analogy carry over to talk about the future in EQM? Why are sentences about the future indeterminate in branching time, rather than always true (or always false, or whatever)? Here’s why. Our everyday time talk has a temporal logic of linear time – for example, we find ‘tomorrow p and tomorrow ~p’ inconsistent, and so on. (This might be because our histories are always linear?) Thus, supposing our tense talk gets given a Kripke-frame-type interpretation, this frame must be a linear order. However, there are many different (maximal) linearly ordered sets of times for our temporal talk to latch on to – each branch will do. (Note: I’m not assuming the quantum state is fundamentally cut up at the world joints; nonetheless, these world things make better interpretations than the gruesome non-worlds.) Since there are many candidate linear orders to make our tense talk true, we can supervaluate over them, and keep our ordinary temporal logic without having to select a special branch. On this supervaluationist approach, many sentences will come out indeterminate – which allows us to assign them non-trivial probabilities, even when we know that at the metaphysical level (and in the meta-language for English) all the possibilities are actualised. But this is in just the same sense as we know that some admissible interpretations of English make Fred bald, and others don’t.
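Here’s a minimal sketch of that supervaluational recipe (the two-branch toy model is invented): each branch supplies one admissible linear interpretation of our tense talk, and a sentence is supertrue, superfalse or indeterminate according to whether it is true on all, none, or only some branches.

```python
# Each admissible precisification of our tense talk picks one branch,
# i.e. one assignment of truth values to "tomorrow p" claims.
# Toy model: after a measurement, p obtains on branch 1 but not on branch 2.
branches = [
    {"p": True},    # branch 1: tomorrow, p
    {"p": False},   # branch 2: tomorrow, not-p
]

def tomorrow(atom):
    """'Tomorrow atom' as a function from branches to truth values."""
    return lambda branch: branch[atom]

def neg(s):
    return lambda branch: not s(branch)

def conj(s, t):
    return lambda branch: s(branch) and t(branch)

def disj(s, t):
    return lambda branch: s(branch) or t(branch)

def status(sentence):
    """Supervaluate: quantify over all admissible branch-interpretations."""
    values = [sentence(b) for b in branches]
    if all(values):
        return "supertrue"
    if not any(values):
        return "superfalse"
    return "indeterminate"

p = tomorrow("p")
print(status(p))                 # indeterminate: true on one branch only
print(status(disj(p, neg(p))))   # supertrue: holds on every linear branch
print(status(conj(p, neg(p))))   # superfalse: linear temporal logic survives
```

So ‘tomorrow p and tomorrow ~p’ stays inconsistent, as our ordinary temporal logic demands, while the bare future contingent comes out indeterminate.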

Of course, I haven’t said anything about the quantitative problem. It’s not clear that you can just lift the decision-theoretic answer and combine it with any old answer to the incoherence problem. This is for the same reason the self-locating uncertainty proposal failed: there may be other principles that govern the evolution of credences in self-locating propositions/credences in indeterminate propositions/whatever your answer to the incoherence problem is, that override the probabilities we need for quantum mechanics.

Posted in Formal epistemology, Metaphysics | Tagged David Deutsch, Decision theory, Everett, Indeterminacy, Many Worlds Theory, Probability, Quantum mechanics

Gosh. Lots and lots of stuff in here, it’ll take me a while to absorb it.

For now, a clarification: my version of view 3 does involve the indexical criterion of actuality. You say that the Lewisian criterion of spatio-temporal connectedness is unavailable to the Everettian – this may or may not be true, depending on how we treat spacetime in Everett. I talk about this a bit in this post: http://mrogblog.wordpress.com/2008/07/30/spacetime-in-everettian-qm/

But even if the spatiotemporal connectedness criterion for unifying worlds is unavailable, I don’t see why you think no other criterion is possible. For example, we might try a causal criterion, taking causality to be an emergent branch-bound relation. I agree that taking actuality as metaphysically primitive is unattractive and goes against the spirit of EQM. But that’s not the plan.

By the way, I’ve been pushing this ‘Everettian modal realism’ for years now! See http://philsci-archive.pitt.edu/archive/00002635/ for an old presentation of the view – though be warned, it’s embarrassingly glib in places.😦

Another thing: Greaves has the following paper which I think replies to your worries about confirmation on her view:

http://philsci-archive.pitt.edu/archive/00002953/

by Alastair September 2, 2008 at 7:39 pm

“On this supervaluationist approach, many sentences will come out indeterminate – which allows us to assign them non-trivial probabilities, even when we know that at the metaphysical level (and in the meta-language for English) all the possibilities are actualised.”

This doesn’t seem clearly true to me. There’s at least some reason to think that when we know p is indeterminate in a supervaluational way, we should have credence 0 in p (e.g. p and ~Dp are mutually inconsistent for the standard supervaluationist. If you have credence 1 in one member of an inconsistent pair, you should have credence 0 in the other. Thus the “rejectionist” conclusion that you should have credence 0 in p).

I don’t think this is knock-down—but I do think we can’t simply assume that accounts of indeterminacy are all going to be compatible with uncertainty in the way you sketch.

by Robbie September 3, 2008 at 9:06 am

Thanks for the comments!

Al,

I can see how, putting it all together, you can evade the problem from the indifference principle. But I just can’t help feeling that, even if they’re not world-mates, pre-measurement Lefty and Righty stand in the right kind of relation to warrant dividing credences equally between their predicaments. I’m having difficulty verbalising this though.

BTW, thanks for the reference to the Greaves paper. I had a feeling that something must have been said about that. I’ll have a look at your paper too when I get time (I’m reviewing stuff for the grad conference atm though.)

Robbie,

Thanks for the comment. I realise that there are lots of different ways of dealing with credences of indeterminate sentences – some of which, such as yours, rule out non-trivial probabilities for logical reasons alone. I was really starting with the intuition, which at least I have, that if you’re certain something is vague, then you should be uncertain whether it’s true.

If you’re a many valued theorist or a ‘truth=supertruth’ supervaluationist, then being certain it’s borderline is being certain that it’s neither true nor false. I find both of these views unacceptable for independent reasons.

For example, what you call ‘standard’ supervaluationism (which I assume involves at least: (1) truth=supertruth, (2) validity=preservation of supertruth, (3) ‘determinateness’ is an operator not a predicate) doesn’t seem to me to be a theory of semantic indeterminacy at all. For them, the semantic value of a sentence is perfectly determinate: it is a function from precisifications to truth values.

But just a few notes on your example. By indeterminate I don’t just mean ~Dp, but ~Dp ∧ ~D~p. So if you find p and ~Dp inconsistent (which I find hard, because their conjunction is consistent – there are models where it’s not superfalse), then you would equally find ~p and ~D~p inconsistent. So knowing that p is indeterminate means that cr(p)=cr(~p)=0, which no standard probability function can have.

Secondly, if the indeterminacy is about the future, as in our example, you quickly begin to violate temporal rationality principles like reflection (as if the probability axioms weren’t enough.) For example, you know that your credence in heads after the fair coin has been flipped, and before you’ve seen it, will be 1/2; yet before it is flipped you keep your credence in heads at 0.

Finally, it’s hard to see how your credences are going to be a guide to rational action on this view. For example, should I pay 40p for a pound return on a fair coin yet to be tossed? How should I respond to diachronic Dutch books? Etc. etc…
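For what it’s worth, here’s the arithmetic behind that 40p bet (stakes as above, a toy calculation): with the ordinary credence of 1/2 in heads the bet has positive expected value, while the rejectionist’s credence 0 makes it look like a sure loss.

```python
def expected_value(credence_heads, stake=0.40, payout=1.00):
    # Pay the stake up front; receive the payout if heads comes up.
    return credence_heads * payout - stake

# Ordinary credence 1/2 in the (indeterminate) future heads:
print(expected_value(0.5))   # positive (about 0.10): worth taking

# Rejectionist credence 0 in both heads and tails:
print(expected_value(0.0))   # -0.4: refuse, and refuse the tails bet too
```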

by Andrew September 3, 2008 at 2:55 pm

Hi Andrew,

On your notes: The probabilities are going to be non-classical—Dempster-Shafer theory gives one formal working-out that, at a first pass, fits nicely with the standard supervaluation route. I have to say I haven’t thought a great deal about updating constraints, but we do have reasonably familiar formal models of these non-classical credences to work with, at least synchronically.

On “inconsistency”–what I meant by that was what gets expressed by “p,~Dp|=”. On global characterizations of validity, that says that there’s no model where the premises are supertrue. That there are models where they’re not both superfalse shouldn’t come into it (or am I missing something?). Of course, you could (a) work with a different characterization of validity, (b) deny that validity in the sense in play here norms belief in the standard way. That’s why the argument for rejectionism isn’t knock down, even given (1-3).

I agree that it’s very far from clear how all this should fit in with credences as a guide to action. When I was first thinking about these issues, my reaction was to treat it all as a reductio of using supervaluationism to deal with indeterminate future contingents (or at least, of using supervaluationism while also hanging on to standard truth-logic and logic-belief connections). If that were right, it’d anyway be kind of interesting, since supervaluationism really is a fairly standard device (e.g. it’s what Thomason proposes to use in his famous 1970 article, and it’s still in play in the contemporary MacFarlane stuff, etc).

I’ve come to think there’s a little more wriggle room available than this, but not terribly much. FWIW, I’ve just posted a paper that goes through some of the issues:

http://www.personal.leeds.ac.uk/~phljrgw/wip/AristotelianIndeterminacyOpenFuture2.pdf

As I say, using this kind of rejectionist indeterminacy to deal with future contingents looks (to say the least) really problematic. But the moral of that might well be that we really shouldn’t think of future contingents as indeterminate at all. Lots of the most pressing problems (the rational-action connection for example) don’t seem to me to have analogues for other applications of indeterminacy.

Just on the more general issue. It really isn’t clear to me what theories of indeterminacy really fit with what the open-future-guy needs. I take it that in particular, we need to allow that we can have arbitrarily high confidence in p, when p is something we know to be a future contingent, and so indeterminate. At the limit, suppose there are infinitely many branches, including both p and ~p ones, but the ~p ones are measure 0 (cue your favourite example about possible but probability-zero outcomes). So actually, if you were right—if knowing that p is indeterminate meant we had to be uncertain about whether p—then the view that future contingents are knowably indeterminate looks in trouble too.

So it’s not just that we need a theory of indeterminacy that allows us to be uncertain about p when p is certainly indeterminate. It’s that we need to allow also for subjective certainty that p when p is certainly indeterminate. And that can start to sound really weird. There are some views that do allow for it—epistemicism is one, degree theory another (that’s the one I’d choose for the open future guy if you put a gun to my head). But I’m inclined to think that the options are pretty limited. But maybe I’m overly pessimistic!

by Robbie September 3, 2008 at 8:57 pm

Hi Robbie,

Thanks – that’s given me a lot to think about.

I’m not at all familiar with Dempster-Shafer theory. However, I think it *is* very important that credences figure as a guide to rational action – indeed, maybe we should even take this to be constitutive of credences. This might then give us some insight into how to think about credences of borderline sentences in a supervaluationist framework. If we know the agent’s preferences between the various consequences an action would have with respect to each precisification, then we can use this to get a credence function. (For simplicity I’m assuming, for the moment, that the agent knows everything about the state of the world. The only uncertainty is due to indeterminacy.)

Another way to do things in a supervaluationist setting is to have vague probabilities – sets of probability functions. Since, with respect to a precisification, we can make sense of an agent’s credence in a sentence the ordinary way (i.e. just in terms of his beliefs about the world), things generalise quite nicely. So although I don’t endorse this second route, there seem to be lots of options for the supervaluationist – I think it would be premature to take this as a *problem* for supervaluationism.

I like the point about certainty in indeterminate propositions. One thing to note, in the case of vagueness, is that if certainty implies determinacy, we can never be certain that something is indeterminate, because of higher-order vagueness.

But I don’t think (I hope) that I said you *have* to be uncertain in p when you know p is indeterminate. Indeed, I think both of the two options I gave above allow for certainty in indeterminate truths. For example, suppose I’m throwing a dart at the real interval [0,1] and p is “it won’t hit 0.5 or John is bald”. On each precisification, v, my credence in p-on-v is 1, so my vague credence is certainty in p (and, I think, you can work a similar thing for the decision-theoretic proposal.) But if it *does* hit 0.5, p is borderline.

Concerning the inconsistency thing, let me lay my cards on the table – I think the ordinary modal logic definition of validity is the correct one (i.e. preservation of truth-at-a-precisification), and that this is the consequence relation that norms the beliefs of agents who know the logic of indeterminacy. But even ‘global consequence’ is massively ambiguous. Here are some things it could mean:

1) For every model: if everything in Γ is supertrue, then φ is supertrue

2) For every model: if nothing in Γ is superfalse, then φ is not superfalse

3) For every model: if everything in Γ is supertrue, then φ is not superfalse

4) For every model: if nothing in Γ is superfalse, then φ is supertrue

5) For every model: if, for each precisification, v, everything in Γ is true-on-v, then φ is supertrue

6) For every model, for each precisification, v: if everything in Γ is true-on-v, then φ is true-on-v

Like I say, I favour 6) – but I think the choice between 1)-6) (and the many others I haven’t listed) is far from trivial. I think we’ve discussed this before, but there’s a natural sense in which p and ~Dp are consistent – if we reinterpret D as ~ and p as something true, we get a true sentence. It seems plausible to me that it is this Tarskian consequence relation that, in the widest sense, norms belief – after all, why should an ideally rational agent know about the logic of indeterminacy? (Analogy: the S4 axiom for ‘Fred believes that’ is part of the logic that norms the beliefs of the class of agents who know Fred has perfect introspective powers, but why should it norm the beliefs of arbitrary agents?)
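To make the contrast vivid, here’s a toy model-check (the encoding is mine, purely for illustration): the pair {p, ~Dp} comes out satisfiable under the local definition 6), but unsatisfiable under the global, supertruth-preserving reading.

```python
from itertools import product

# A model assigns p a truth value at each precisification.
# Formulas are tuples: ("p",), ("not", f), ("D", f).

def eval_at(formula, model, v):
    """Evaluate a formula at precisification index v of a model."""
    kind = formula[0]
    if kind == "p":
        return model[v]
    if kind == "not":
        return not eval_at(formula[1], model, v)
    if kind == "D":
        # Determinacy quantifies over all precisifications of the model.
        return all(eval_at(formula[1], model, u) for u in range(len(model)))

p = ("p",)
not_Dp = ("not", ("D", p))

# Toy space of models: 1 or 2 precisifications.
models = [m for n in (1, 2) for m in product([True, False], repeat=n)]

# Locally satisfiable: some model and precisification make both true.
locally_ok = any(
    eval_at(p, m, v) and eval_at(not_Dp, m, v)
    for m in models for v in range(len(m))
)

def supertrue(formula, m):
    return all(eval_at(formula, m, v) for v in range(len(m)))

# Globally satisfiable: some model makes both premises supertrue.
globally_ok = any(supertrue(p, m) and supertrue(not_Dp, m) for m in models)

print(locally_ok)    # True: p, ~Dp consistent by definition 6)
print(globally_ok)   # False: "p, ~Dp |=" holds on the global reading
```

The witness for local consistency is the model where p is true on one precisification and false on the other; there p and ~Dp are both true-on-v at the first precisification, but p can never be supertrue alongside ~Dp.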

by Andrew September 3, 2008 at 11:24 pm

I’m certainly a fan of using decision-theoretic stuff to get a fix on what actual credences are. And I like the methodology of figuring out this sort of stuff, and then using that to figure out what views of indeterminacy are compatible.

The belief relative-to-a-precisification thing strikes me as a nice way of articulating one way that people are apt to think about indeterminacy. And as you indicate, if you go for something like a local characterization of consequence, and don’t identify truth with supertruth, you’ve got what looks to me like an internally coherent package. (Whether it deserves the name “supervaluationism” is another matter—but it’s certainly in the ballpark of what people have been thinking about under that heading).

I don’t think that the fuzzy-credences model suits the open-future case at all though, since it’ll give us indeterminate credences in settings where what we want (by decision-theoretic criteria) are precise credences. E.g. in the case where measure 1 of the branches are p-branches, and one branch is a ~p-branch, our credence will end up being indeterminate between credence 1 and credence 0, whereas we want it to determinately be credence 1, I take it.

Another option that sometimes gets talked about is the idea of conforming your credence in p to the proportion of precisifications on which p is true. So if you know that p is true on 4/5 of the precisifications, but false on the others, you’ll know that p is indeterminate, but also, on this approach, have credence 4/5 in p. I think this is pretty much tantamount to a supervaluational degree theory, which as I mentioned before is what I’d be inclined to go for in this setting—degrees of determinacy (or truth) of p fixed by truth at a proportion of precisifications.
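The proportion-of-precisifications idea is easy to sketch (the five precisifications and the measure below are invented): credence in p is the fraction – or, with a measure, the weight – of admissible precisifications on which p is true, so a knowably indeterminate p can get credence 4/5, and in the measure-1 branch case even credence 1.

```python
def degree(sentence, precisifications):
    # Credence in a sentence = proportion of admissible precisifications
    # on which it comes out true.
    return sum(map(sentence, precisifications)) / len(precisifications)

def weighted_degree(sentence, weighted_precs):
    # With a measure over precisifications, weight rather than count.
    return sum(w for v, w in weighted_precs if sentence(v))

# Toy example: five admissible precisifications, p true on four of them.
precs = [1, 2, 3, 4, 5]
p = lambda v: v != 5

print(degree(p, precs))  # 0.8: p is knowably indeterminate, credence 4/5

# Measure-1 case: the lone ~p precisification gets measure 0, so we can be
# subjectively certain of p even though p is indeterminate.
print(weighted_degree(p, [(1, 0.5), (2, 0.5), (5, 0.0)]))  # 1.0
```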

What I feel very uncomfortable with is what one might call the “no constraint” view. This would go for local consequence (your 6), not identify truth=supertruth, and not give any degree-theoretic-style explanation connecting up credences and determinacy. My worries about it are connected to worries about taking indeterminacy to be primitive, which we’ve talked a bit about before. The sort of questions I’m inclined to ask are things like: if indeterminacy works like this, why is “it’s indeterminate” a relevant thing to answer to someone who asks whether p is the case? What p-relevant info does it convey? I’m kinda thinking that we need to say *something* about the p-relevance of indeterminacy judgements in order to actually have given an account of indeterminacy at all. But then we’re forced into more detailed accounts, of which rejectionism, the vague credences view, and degree theory are three illustrations.

Re all the possible alternative definitions of consequence. I see there are lots of things to try out (though you gotta admit that the one I chose is the most commonly invoked!). If we’re thinking of them as candidates for norms for belief, then some of them will look pretty crazy (and for general reasons, not for reasons tied to the particular case of the open future). (2) gives you something like (standard) subvaluational consequence, right? A and ~A might each be non-superfalse, but their conjunction is superfalse. So conjunction intro won’t be valid, as per the subvaluational result. If this logic normed belief, then we could be certain of two conjuncts, and also totally reject their conjunction. That seems to me adequate grounds for rejecting that proposal.

One thing that gives me confidence that supertruth preservation is not an arbitrary choice among these is that there are other routes from truth value gaps to rejectionism that don’t trade on choice of consequence relations, and the convergence impresses me. E.g. suppose you think that credences are good to the extent that they minimize inaccuracy wrt truth values (i.e. we should seek to minimize the difference between our credence in p and the truth value of p—understood as 1 if p is true, 0 otherwise). Then if p is indeterminate=not supertrue, we minimize inaccuracy by setting credence 0.

Apologies! I think I’ve kinda hijacked your interesting thread on QM stuff with my particular bugbears. Thanks for tolerating…

by Robbie September 4, 2008 at 1:22 am

One thing Saunders and Wallace would definitely say about your objection to the ‘self-locating uncertainty’ view is that the application of the principle of indifference rests on a false presupposition – that the number of branches resulting from a quantum interaction is a well-defined quantity. This presupposition fails as a consequence of using decoherence to solve the ‘preferred basis problem’.

There’s no determinate fact of the matter about whether there are two of Lefty/Righty, or three, or five million – how many branches emerge from an interaction is, very roughly speaking, interest-relative; it depends on the basis we use to decompose the state, which will depend on which features we’re interested in. There’s no ‘one true basis’, so no one true number of branches emerging from an interaction. So the assignment of probabilities by counting branches and applying indifference can’t even get off the ground. See p.9 of the Saunders/Wallace paper I linked in the previous post.

The apparent (non-epistemic) vagueness in branch number is a puzzling part of the Everett interpretation, but participants in the debate on probability (at least for the sake of argument) generally presuppose that it can be made sense of. If this is right, then the kind of probability measure that is suggested by a principle of indifference is unavailable to us.

On your supervaluationist suggestion, I worry about whether the indeterminacy which results from it will give the ‘genuine uncertainty’ which is needed to feed into the decision-theoretic justification of the Born rule. Greaves, at least, motivated her view by the intuitive thought that ‘uncertainty requires a (determinate) fact of the matter to be uncertain about’. I have to say that this thought appeals to me more than your intuition that if you know that p is indeterminate, you should be uncertain whether p. But I haven’t thought much about supervaluationism and credences, so I’ll leave that debate to you and Robbie and continue to watch with interest!

by Alastair September 4, 2008 at 10:55 am

Robbie,

Just a word of clarification. I actually intended the decision-theoretic proposal to be a method for *defining* our credences in vague/indeterminate propositions, rather than just a check to see whether a proposal works or not. By this I just mean that we start off with a preference ordering on various actions – where actions just tell us what consequences would happen on each precisification (i.e. a function from precisifications to consequences) – and work back to a credence function from that. So the view I was toying with was: local consequence + ~”supertruth=truth” + classical probability functions (defined along the lines above.)

I agree that the fuzzy credence view doesn’t work well with the indeterminate future – this is basically why I said I didn’t like that view (I was hoping for a unified view for vagueness and other kinds of indeterminacy – but if it turns out that the fuzzy credence framework works best for vagueness, I may have to give that up.) The degree supervaluationism sounds very interesting, but again, I can’t yet see how it retains the link with rational action, which I find very important.

Anyway – I’m glad you did hijack (I’m more interested in these issues than the QM stuff anyway :-p.)

Al,

Yes, I see the worry here. However, slight rotations of the basis result in changes to the mod squared amplitudes as well – so by those standards applications of the Born rule are illegitimate too.

But I think I can put the worry in a way where it doesn’t matter that the *number* of branches isn’t well defined. Suppose that it is determinately the case that two branches, b1 and b2, are made after some measurement, even if it’s vague which other branches are there. The principle of indifference says I should divide my credences equally between b1 and b2. Now that is incompatible with the Born rule if they are in a non-trivial superposition, whatever the common numerical value turns out to be – Cr(b1)=Cr(b2)=1/2, 1/3, 1/4, etc., depending on whether there are 2, 3, 4, etc. branches.

Your last point is definitely a legitimate worry. But let me put some pressure on your intuition. You say that once you know that p is indeterminate, then there can’t be any uncertainty about p – so Cr(p)=1 or Cr(p)=0. But then there are some difficult questions: should you accept a bet on p, or a bet on ~p? Presumably you should accept arbitrarily large bets on one or the other – but that doesn’t sound right to me. Even if you have a non-classical probability function (as Robbie suggested) there are still going to be hard questions like this to answer. So I find it really difficult to maintain that there is no uncertainty here.

by Andrew September 4, 2008 at 1:50 pm

Interesting point – it seems right that the indifference problem remains even after we take indeterminacy of branch number into account. So this is another reason to want to go with view 3) rather than view 1). I’ll put this to Simon and David and see what they say…

I don’t feel that the betting considerations are knockdown. In a case where we know that it is indeterminate whether p, and we’re asked to bet on whether p, the bet itself seems ill-defined. To put it crudely, the appropriate response would be to make an indeterminate bet! At least, we seem to need a positive account of what the indeterminacy consists in before it’s clear what the bet would amount to. But I’m finding it hard to get my head round this, so maybe I’m missing something obvious.

by Alastair September 4, 2008 at 6:52 pm