Posts Tagged ‘Probability’


Vagueness and uncertainty

June 17, 2009

My BPhil thesis is finally finished so I thought I’d post it here for anyone who’s interested.


Help! My credences are unmeasurable!

September 29, 2008

This is a brief follow-up to the puzzle I posted a few days ago, and to Kenny’s very insightful post (and its comments), where he answers a lot of the pressing questions about the probability and measurability of the various events involved.

What I want to do here is just note a few probabilistic principles that get violated when you have unmeasurable credences (mostly a summary of what Kenny showed in the comments), and then say a few words about the use of the axiom of choice.

Reflection. Bas van Fraassen’s reflection principle states, informally, that if you are certain that your future credence in p will be x, then your current credence in p should be x (ignoring situations where you’re certain you’ll have a cognitive mishap, and problems to do with self-locating propositions.) If pn says “I will guess the nth coin toss from the end correctly”, then Kenny shows, assuming translation invariance (that Cr(p)=Cr(q) if p can be gotten from q by uniformly flipping the values of tosses indexed by a fixed set of naturals for each sequence in q), that once we have chosen a strategy, but before the coins are flipped, there will be an n such that Cr(pn) is unmeasurable (so fix such an n from now on.) However, given reasonable assumptions, no matter how the coins land before n, once you have learned that the coins have landed in such and such a way, Cr(pn)=1/2. Thus you may be certain that you will have credence 1/2 in pn even though your credence in pn is currently unmeasurable.

Conglomerability. This says that if you have some set of propositions, S, which are pairwise incompatible but jointly exhaust the space, then if your credence in p conditional on each element of S is in an interval [a, b], your unconditional credence in p should be in that interval. Kenny points out that conglomerability, as stated, is violated here too. The unconditional probability of pn is unmeasurable, but the conditional probability of pn on each possible outcome of the sequence up to n is 1/2. (In this case, it is perhaps best to think of the conditional credence as what your credence would be after you have learned the outcome of the sequence up to n.) You can generate similar puzzles in more familiar settings. For example, what should your credence be that a dart thrown at the real line will hit the Vitali set? Presumably it should be unmeasurable. However, conditional on each of the propositions \mathbb{Q}+\alpha, \alpha \in \mathbb{R}, which partition the reals, the probability should be zero – the probability of hitting exactly one point from countably many.
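Stated a little more formally (the notation here is mine), the principle being violated is:

```latex
% Conglomerability: S is a partition of the space into pairwise
% incompatible, jointly exhaustive propositions.
\[
  \text{If } \mathrm{Cr}(p \mid s) \in [a, b] \text{ for every } s \in S,
  \quad \text{then} \quad \mathrm{Cr}(p) \in [a, b].
\]
% In the puzzle: take S to be the possible outcomes of the sequence up
% to n. Then Cr(p_n \mid s) = 1/2 for every s in S, yet Cr(p_n) is
% unmeasurable rather than 1/2.
```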

The Principal Principle. This states, informally, that if you’re certain that the objective chance of p is x, then you should set your credence in p to x (provided you don’t have any ‘inadmissible’ evidence concerning p.) Intuitively, the chances of simple physical scenarios like pn shouldn’t be unmeasurable. This turns out to be not so obvious. It is first worth noting that the argument that your credence in pn is unmeasurable doesn’t apply to the chance of pn, because there are physically possible worlds that are doxastically impossible for you (i.e. worlds where you don’t follow the chosen strategy at guess n.) Secondly, although the chance of a proposition can change over time – so it could technically be unmeasurable before any coin tosses, but 1/2 just before the nth coin toss – the way that chances evolve is governed by the physics of the situation: the Schrödinger equation, or what have you. In the example we described we said nothing about the physics, but even so, it does seem like we can consistently stipulate that the chance of pn remains constant at 1/2. In such a scenario we would have a violation of the Principal Principle – before the tosses you can be certain that the chance of pn is 1/2, but your credence in pn is unmeasurable. (Of course, one could just take this to mean you can’t really be certain you’re going to follow a given strategy in a chancy universe – some things are beyond your control.)

Anyway, after telling some people this puzzle, and the related hats puzzle, a lot of people seemed to think that it was the axiom of choice that’s at fault. To evaluate that claim requires a lot of care, I think.

Usually, to say the Axiom of Choice is false is to say that there are sets which cannot be well-ordered, or something equivalent. And presumably this depends on which structure accurately fits our talk of sethood and membership, whose extension is partially determined by the linguistic practices of set theorists (much like ‘arthritis’ and ‘beech’, the extension of ‘membership’ cannot be primarily determined by the usage of the ordinary man on the street.) After all, there are many structures that satisfy even the relatively sophisticated axioms of first order ZF, only some of which satisfy the axiom of choice.

If it is this question that is being asked, then the answer is almost certainly: yes, the axiom of choice is true. The structure with which set theorists, and more generally mathematicians, are concerned is one in which choice is true. (It’d be interesting to do a survey, but I think it is common practice in mathematics not to even mention that you’ve used choice in a proof. Note that it is a different question whether mathematicians think the axiom of choice is true – I’ve often found, especially when they realise they’re talking to a “philosophy” student, that they’ll suddenly become formalists.)

But I find it very hard to see how this answer has *any* bearing on the puzzle here. What structure best fits mathematical practice seems to have no implications whatsoever for whether it is possible for an idealised agent to adopt a certain strategy. This has rather to do with the nature of possibility, not sets. What possible scenarios are concretely realisable? For example, can there be a concretely realised agent whose mental state encodes the choice function on the relevant partition of sequences? (Where a choice function here needn’t be a set, but rather, quite literally, a physical arrangement of concrete objects.) Or another example: imagine a world with some number of epochs. In each epoch there is some number of people – all of them wearing green shirts. Is it possible that exactly one person in each epoch wears a red shirt instead? Surely the answer is yes: whether any person wears a red shirt is logically independent of whether the other people in the epoch wear a red shirt. A similar possibility can be guaranteed by Lewis’s principle of recombination – it is possible to arbitrarily delete bits of worlds, so it should be possible that exactly one of these people exists in each epoch. Or, suppose you have two collections of objects, A and B. Is it possible to physically arrange these objects into pairs such that either every A-thing, or every B-thing, is in one of the pairs? Provided there are possible worlds large enough to contain big sets, it seems the answer again is yes. However, all of these modal claims correspond to some kind of choice principle.

Perhaps you’ll disagree about whether all of these scenarios are metaphysically possible. For example, can there be spacetimes large enough to contain all these objects? I think there is a natural class of spacetimes that can contain arbitrarily many objects – those constructed from ‘long lines’ (if \alpha is an ordinal, a long line is \alpha \times [0, 1) under the lexicographic ordering, which behaves much like the positive reals, and can be used to construct large equivalents of \mathbb{R}^4.) Another route of justification might be the principle that if a proposition is mathematically consistent, in that it is true in some mathematical structure, then that structure should have a metaphysically possible isomorph. Since Choice is certainly regarded as mathematically consistent, if not true, one might have thought that the modal principles needed to get the puzzle off the ground should hold.


Guessing the result of infinitely many coin tosses

September 22, 2008

What is the probability that an infinite number of coin tosses all land heads up? In a relatively recent Analysis paper, Tim Williamson argues, convincingly I think, that the probability must be 0. What I’m going to say here is to do with a related puzzle, and may shed some light on this question, although I haven’t fully absorbed the consequences. Either way, it strikes me as very weird, so any comments would be welcome.

The puzzle I’m concerned with involves guessing the results of infinitely many coin tosses. The setup is as follows:

For each n > 0, at \frac{1}{n} hours past 12pm the following is going to happen: aware of the time, you are going to guess either heads or tails, and then I am going to flip a coin and show you the result so you can see if you are right or wrong. This process may have to be done at different speeds to fit it all in to the hour between 12pm and 1pm.

[Note how this differs from the Williamson sequence of coin tosses in that it is a backwards supertask.] The question I’m interested in is: how well can you do at this game? For example, can you, with absolute certainty, guess every result correctly? Intuitively, the answer is ‘no’, and unsurprisingly the answer to this question is indeed ‘no’. Can you adopt a strategy for guessing such that you are guaranteed to get at least one guess right? Intuitively the answer is still ‘no’ – no matter how unlikely, it is still possible that you always guess the wrong answer, and it is hard to see how adopting a certain way of guessing will get you out of this if you are extremely unlucky. Despite the intuition, however, the answer to this second question is in fact ‘yes’! There is a way of guessing such that you are guaranteed to guess right at least once. If that doesn’t strike you as weird yet, think about it a bit more before reading on.

Things get even weirder. It turns out that there’s a way of guessing such that, following it, you are completely guaranteed, no matter how the coins land, to guess all but finitely many of the tosses correctly. That is, if you follow this strategy, you are guaranteed to make only finitely many mistakes. Among other things, this means that at some point after 12pm you won’t have made any mistakes at all! You would have guessed an infinite sequence of coin tosses correctly. So it follows that there is a way of guessing such that it is guaranteed that, if you follow it, you will guess the result of a fair coin toss correctly infinitely many times in succession.

The construction of the strategy is actually relatively simple (and it closely resembles the solution to this infinite hat problem.) Firstly, we divide the space of all possible complete sequences of heads and tails into equivalence classes, as follows. By a possible total sequence of heads and tails I mean a list describing the result of each coin toss at each time 1/n past 12. The sequences are divided into equivalence classes according to whether they agree at all but finitely many places: let a ~ b iff a and b disagree about how the coin lands at at most finitely many places. Now, with the help of the axiom of choice, we can pick a representative sequence from each equivalence class – so we have a choice function that associates each equivalence class with an element it contains.

The strategy you should adopt runs as follows. At 1/n hrs past 12 you should be able to work out exactly which equivalence class the completed sequence of heads and tails that will eventually unfold is in. You have been told the results of all the previous tosses, and you know there are only finitely many tosses left to go, so you know the eventual completed sequence can only differ from what you already know about it at finitely many places. Given that you know which equivalence class you’re in, you just guess as if the representative of that equivalence class were correct about the current toss. So at 1/n hrs past 12 you just look at how the representative sequence says the coin will land and guess accordingly.

Now, since the representative sequence and the actual sequence of heads and tails that occurs are in the same equivalence class, they must only differ at finitely many places. So, if you guessed according to the representative sequence, you have only made finitely many mistakes.
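The ‘finitely many mistakes’ property can be illustrated in a toy model (the code and all names in it are mine, not part of the original puzzle). The genuine construction needs the axiom of choice to pick a representative for every equivalence class at once; here I cheat and consider only sequences in the equivalence class of the all-heads sequence, whose representative the choice function may as well pick to be the all-heads sequence itself:

```python
# Toy model of the guessing strategy, restricted to the single
# equivalence class of sequences that are heads at all but finitely
# many places. For this one class the choice function can simply
# pick the all-heads sequence as representative -- no choice needed.

def representative(_known_so_far):
    """What the class representative predicts: heads everywhere."""
    return "H"

def play(actual):
    """Guess each toss according to the representative sequence.

    `actual` lists the tosses in the order they are guessed; in the
    backwards supertask the guesser already knows every earlier toss,
    which is what pins down the equivalence class.
    """
    mistakes = 0
    for n, toss in enumerate(actual):
        guess = representative(actual[:n])  # consult the representative
        if guess != toss:
            mistakes += 1
    return mistakes

# A run with three tails scattered through an otherwise-heads sequence:
sequence = ["H"] * 5 + ["T"] + ["H"] * 10 + ["T", "T"] + ["H"] * 100
print(play(sequence))  # the strategy makes exactly 3 mistakes
```

Every sequence in this class has only finitely many tails, and the strategy’s mistakes are exactly those tails – so however long the run, the number of mistakes stays finite.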

Now this raises some further puzzles. For example, suppose that it’s half past twelve, and you know the representative sequence predicts that the next toss will be heads. What should your credence that it will land heads be? On the one hand it should be 1/2 since you know it’s a fair coin. But on the other hand, you know that the chance that this is one of the few times in the hour that you guess incorrectly is very small. In fact, in this scenario it is “infinitely” smaller in comparison, so that your credence in heads should be 1. So it seems this kind of reasoning violates the Principal Principle.


Uncertainty in the Many Worlds Theory

September 2, 2008

I’ve been thinking a bit more about probability in the Everettian picture, and I’m tentatively beginning to settle on a view. Obviously, there is a lot of literature on this topic, and, as always, I have read very little of it.

There are roughly two problems. The incoherence problem is: how do we make sense of probability at all in the Everettian picture? Firstly, there is no uncertainty, because we know all there is to know about the physical state of the world – roughly, everything happens in some branch. Secondly, how can there be probability if there is no uncertainty? The second problem is the quantitative problem: how do we get the probabilities of a branch occurring to accord with the Born rule? A lot of progress has been made on the second problem (e.g. by Deutsch and Wallace), by factoring probabilities out of the expected utility equation of decision theory, but work is still needed to answer the incoherence problem (after all, decision theory only tells us how to act when we are uncertain about the future.)

There are several ways to respond to the incoherence problem. We can deny the connection between uncertainty and probability (Greaves), or we can try and make sense of subjective uncertainty even when we know the complete physical state of the world. Here are some options:

  1. Self-locating uncertainty: even if you know everything there is to know about the physical state of the universe, you can still be uncertain about where you are located in that world (as in Lewis’s example of the two omniscient gods.) (Saunders & Wallace.)
  2. Caring measure: there is no subjective uncertainty when you know the complete state of the world, however you can make sense of decision theory by interpreting probabilities as degrees to which you care about your future selves. (Greaves.)
  3. Branches as possible worlds: if the Everettian treats branches as possible worlds, they’d be in a similar situation to a modal realist. You know everything there is to know about the state of the possibility space, yet you can still be uncertain which world actually obtains. On this view precisely one of the branches is actual, and our uncertainty is about which one this is. (From the comments of my last post, it seems that Alastair holds the branches as worlds view – but I don’t know what he’d make of this interpretation of probability, or the idea of a single actual branch.)
  4. Uncertainty due to indeterminateness. This is the view I’m toying with at the moment. Here’s the analogy: we may know everything there is to know about the physical state of the world, including how many hairs Fred has and other relevant facts, but we may still be uncertain about whether Fred is bald. This is because it may be vague whether Fred is bald.

Self-locating uncertainty. The rough idea for option (1) is to treat people, and other objects, as linear (non-branching) four-dimensional worms. The branching structure of the universe ensures that, if people can survive branching at all, they overlap each other frequently, in such a way that, before a branching, there will be many colocated people who share a common temporal part until the branching. To see how self-locating uncertainty arises out of this, consider Lefty and Righty – two colocated people who will shortly split along two different branches. Since Lefty is in an epistemically indistinguishable situation from Righty, he should be uncertain as to which person he is, even though he knows every de dicto fact about the world. These cases are familiar. Take Perry’s example of two people lost in a library. They both have a map with a cross marking where each person is, and they know all the physical facts about the library. But they may still be uncertain as to which cross represents them (provided they are both in rooms that are indiscernible from each other from the inside.)

This certainly provides a solution to the incoherence problem, but I can’t see how it will extend to an answer to the quantitative problem. My worry is based on a principle due to Adam Elga: you should assign equal credence to subjectively indistinguishable predicaments within the same world (see my earlier post.) The temporal slices of Lefty and Righty at a time just before the branch are identical, so they must be in the same (narrow) mental states, and thus must represent subjectively indistinguishable predicaments within the same world. So according to Elga’s principle, Lefty must be 50% sure he’s Lefty and 50% sure he’s Righty, and similarly for Righty. After all, Lefty and Righty are receiving exactly the same evidence – even if God told Lefty he was Lefty, he should remain uncertain, because he knows that Righty would have received exactly the same evidence, and it could easily have been him that is wrong. In essence, the problem with self-locating uncertainty is that Lefty should proportion his credences not to the Born rule but to the principle of indifference – for that is the principle according to which you should proportion your self-locating beliefs. (Note also that the Born rule and the principle of indifference appear to be incompatible at first glance.)

Caring measure. On this view there is no subjective uncertainty to be made sense of. Rather than many colocated worms, you can just have one person stage with multiple temporal counterparts. However, one can still make sense of probability in terms of the degrees to which you care about each of your branches. Expected utility is really just the sum of the utilities of your branches, weighted by how much you care about each of those branches. Unfortunately, this requires taking it as a primitive principle of rationality that you set your caring degrees according to the Born rule.

This aside, my main problem with this approach is that I can’t see how it gets us the statistical predictions that quantum mechanics makes. The relation between frequencies, chances and credences is clear to me, but I can’t see how the caring measure will explain the statistical data. (You might think there is also a problem with indifference, because you should care just as much about all your branches – I’m not so convinced by this version of the principle though (anyone seen ‘The Prestige‘?))

Branches as worlds. Although it initially looks like probability here will be just like probability for the modal realist, it is not so simple. For the modal realist there is exactly one actual world, and uncertainty is just uncertainty about which world is actual, even though no-one is uncertain about what the whole possibility space looks like. However, the cases are not analogous – for the modal realist, the actual world is specified indexically, as that maximally connected region of space-time we are a part of. For the Everettian, no such specification is possible, since all the worlds are connected, and we overlap multiple worlds. The only way would be to take being the actual branch as metaphysically primitive – which does not sound attractive to me at all.

Uncertainty due to indeterminateness. Like the self-locating belief proposal, this proposal tries to make sense of uncertainty even when the agent has a complete physical description of the world at hand. Here’s the analogy: I may have a complete description of the world, down to the finest details – including the number of hairs on Fred’s head and the way we use English – and still be uncertain as to whether Fred is bald, if Fred is a borderline case of being bald.

Will this analogy carry over to talk about the future in EQM? Why are sentences about the future indeterminate in branching time, rather than always true (or always false, or whatever)? Here’s why. Our everyday time talk has the temporal logic of linear time – for example, we find ‘tomorrow p and tomorrow ~p’ inconsistent, and so on. (This might be because our histories are always linear?) Thus, supposing our tense talk gets given a Kripke-frame-type interpretation, this frame must be a linear order. However, there are many different (maximal) linearly ordered sets of times for our temporal talk to latch on to – each branch will do. (Note: I’m not assuming the quantum state is fundamentally cut up at the world joints; nonetheless, these world things make better interpretations than the gruesome non-worlds.) Since there are many candidate linear orders to make our tense talk true, we can supervaluate over them, and so keep our ordinary temporal logic without having to select a special branch. On this supervaluationist approach, many sentences will come out indeterminate – which allows us to assign them non-trivial probabilities, even when we know that at the metaphysical level (and in the metalanguage for English) all the possibilities are actualised. But this is in just the same sense as we know that some admissible interpretations of English make Fred bald, and others don’t.
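The supervaluationist idea can be sketched with a toy branching-time model (the representation and all names here are mine): a sentence is supertrue if true on every candidate linear history, superfalse if true on none, and indeterminate otherwise.

```python
# A tiny branching-time model: nodes are moments, edges point to
# possible next moments, and each maximal path through the tree is
# one candidate "linear history" for tense talk to latch on to.

def branches(tree, node):
    """All maximal paths (candidate linear histories) from `node`."""
    children = tree.get(node, [])
    if not children:
        return [[node]]
    return [[node] + rest for c in children for rest in branches(tree, c)]

def supervaluate(tree, root, holds):
    """Classify a sentence given by `holds`, a predicate on histories."""
    verdicts = {holds(b) for b in branches(tree, root)}
    if verdicts == {True}:
        return "true"         # true on every admissible history
    if verdicts == {False}:
        return "false"        # false on every admissible history
    return "indeterminate"    # true on some histories, false on others

# A single branching after the moment "now": a heads branch and a tails branch.
tree = {"now": ["heads", "tails"]}

# "Tomorrow the coin lands heads": true on a history iff it passes
# through the heads moment.
tomorrow_heads = lambda history: "heads" in history

print(supervaluate(tree, "now", tomorrow_heads))         # indeterminate
print(supervaluate(tree, "now", lambda h: len(h) == 2))  # true: every history has two moments
```

The non-trivial probabilities then attach to the indeterminate sentences: ‘tomorrow heads’ comes out neither true nor false on the supervaluation, even though every admissible history settles it one way or the other.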

Of course, I haven’t said anything about the quantitative problem. It’s not clear that you can just lift the decision-theoretic answer and combine it with any old answer to the incoherence problem. This is for the same reason the self-locating uncertainty proposal failed: there may be other principles governing the evolution of credences in self-locating propositions (or credences in indeterminate propositions, or whatever your answer to the incoherence problem invokes) that override the probabilities we need for quantum mechanics.


Everett and Elga’s Principle of Indifference

August 20, 2008

I’m a bit out of my depth on this one, knowing nothing about physics, so if this is all nonsense hopefully someone out there will set me straight. This is a blog after all!

I’ve been thinking about Adam Elga’s restricted principle of indifference (see here) recently. The principle plays a central role in his argument for thirding, in the Sleeping Beauty experiment, and seems to be plausible on independent grounds.

To state the principle we need to assume that the objects of our credential attitudes are sets of centred worlds, or something of this sort. The basic idea is that we can be uncertain about our place in the world, just as we can be uncertain about the way the world is (as demonstrated by Lewis’s twin gods thought experiment.) A centred world is just a world/time/individual triple, where the time exists at the world, and the individual exists at that time at that world. As Elga puts it, a centred world is a maximally specific predicament for a person to be in. You’re in the predicament iff you’re the individual, at that time at that world.

Say that two centred worlds are similar iff (1) their world coordinates are the same, and (2) they are subjectively indistinguishable, in the sense that the individual coordinates have the same beliefs, are having the same experiences, have the same memories, et cetera. The principle of indifference states that you should divide your credences equally between similar centred worlds. In particular, if you know you are in one of two similar centred worlds, then you should assign each a credence of a half. Even if you have strong evidence that you’re in one of the scenarios and not the other, you know your counterpart will have received equally strong evidence, and that at least one of you must be mistaken. (For a proper defence of the principle, I recommend you read the paper, if you haven’t already. It’s a good read.)

Now this is where I become slightly less certain, but it seems to me that if the Everett ‘many worlds’ interpretation of quantum mechanics is correct, and assuming the Principal Principle, we’re going to have to admit exceptions to this principle (and they’re going to be widespread.) To put it more carefully, if the Everett interpretation can assign events chances which accord with the Born rule, then there will be exceptions to the Principle of Indifference. (Perhaps the Everettian can just stipulate that chances go by the Principle of Indifference, and not the Born rule, but then it is hard to see how the Everettian picture is being confirmed by past observation. For example, it needs to explain why it is likely that particles in the double slit experiment make that wavy pattern on the wall over time.)

So here is the counterexample. Depending on the outcome of a quantum measurement on Sunday you will be put to sleep and moved into one of two rooms, room A or room B, and awoken on Monday. From the inside A and B look exactly the same. Now, we may assume that the measurement creates two branches, and that we can ignore any subsequent branching as irrelevant to the probabilities involved. Lastly, and most importantly, we assume that the branches are not equally likely. That is, that one outcome of the experiment has a higher objective chance than the other, and that you know what this chance is. To fix ideas, you know that the chance that you end up in room A is 1/3, and that the chance you end up in room B is 2/3 (I think that it is a coherent quantum mechanical scenario that exactly two branches can be created of unequal chances.)

Here’s the problem. When you wake up on Monday, you should have credence 1/3 that you’re in room A, since you know that it’s less likely that the quantum experiment resulted in your being moved to room A. This is just an application of the Principal Principle: you know the chance that you’re in room A is 1/3, so you should set your credence as such. However, both your worms wake up on Monday in a subjectively indistinguishable state. You both remember being put to sleep, are both seeing indiscernible rooms, and both know the chance that you end up in room A, and room B, respectively. What is more, these scenarios are both part of the same possible world. They are both concrete, so they’re both actual, and there’s only one actual world. Or, if you’re a modal realist, they’re both spatiotemporally connected, and they aren’t causally isolated. Thus the individuals in each branch are world-mates by anybody’s standards. So we have two similar centred worlds – they agree on world coordinate, and are subjectively indistinguishable – so both your successors in each branch should assign them equal credence, by the principle of indifference. This means you should have credence 1/2 that you are in room A, which contradicts our previous credence of 1/3.

I’m not yet sure what to make of this. Is it a problem for the Everett interpretation (after all, we know they have difficulties with probability anyway), or should we instead reject the probabilistic principles we appealed to (the principle of indifference, and the principal principle)?