## Posts Tagged ‘Probability’

## Vagueness and uncertainty

June 17, 2009

My BPhil thesis is finally finished, so I thought I’d post it here for anyone who’s interested.

## Help! My credences are unmeasurable!

September 29, 2008

This is a brief follow-up to the puzzle I posted a few days ago, and to Kenny’s very insightful post (and the comments to it), where he answers many of the pressing questions about the probability and measurability of the various events involved.

What I want to do here is just note a few probabilistic principles that get violated when you have unmeasurable credences (mostly a summary of what Kenny showed in the comments), and then say a few words about the use of the axiom of choice.

**Reflection**. Bas van Fraassen’s reflection principle states, informally, that if you are certain that your future credence in p will be x, then your current credence in p should be x (setting aside situations where you’re certain you’ll have a cognitive mishap, and the problems to do with self-locating propositions). If p_{n} says “I will guess the nth coin toss from the end correctly”, then Kenny shows, assuming **translation invariance** (that Cr(p) = Cr(q) if p can be gotten from q by uniformly flipping the values of the tosses indexed by some fixed set of naturals in each sequence in q), that once we have chosen a strategy, but before the coins are flipped, there will be an n such that Cr(p_{n}) is unmeasurable (fix such an n from now on). However, given reasonable assumptions, no matter how the coins land before n, once you have learned that the coins have landed in such and such a way, Cr(p_{n}) = 1/2. Thus you may be certain that you *will* have credence 1/2 in p_{n} even though your credence in p_{n} is currently unmeasurable.
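Stated formally (my notation, not a quote from van Fraassen), the relevant instance of reflection is:

```latex
% Reflection: if at t_0 you are certain that your credence in p at the
% later time t_1 will be x, your credence at t_0 should already be x:
\mathrm{Cr}_{t_0}\!\left(p \mid \mathrm{Cr}_{t_1}(p) = x\right) = x
% The puzzle violates this: before the tosses you are certain that
% \mathrm{Cr}_{t_1}(p_n) = 1/2, yet \mathrm{Cr}_{t_0}(p_n) is unmeasurable.
```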

**Conglomerability**. This says that if you have some set of propositions, S, which are pairwise incompatible but jointly exhaust the space, then if your credence in p conditional on each element of S lies in an interval [a, b], your unconditional credence in p should lie in that interval too. Kenny points out that conglomerability, as stated, is violated here as well. The unconditional probability of p_{n} is unmeasurable, but the probability of p_{n} conditional on each possible outcome of the sequence up to n is 1/2. (In this case, it is perhaps best to think of the conditional credence as what your credence would be after you had learned the outcome of the sequence up to n.) You can generate similar puzzles in more familiar settings. For example, what should your credence be that a dart thrown at the real line will hit the Vitali set? Presumably it should be unmeasurable. However, conditional on each of the propositions that the dart lands in a given coset of the rationals – propositions which partition the reals – the probability should be zero: each coset is countable and contains exactly one point of the Vitali set, so this is the probability of hitting one particular point out of countably many.
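In symbols (a standard formulation, not taken from Kenny’s post):

```latex
% Conglomerability over a partition S = \{E_i\}:
\text{if } \mathrm{Cr}(p \mid E_i) \in [a, b] \text{ for every } E_i \in S,
\text{ then } \mathrm{Cr}(p) \in [a, b].
% Here \mathrm{Cr}(p_n \mid E) = 1/2 for each possible outcome E of the
% tosses before n, yet \mathrm{Cr}(p_n) is unmeasurable, so in particular
% it is not 1/2.
```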

**The Principal Principle**. This states, informally, that if you’re certain that the objective chance of p is x, then you should set your credence in p to x (provided you don’t have any ‘inadmissible’ evidence concerning p). Intuitively, the chances of simple physical scenarios like p_{n} shouldn’t be unmeasurable. This turns out to be not so obvious. It is first worth noting that the argument that your credence in p_{n} is unmeasurable doesn’t apply to the chance of p_{n}, because there are physically possible worlds that are doxastically impossible for you (i.e. worlds where you don’t follow the chosen strategy at guess n). Secondly, although the chance of a proposition *can* change over time – so it could technically be unmeasurable before any coin tosses but 1/2 before the nth coin toss – the way that chances evolve is governed by the physics of the situation: the Schrödinger equation, or what have you. In the example we described we said nothing about the physics, but even so, it does seem like we can consistently stipulate that the chance of p_{n} remains constant at 1/2. In such a scenario we would have a violation of the Principal Principle – before the tosses you can be certain that the chance of p_{n} is 1/2, but your credence in p_{n} is unmeasurable. (Of course, one could just take this to mean that you can’t really be certain you’re going to follow a given strategy in a chancy universe – some things are beyond your control.)
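The principle itself is usually written along these lines (again, my notation rather than Lewis’s exact formulation):

```latex
% Principal Principle: credence in p, conditional on the chance of p
% being x together with any admissible evidence E, should equal x:
\mathrm{Cr}\!\left(p \mid \mathrm{Ch}(p) = x \wedge E\right) = x
% The stipulated violation: you are certain that \mathrm{Ch}(p_n) = 1/2,
% yet \mathrm{Cr}(p_n) is unmeasurable rather than 1/2.
```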

Anyway, after telling some people this puzzle, and the related hats puzzle, a lot of people seemed to think that it was the axiom of choice that’s at fault. To evaluate that claim requires a lot of care, I think.

Usually, to say the Axiom of Choice is false is to say that there are sets which cannot be well-ordered, or something equivalent. And presumably this depends on which structure accurately fits our talk of sethood and membership – whose extension is partially determined by the linguistic practices of set theorists (much as with ‘arthritis’ and ‘beech’, the extension of ‘membership’ cannot be primarily determined by the usage of the ordinary man on the street). After all, there are many structures that satisfy even the relatively sophisticated axioms of first-order ZF, only some of which satisfy the axiom of choice.

If it is this question that is being asked, then the answer is almost certainly: yes, the axiom of choice is true. The structure with which set theorists, and more generally mathematicians, are concerned is one in which choice holds. (It’d be interesting to do a survey, but I think it is common practice in mathematics not even to mention that you’ve used choice in a proof. Note that it is a different question whether mathematicians *think* the axiom of choice is true – I’ve often found, especially when they realise they’re talking to a “philosophy” student, that they’ll suddenly become formalists.)

But I find it very hard to see how this answer has *any* bearing on the puzzle here. Which structure best fits mathematical practice seems to have no implications whatsoever for whether *it is possible for an idealised agent to adopt a certain strategy*. That has to do with the nature of possibility, not sets. Which possible scenarios are concretely realisable? For example, can there be a concretely realised agent whose mental state encodes the choice function on the relevant partition of sequences? (Where a choice function here needn’t be a set but, quite literally, a physical arrangement of concrete objects.) Or another example: imagine a world with some number of epochs, and in each epoch some number of people – all of them wearing green shirts. Is it possible that exactly one person in each epoch wears a red shirt instead? Surely the answer is yes: whether any given person wears a red shirt is logically independent of whether the other people in their epoch do. A similar possibility can be guaranteed by Lewis’s principle of recombination – it is possible to arbitrarily delete bits of worlds, so it should be possible that exactly one of these people exists in each epoch. Or, suppose you have two collections of objects, A and B. Is it possible to physically arrange these objects into pairs such that either every A-thing is in one of the pairs, or every B-thing is? Provided there are possible worlds large enough to contain such big collections, it seems the answer is again yes. However, all of these modal claims correspond to some kind of choice principle.

Perhaps you’ll disagree about whether all of these scenarios are metaphysically possible. For example, can there be spacetimes large enough to contain all these objects? I think there is a natural class of spacetimes that can contain arbitrarily many objects – those constructed from ‘long lines’ (if κ is an ordinal, a long line is κ × [0, 1) under the lexicographic ordering, which behaves much like the positive reals, and can be used to construct large equivalents of ℝ⁴). Another route of justification might be the principle that if a proposition is mathematically consistent, in the sense that it is true in some mathematical structure, then that structure should have a metaphysically possible isomorph. Since Choice is certainly regarded as mathematically consistent, if not true, one might have thought that the modal principles needed to get the puzzle off the ground should hold.

## Guessing the result of infinitely many coin tosses

September 22, 2008

What is the probability that an infinite number of coin tosses all land heads up? In a relatively recent *Analysis* paper Tim Williamson argues, convincingly I think, that the probability must be 0. What I’m going to say here concerns a related puzzle, and may shed some light on this question, although I haven’t fully absorbed the consequences. But either way, it strikes me as very weird, so any comments would be welcome.

The puzzle I’m concerned with involves *guessing* the results of infinitely many coin tosses. The setup is as follows:

For each n > 0, at 1/n hours past 12pm the following is going to happen: aware of the time, you are going to guess either heads or tails, and then I am going to flip a coin and show you the result, so you can see whether you were right or wrong. This process may have to be done at different speeds to fit it all into the hour between 12pm and 1pm.

[Note how this differs from the Williamson sequence of coin tosses in that it is a backwards supertask.] The question I’m interested in is: how well can you do at this game? For example, can you, with absolute certainty, guess every result correctly? Intuitively, the answer is ‘no’, and unsurprisingly the answer to this question is indeed ‘no’. Can you adopt a strategy for guessing such that you are guaranteed to get at least one guess right? Intuitively the answer is still ‘no’ – no matter how unlikely, it is still *possible* that you always guess the wrong answer, and it is hard to see how adopting a certain way of guessing will get you out of this if you are extremely unlucky. Despite the intuition, however, the answer to this second question is in fact ‘yes’! There is a way of guessing such that you are guaranteed to guess right at least once. If that doesn’t strike you as weird yet, think about it a bit more before reading on.

Things get even weirder. It turns out that there’s a way of guessing such that, following it, you are completely guaranteed, no matter how the coins land, to guess all but finitely many of the tosses correctly. That is, if you follow this strategy, you are guaranteed to make only finitely many mistakes. Among other things, this means that at some point after 12pm you won’t have made any mistakes at all – you will have guessed an infinite sequence of coin tosses correctly. So it follows that there is a way of guessing such that, if you follow it, you are guaranteed to guess the result of a fair coin toss correctly infinitely many times in succession.

The construction of the strategy is actually relatively simple (and it closely resembles the solution to this infinite hat problem). Firstly, we divide the space of all possible total sequences of heads and tails into equivalence classes as follows. By a possible total sequence of heads and tails I mean a list describing the result of the coin toss at each time 1/n past 12. The sequences are divided into equivalence classes according to whether they agree at all but finitely many places: let a ~ b iff a and b disagree about how the coin lands at at most finitely many places. Now, with the help of the axiom of choice, we can pick a representative sequence from each equivalence class – so we have a choice function that associates each equivalence class with an element it contains.
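In symbols, the equivalence relation is just:

```latex
% a and b are sequences in \{H, T\}^{\mathbb{N}}; they are equivalent
% iff they differ at only finitely many indices:
a \sim b \iff \{\, n \in \mathbb{N} : a_n \neq b_n \,\} \text{ is finite}
% The axiom of choice gives a function f assigning to each equivalence
% class [a] a representative f([a]) \in [a].
```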

The strategy you should adopt runs as follows. At 1/n hrs past 12 you can work out exactly which equivalence class the completed sequence of heads and tails that will eventually unfold is in: you have been told the results of all the previous tosses, and you know there are only finitely many tosses left to go, so the eventual completed sequence can differ from what you already know of it at only finitely many places. Given that you know which equivalence class you’re in, you just guess *as if* the representative of that equivalence class were correct about the current toss. So at 1/n hrs past 12 you just look at how the representative sequence says the coin will land at 1/n past 12, and guess accordingly.

Now, since the representative sequence and the actual sequence of heads and tails that occurs are in the same equivalence class, they must only differ at finitely many places. So, if you guessed according to the representative sequence, you have only made finitely many mistakes.
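As a toy illustration, here is a minimal Python sketch. It cheats in one crucial respect: it restricts attention to the single equivalence class of sequences that are eventually all heads, where the choice function *is* computable (take the all-heads sequence as the representative). For an arbitrary class the representative exists only by the axiom of choice and cannot be written down. The names (`simulate`, `p_tails`, etc.) are my own, not from the post.

```python
import random

# Toy model of the guessing strategy, restricted to the equivalence class
# of sequences that are eventually all heads. For that class the choice
# function is computable: the representative is the all-heads sequence,
# so guessing "as if the representative were correct" reduces to guessing
# heads at every remaining toss.

def simulate(num_remaining=20, p_tails=0.5, seed=None):
    rng = random.Random(seed)
    # At time 1/n past 12 the agent has already seen the infinite earlier
    # portion of the sequence, which by assumption is all heads, so it
    # knows its equivalence class. Only tosses n, n-1, ..., 1 remain.
    remaining = ['T' if rng.random() < p_tails else 'H'
                 for _ in range(num_remaining)]
    # Every 'T' among the remaining tosses is a mistake for the strategy.
    mistakes = sum(1 for toss in remaining if toss != 'H')
    return mistakes

# However the remaining coins land, the number of mistakes is bounded by
# the (finite) number of remaining tosses.
```

The point the simulation makes vivid is only the bookkeeping: once the equivalence class is known, mistakes are confined to the finitely many tosses on which the actual sequence departs from the representative.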

Now this raises some further puzzles. For example, suppose that it’s half past twelve, and you know the representative sequence predicts that the next toss will be heads. What should your credence that it will land heads be? On the one hand it should be 1/2 since you know it’s a fair coin. But on the other hand, you know that the chance that this is one of the few times in the hour that you guess incorrectly is very small. In fact, in this scenario it is “infinitely” smaller in comparison, so that your credence in heads should be 1. So it seems this kind of reasoning violates the Principal Principle.