## Is ZFC Arithmetically Sound?

February 12, 2010

I recently stumbled across this fascinating discussion on FOM. The question at stake: why should we believe that ZFC doesn’t prove any false statements about numbers? That is, while of course we should believe ZFC is consistent and $\omega$-consistent, that is no reason to expect it not to prove false things: perhaps even false things about numbers that we could, in some sense, verify.

Of course – the “in some sense” is important, as Harvey Friedman stressed in one of the later posts. After all, ZFC proves everything PA does, so whatever the false consequences of ZFC are, we couldn’t prove them from PA. There were a number of interesting suggestions. For example, ZFC might prove the negation of something we have lots of evidence for (e.g. something like Goldbach’s conjecture, where we have verified lots of its instances – except, unlike GC, it can’t be $\Pi^0_1$.) Or perhaps it would prove that some Turing machine halts, when in fact the machine would never halt if we were to build and run it. There’s a clear sense in which it’s false that the TM halts, even though we can’t verify that conclusively.

Anyway, while reading all this I became quite a lot less sure about some things I used to be pretty certain about. In particular, a view I had never thought worth serious consideration: the view that there isn’t a determinate notion of being ‘arithmetically sound’. Or more transparently, the view that there’s no such thing as *the* standard model of arithmetic, i.e. there are lots of equally good candidate structures for the natural numbers, and there’s no determinate notion of true and false for arithmetical statements. Now that I have given it fair consideration, I’m actually beginning to be swayed by it. (Note: this is not to say I don’t think there’s a matter of fact about statements concerning certain physical things, like the ordering of yearly events in time, or whether a physical Turing machine will eventually halt. It’s just that I think this could turn out to be contingent. It’ll depend, I’m guessing, on the structure of time and the structure of the space in which the machine tape is embedded. Thus, on this view, arithmetic is like geometry – there is no determinate notion of true-for-geometry, but there is a determinate notion of true of the geometry of our spacetime, which actually turns out to be a weird geometry.)

Something that would greatly increase my credence in this view would be finding a pair of “mysterious axioms”, (MA1) and (MA2), with the following properties: (a) they are like the continuum hypothesis, (CH), in that they are independent of our currently accepted set theory, say ZFC plus large cardinals, and, like (CH), it is unclear how things would have to be for either to be true or false; (b) unlike (CH) and its negation, (MA1) and (MA2) disagree about some arithmetical statement.

Let me first say a bit more about (a). On some days of the week I doubt there are any sets, or that there are as many things as there would need to be for there to be sets. However, I believe in plural quantification, and believe that if there *were* enough things, then we could generate models for ZFC just by considering pluralities of ordered pairs. But even given all that, I don’t think I know what things would have to be like for (CH) to be true. If there is a plurality of ordered pairs that satisfies ZF(C), then there is one that satisfies ZFC+CH, namely Gödel’s constructible universe, and also one that doesn’t satisfy CH. So even given that we have all these objects, it is not clear which relation should represent membership between them. I can only think of two reasons to think there is a preferred relation: (1) there is a perfectly natural relation, membership, between these objects which set theorists are somehow able to latch onto and intuit things about from their armchairs, or (2) there is only one such relation (up to isomorphism, anyway) compatible with the linguistic practices of set theorists. Neither of these seems particularly plausible to me.

Now let me say a bit about (b). Note firstly that Con(ZFC) is an arithmetical statement independent of ZFC. However, it is not like (CH), in that we have good reason to believe its negation is false. And more to the point, its negation is inconsistent with there being any inaccessibles. (MA1/2) are going to have to be subtler than that.

It is also instructive to consider the following argument that ZFC *is* arithmetically sound. Suppose it’s determinate that there’s an inaccessible (a reasonable assumption, if we grant there are enough things, and that the truth of these claims is partially fixed by the practices of set theorists.) Let $\kappa$ be the first one. Then $V_\kappa$ is a model for ZFC which models every true arithmetical statement (because the natural numbers are an initial segment of $\kappa$ [edit: and arithmetical statements are absolute].) So ZFC cannot prove any false arithmetical statement. That is, determinately, ZFC is arithmetically sound. And all we’ve assumed is that it’s determinate that there’s an inaccessible.
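
Schematically, with $\kappa$ the least inaccessible and $\phi$ any arithmetical sentence, the argument runs:

```latex
% phi any arithmetical sentence, kappa the least inaccessible:
\mathrm{ZFC} \vdash \phi
  \;\Rightarrow\; V_\kappa \models \phi      % V_kappa is a model of ZFC
  \;\Rightarrow\; \mathbb{N} \models \phi    % absoluteness: N is an initial
                                             % segment of V_kappa
% Contraposing: ZFC proves no false arithmetical sentence.
```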

Now, I find this argument convincing. But clearly it doesn’t prove that every arithmetical statement is determinate. All it shows is that arithmetic is determinate if set theory is. And (CH) has already brought that antecedent into doubt! So although $V_\kappa$ determinately decides every arithmetical statement correctly, it may still be indeterminate what $V_\kappa$ makes true. That is, (MA1) and (MA2) would disagree not only over some arithmetical statement, but also over whether $V_\kappa$ makes that statement true.

Now, maybe there isn’t anything like (MA1/2). Maybe we will always be able to find a clear reason to accept or reject any set theoretic statement that has consequences for arithmetic. But I see absolutely no good reason to think that there won’t be anything like (MA1/2). To make it more vivid: there are these really, really weird results from Harvey Friedman showing that simple combinatorial principles about numbers imply all kinds of immensely strong things about large cardinals. While these simple principles about numbers look determinate, they imply highly sophisticated principles that are independent of ZFC. I see no reason why someone might not find a simple number theoretic principle that implies a continuum-hypothesis-type statement. And in the absence of face value platonism – *a lot* of objects, and a uniquely preferred (perhaps natural) membership relation between them – it is hard to see how these statements could be determinate.

## Size and Modality

March 25, 2009

There’s this thing that’s been puzzling me for a while now. It’s kind of related to the literature on indefinite extensibility, but the thing that puzzles me has nothing to do with sets, quantification or Russell’s paradox (or at least, not obviously.) I think it is basically a puzzle about infinities, or sizes.

First I should get clear on what I mean by size. Size, as I am thinking about it, is closely related to what set theorists call cardinality. But there are some important differences.

(i) Cardinality is heavily bound up with set theory, whereas I take it that size talk does not commit us to sets. For example, I believe I can truly say there are more regions than open regions of spacetime, even if I’m a staunch nominalist. Think of size talk as analogous to plural quantification: I am not introducing new objects into the domain (sizes/pluralities), I am just quantifying over the existing individuals in a new way.

(ii) Only sets have cardinalities, whereas I believe you can talk about the sizes of proper-class-sized pluralities.

(iii) Points (i) and (ii) are compatible with a Fregean theory of size. But Fregean sizes, like cardinalities, are thought to be had by pluralities (concepts, sets) of individuals in the domain. In particular: every size is the size of some plurality/set. I reject this. I think there are sizes which no plurality has – I think there could have been more things than there in fact are, and thus that there are sizes which no plurality in fact has. So sizes are inherently bound up with modality on this view – sizes are had by possible pluralities.

(iv) Frege and the set theorists both believe sizes are individuals. I’m not yet decided on this one, but Frege’s version of Hume’s principle forces the domain to be infinite, which contradicts (i) – that size talk isn’t ontologically committing. Interestingly, the plural logic version of HP is satisfiable on domains of any size – thus sizes can always be construed as objects, if needs be. But I’m inclined to think that size talk is fundamentally grounded in certain kinds of quantified statements (e.g., “there are countably many F’s”.)

I’m going to mostly ignore (iv) from here on and talk about sizes as if they were objects, because, as noted, you can consistently do this if needs be (given global choice.) That said, I can’t adopt HP, because of point (iii): it’s built into the notation of HP that every size is the size of some plurality. Furthermore, Hume’s principle entails there is a largest size. (Cardinality theory says there is no largest cardinality, but this is because of an expressive failure on its part – proper classes don’t have cardinalities.) However, if we accept the following principle:

• Necessarily, there could have been more things.

it follows from (iii) that there is no largest size.

I think this is right. It just seems weird and arbitrary to think that there could be this largest size, $\kappa$. Why $\kappa$ and not $2^\kappa$? Clearly, it seems, there are worlds that have this many things (think of, e.g., Forrest-Armstrong type constructions.) If not, what metaphysical fact could possibly ground this cutoff point?

What I don’t object to is there being a largest size of an actual plurality. I’m fine with arbitrariness, so long as it’s contingent. But to think that there is some size that limits the size of all possible worlds seems really strange. Just to state the existence of a limit seems to commit us to larger sizes – it’s like saying there are sizes which no possible world matches.

Here is a second principle about sizes I really like: any collection of sizes has an upper bound. This is something that Fregean, and in a certain sense cardinality, theories of size share with me, so I’m not going to spend as long defending it. But intuitively, if you can have possible worlds with domains of size $\kappa$ for each $\kappa \in S$, then there should be a world containing the union of all these domains – a world with at least $\sup(S)$ things.

So this is what I mean by size. Here is the puzzle: this conception of size seems to be inconsistent. To see this we need to formalise a bit further. Take as our primitive a binary relation over sizes, < (informally “smaller than”.) For simplicity, assume we are only quantifying over sizes. Here are some principles. You can ignore 3. and 4. if you want, 1. and 2. are obvious, and 5. and 6. we have just argued for.

1. $\forall x \neg x < x$
2. $\forall xyz(x < y \wedge y < z \rightarrow x < z)$
3. $\forall xy(x < y \vee x = y \vee y < x)$
4. $\forall xx\exists x(x \prec xx \wedge \forall y(y \prec xx \rightarrow x \leq y))$
5. $\forall x \exists y\, x < y$
6. $\forall xx\exists x\forall y(y \prec xx \rightarrow y \leq x)$

The first three principles say that < is a total order, which is pretty much self evident. The fourth says it’s a well order. (The inconsistency to follow doesn’t require (3) or (4).) The fifth encodes the principle that there is no largest size, and the sixth says that every collection of sizes has an upper bound.

These principles are jointly inconsistent: let $xx$ be the plurality of self-identical things (that is, all the sizes, since we are quantifying only over sizes). By (6), $xx$ has an upper bound, $k$. By (5), there is a size $k^+$ with $k < k^+$. Since $k^+$ is among the $xx$, and $k$ is an upper bound for $xx$, $k^+ \leq k$. Thus $k < k$ by (2) and logic, which is impossible by (1).

There are roughly three ways out of this usually considered. Fregean theories reject (5), cardinality theory (with unrestricted plural quantifiers) denies (6), and indefinite extensibilists do something funky with the quantifiers (I’ve never really worked out how that helps, but it’s there for completeness.) Also note that the version of (6) restricted to “small” (roughly, “set-sized”) pluralities is consistent.

My own diagnosis is that the above formulation of size theory simply fails to take account of the modal nature of sizes. If we are pretending that sizes are objects at all (which, I think, is also not an innocent assumption), we should remember that just because there could be such a size, it doesn’t mean that in fact there is such a size. This is the same kind of fallacious reasoning encoded in the Barcan formula and its converse (this is partly why it is very unhelpful to think of sizes as objects; we are naturally inclined to think of them as abstract, necessarily existing objects.)

Anyway – a natural way to formulate (1)-(6) in modal terms would be in a second order modal logic, perhaps with a primitive second level size comparison relation. For example, (1) would be ‘necessarily, if the xx are everything, then there aren’t more xx than xx’, (2) would be ‘necessarily for all xx, necessarily for all yy, necessarily for all zz, if there are more zz‘s than yy‘s and more yy‘s than xx‘s, then there are more zz‘s than xx‘s’, and (5) would be ‘necessarily, there could have been more things’. The only problem is: how would we state (6)?

I’ve been toying around with propositional quantification. Let me change the primitives slightly: instead of using $\Box p, \Diamond p$ to talk about possibility and necessity, I’ll interpret them as saying p is true in some/every accessible world with a larger domain than the current world. Also, since I don’t care about anything about a world except the size of its domain, let us think of the worlds not as representing maximally specific ways for things to be, but as sizes themselves. Thus the intended models of the theory will be Kripke frames of the following form: $\langle W, R \rangle$, where (i) the transitive closure of R is a well order on W, and (ii) for each w in W, R is a well order on R(w). (We’re going to have to give up S4, so we mustn’t assume R is transitive on W, although it’s locally transitive on R(w) for each w in W.) Propositions are sets of worlds, so the range of the propositional quantifiers differs from world to world, since R is non-trivial.

Call R a local well order on W iff it satisfies (i) and (ii). I’m going to assert without defence (for the time being) that the formulae valid over the class of local well orders will be the modal equivalent of (1)-(4) holding (I expect it would be fairly easy to come up with an axiomatisation of this class directly, and that this axiomatisation would correspond to (1)-(4). For example, the complicated one, (4), would correspond to $\forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r)))$.)

The important thing is that it is possible to state (5) and (6) directly, and, it seems, consistently (although we’ll have to give up on unrestricted S4.) [Note: I may well have made some mistakes here, so apologies in advance.]

1. $\Box p \rightarrow p$
2. $\forall pqr(\Diamond(p \wedge \Diamond(q \wedge \Diamond r)) \rightarrow \Diamond(p \wedge \Diamond r))$
3. $\forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r)))$
4. $\Box\exists p(p \wedge \Diamond \neg p)$
5. $\forall p \Diamond\exists q(q \wedge \neg p)$

(I decided halfway through writing this post it was simpler to axiomatise a reflexive well order, so the modal (1)-(4) above don’t correspond as naturally to the original (1)-(4) – I’ll try and neaten this up at some point).

What is slightly striking is the failure of S4. Informally, if I were to have S4 I would be able to quantify over the universal proposition of all worlds, take its supremum by (6), and find a world not in the proposition by (5). This would just be a version of the inconsistency given for the extensional size theory above.
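
To make the failure of S4 concrete, here is a toy check on a tiny frame of the above kind (the three-world frame and the evaluation code are my own illustration, not anything from the posts): three “sizes” where each world sees only the next one up, so the transitive closure of R well-orders W but R itself is not transitive.

```python
# Worlds play the role of sizes; each world accesses only the next size up.
# The transitive closure of R well-orders W, and R trivially well-orders
# each R(w), so this is a "local well order" in the sense above.
W = {0, 1, 2}
R = {(0, 1), (1, 2)}   # deliberately not transitive: (0, 2) is missing

def succ(w):
    """The worlds accessible from w."""
    return {v for (u, v) in R if u == w}

def box(prop):
    """[]p: the set of worlds at which p holds in every accessible world."""
    return {w for w in W if succ(w) <= prop}

p = {1}   # a proposition true only at the middle world
# S4's characteristic axiom []p -> [][]p fails at world 0:
assert 0 in box(p)            # world 0 sees only world 1, which is in p
assert 0 not in box(box(p))   # but world 1 sees world 2, which is not in p
```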

Instead, we have a picture on which worlds can only see a limited number of world sizes – to see the larger sizes you have to move to larger worlds. At no point can you “quantify” over all collections of worlds – so, at least in this sense, the view is quite close to the indefinite extensibility literature. But of course, the non-modal talk is misleading: worlds are really maximally specific propositions, and the only propositions that exist are those in the range of our propositional quantifiers at the actual world – the worlds inaccessible to the actual world in the model should just be thought of as a useful picture for characterising which sentences in the box and diamond language are true at the actual world.

## Cardinality and the intuitive notion of size

January 1, 2009

According to mathematicians two sets have the same size iff they can be put in one-one correspondence with one another. Call this Cantor’s principle:

• CP: X and Y have the same size iff there is a bijection $\sigma:X\rightarrow Y$

Replace ‘size’ by ‘cardinality’ in the above and it looks like we have a definition: an analytic truth. As it stands, however, CP seems to be a conceptual analysis – or at the very least an extensionally equivalent characterisation. In what follows I shall call the pretheoretic notion ‘size’ and the technical notion ‘cardinality’. CP thus states that two sets have the same size iff they have the same cardinality.

Taken as a conceptual analysis of sizes of sets, as we ordinarily understand it, people often object. For example, according to this definition the natural numbers are the same size as the even numbers, and the same size as the square numbers, and many more sets even sparser than these. This is an objection to the right to left direction of CP.

I’m not inclined to give these intuitions too much weight. In fact, I think the intuitive principles behind these judgements are inconsistent. Here are two principles that seem to be at work: (i) if X is a proper subset of Y then X is smaller than Y, (ii) if by uniformly shifting X you get Y, then X and Y have the same size. For example, (i) is appealed to when it’s argued that the set of evens is smaller than the set of naturals, and (ii) when people argue that the evens and the odds have the same size. Furthermore, both principles are solid when we are dealing with finite sets. However, (i) and (ii) are clearly inconsistent. If the evens and the odds have the same size, so do the odds and the evens\{2}: this is just an application of (ii), since the evens\{2} stand in exactly the same relation to the odds (a uniform shift, by three) as the odds do to the evens (a uniform shift, by one). By transitivity, the evens and the evens\{2} are the same size – but this contradicts (i), since one is a proper subset of the other.
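
To see the two intuitions clash concretely, here is a quick mechanical check on finite truncations of the sets involved (the truncation is my own device for making the shifts checkable; one stray boundary element has to be trimmed):

```python
evens = set(range(2, 1001, 2))   # {2, 4, ..., 1000}: finite stand-in for the evens
odds = set(range(1, 1000, 2))    # {1, 3, ..., 999}:  finite stand-in for the odds

def shift(s, k):
    """Uniform shift: the operation principle (ii) says preserves size."""
    return {x + k for x in s}

# The evens shifted down by one are exactly the odds ...
assert shift(evens, -1) == odds
# ... and the odds shifted up by three are the evens minus {2}
# (trimming the one element the finite truncation pushes past the edge).
assert shift(odds, 3) - {1002} == evens - {2}
# So (ii) plus transitivity forces: evens ~ odds ~ evens\{2}, while
# principle (i) says the proper subset evens\{2} is strictly smaller.
```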

In fact, Gödel gave a very convincing argument for the right to left direction: (a) changing the properties of the elements of a set does not change its size, (b) two sets which are completely indistinguishable have the same size, and (c) if $\sigma:X \rightarrow Y$, each $x \in X$ can morph its properties so that x and $\sigma(x)$ are indistinguishable. Thus, if $\sigma$ is a bijection, X can be transformed in such a way that it is indiscernible from Y, and so must have the same size. (Kenny has a good discussion of this at Antimeta.)

The direction of CP I think there is a genuine challenge to is the left to right. And without it, we cannot prove there is more than one infinite size! (That is, if we said every infinite set had the same size, that would be consistent with the right to left direction of CP alone.)

What I want to do here is justify the left to right direction of CP. The basic idea has to do with logical indiscernibility. If two sets have the same size, I claim, they should be logically indiscernible in the following sense: any logical property had by one is had by the other. Characterising the logical properties as the permutation invariant ones, we can see that if two sets have the same cardinality, then they are logically indiscernible. Since we accept the inference from having the same cardinality to having the same size, this partially confirms the claim.

But what about the full claim? If two sets have the same size, how can they be distinguished logically? There must be some logically relevant feature of the sets which distinguishes them, but has nothing to do with their size. But what could that possibly be? Surely size tells us everything we can know about a set without looking at the particular characteristics of its elements (i.e. its non-logical properties.) If there is any natural notion of size at all, it must surely involve logical indiscernibility.

The interesting thing is that if we have the principle that sameness in size entails logical indiscernibility, we get CP in full. The logical properties over the first layer of sets of the urelemente are just those sets invariant under all permutations of the urelemente. The logical properties of these sets are thus just unions of size classes – collections of all the sets of a given size. So logically indiscernible sets are just sets with the same cardinality!
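
The claim that the permutation-invariant families of sets are exactly the unions of size classes can be checked by brute force on a small domain. A sketch over a three-element domain of urelemente (the function names are mine):

```python
from itertools import combinations, permutations

D = [0, 1, 2]                                   # a tiny domain of urelemente
subsets = [frozenset(c) for r in range(len(D) + 1)
           for c in combinations(D, r)]
perms = [dict(zip(D, p)) for p in permutations(D)]

def image(s, perm):
    """Apply a permutation of the domain pointwise to a subset."""
    return frozenset(perm[x] for x in s)

def invariant(family):
    """Is this family of subsets fixed by every permutation of D?"""
    return all(image(s, p) in family for s in family for p in perms)

size_class = lambda k: frozenset(s for s in subsets if len(s) == k)

# Every permutation-invariant family is a union of size classes.
for bits in range(2 ** len(subsets)):
    family = frozenset(s for i, s in enumerate(subsets) if bits >> i & 1)
    if invariant(family):
        sizes = {len(s) for s in family}
        assert family == frozenset(s for s in subsets if len(s) in sizes)
```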

Ignore sets for a moment. The usual setting for permutation invariance tests is on the quantifiers. A variant of the above argument can be given. This time we assume that size quantifiers are maximally specific logical quantifiers. There are two ways of spelling this out, both of which will do:

• For every logical quantifier, Q, $Sx\phi \models Qx\phi$ or $Sx\phi \models \neg Qx\phi$
• For every logical quantifier, Q, if $Qx\phi \models Sx\phi$ then $Qx\phi \equiv Sx\phi$

The justification is exactly the same as before: the size of the $\phi$‘s tells us everything we can possibly know about the $\phi$‘s without looking at the particular characteristics of the individual $\phi$‘s – without looking at their non-logical properties. Since the cardinality quantifiers have this property too, we can show that every size quantifier is logically equivalent to some cardinality quantifier and vice versa.

I take this to be a strong reason to think that cardinality is the only natural notion of size on sets. That said, there’s still the possibility that the ordinary notion of size is simply underdetermined when it comes to infinite sets. Perhaps our linguistic practices do not determine a unique extension for expressions like ‘X is the same size as Y’ for certain X and Y. One thing to note is that the indeterminacy view seems to be motivated by our wavering intuitions about sizes. But as we saw earlier, a lot of these intuitions turn out to be inconsistent, so there won’t even exist precisifications of ‘size’ corresponding to these intuitions. On the other hand, if we are to think of the size of a set as the most specific thing we can say about that set, without appealing to the particular properties of its members, then there is a reason to think this uniquely picks out the cardinality precisification.

December 2, 2008

I have a little paper writing up the supertask puzzle I posted recently. I’ve added a second puzzle that demonstrates the same problem, but doesn’t use the axiom of choice (it’s basically just a version of Yablo’s paradox), and I’ve framed the puzzles in terms of failures of the deontic Barcan formulae.

Anyway – if anyone has any comments, I’d be very grateful to hear them!

November 19, 2008

I’ve been thinking about variations on the coin tossing puzzle I posted about a month or so back. This is one I find particularly weird, and seems to violate principles of free choice. You can have a two player game where both players have a winning strategy, but only one player can win. In particular, this implies that if one player follows her winning strategy, the other player can’t. So, although at every point in the game the second player is free to follow the strategy, she is not free to follow the strategy at every point in the game. (I intend there to be some kind of scope difference there.)

The games I am interested in are defined as follows. First I shall define a round: player one chooses 1 or 0, then player two chooses 1 or 0 (having heard player one’s choice.) Player one wins if player two chooses the same number as he did, player two wins if her number is different. Next, a game is a sequence of rounds. Player 2 wins if she wins every round, player 1 wins otherwise.

A strategy for one of these games is a function taking sequences of 1’s and 0’s (provided the order type of the sequence is an initial segment of the game order type) to {0, 1}. A winning strategy for a player is a strategy $\sigma$ such that, if at each point s in the game you played $\sigma(s)$, then you would win.

Now, clearly player one does not have a winning strategy for any game that is a finite sequence of rounds – and indeed, this holds for any game that is a well founded sequence of rounds. Obviously, player two has a winning strategy, since she may always say the opposite of what player one says. Since in well founded games only one player can have a winning strategy, player one never has one.

Bizarrely, however, player one does have winning strategies on non-well founded games. Suppose they play on a backwards omega sequence: e.g. a move takes place at each 1/n hours past 12pm, and the game ends at 1pm. Then divide the possible sequences that player two might play into equivalence classes according to whether they differ by at most finitely many moves. If player one picks a representative from each class, then at each point in the game he can work out what class he’s in (all but finitely many of player two’s moves have already been played), and he can play the same move that the representative sequence predicts player two will play. At the end he must have won all but finitely many rounds. (I discussed the strategy a bit more here.)
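
The full strategy needs the axiom of choice to pick a representative from each equivalence class. Here is a hedged sketch of just one special case: player two’s sequence lies in the class of the all-zero sequence, so the representative can be taken to be all zeros (the finite horizon and the function names are my own simplification; nothing here constructs the general choice set):

```python
# Rounds are indexed by their distance from the end of the game.  We restrict
# to one equivalence class: player two's total sequence differs from the
# all-zero sequence at only finitely many rounds, so the representative
# player one consults predicts 0 everywhere.
def player_one_move(n):
    return 0   # what the (all-zero) representative predicts for round n

def rounds_player_one_wins(player_two_sequence):
    """Rounds where player one's prediction matches player two's move."""
    return {n for n, move in enumerate(player_two_sequence)
            if player_one_move(n) == move}

# Player two deviates from the representative at rounds 3, 7 and 11 only.
p2 = [1 if n in {3, 7, 11} else 0 for n in range(1000)]
won = rounds_player_one_wins(p2)
assert set(range(1000)) - won == {3, 7, 11}   # only finitely many losses
# Player one wins all but finitely many rounds, hence wins the game:
# player two needed to win *every* round.
```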

So both player one and player two have a winning strategy. But clearly they can’t both win – so it follows that at least one of them can’t follow their strategy in a given game. This is particularly weird, since at each point in the game they are free to follow their strategy – there’s nothing physically preventing them from doing so – but they are not free to follow it at all of the moves.

This contradicts what I shall call the ‘free choice principle’: that if a rational agent is free and able to do something, and wants to do it, she will do it. For the game above we can formulate this as follows. Let $\Diamond_i$ be read roughly as ‘player i (i = 1 or 2) is free to make it the case that’, and let $P_in$ say ‘at round n, player i (i = 1 or 2) follows his/her strategy’, where round n is the n’th round from the end of the game. The free choice principle reads:

• $\forall n (\Diamond_i P_in \rightarrow P_in)$

If at a given round each player is free to follow their strategy, then each player does follow their strategy. We assume tacitly that the players we are concerned with want to follow their strategy, and are physically able to carry it out, etc… We may formulate the principle that at each point in the game, both players are free to follow their strategy as follows

• $\forall n\Diamond_i P_in$

But this entails the impossible conclusion: $\forall n (P_1n \wedge P_2n)$. At least one player has to lose.
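
Written out, the derivation is just modus ponens under the quantifier (on my reading of the two principles):

```latex
% Free choice:   \forall n\,(\Diamond_i P_i n \rightarrow P_i n)   (i = 1, 2)
% Always free:   \forall n\,\Diamond_i P_i n                       (i = 1, 2)
% Hence:         \forall n\,(P_1 n \wedge P_2 n)
% But if player two follows her strategy at every round she wins every
% round, while if player one follows his he wins at least one round.
```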

As far as I can see, the premise that at each point in the game each player is free to play according to her strategy is fine. It’s been stipulated that nothing is preventing them from following the strategy, and there are no other relevant limitations.

So it has to be the principle of free choice that goes. There will be a round such that one of the two perfectly rational players wants to follow her strategy, intends to follow it, can follow it in the sense that nothing is preventing her, yet doesn’t follow it. Strange.

## Is the axiom of choice a logical truth?

October 5, 2008

I actually think there are a bunch of related statements which we might think of as expressing choice principles. The most striking contrast is probably the set theoretic statement of choice, and the choice principle as it is stated in second order logic: $\forall R(\forall x \exists y Rxy \rightarrow \exists f \forall x Rxf(x))$. I want to argue that the second principle is a purely logical principle, unlike the first, despite the fact that the question of whether or not the latter is a logical truth seems to depend on the (ordinary) truth of the former.

Let’s start off with the set theoretic principle. I believe this is non-logical. Note, however, that this is not because of the Gödel-Cohen arguments – I think set-choice is a logical consequence of the second order ZF axioms, given SOL-choice. It is rather because the ZF axioms themselves are non-logical. For example, consider a model with three elements such that $a \in b \in c$ – clearly c is a set of nonempty sets, but there isn’t a choice function for it, because there aren’t any functions in the model at all (that would require a set of sets of sets of sets.) Simply put: membership is not a logical constant, and so admits choice refuting interpretations. Note, I don’t mean to downplay the importance of the Gödel-Cohen arguments; forcing and inner model theory are important tools in the epistemology of mathematics. Set-choice and CH may not be logically independent of the ZF axioms, but the Gödel-Cohen arguments do show us that, for all we are currently in a position to know, CH might be a logical consequence of second order ZF. They provide a method for showing epistemic independence and epistemic consistency, despite falling short of logical independence and consistency.
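
The three-element countermodel can be checked mechanically. A minimal sketch (coding ordered pairs à la Kuratowski, which is one standard convention; the helper names are mine):

```python
# The toy countermodel: three elements with a "in" b "in" c and no other
# membership facts, so the interpretation of membership is a two-edge relation.
domain = {'a', 'b', 'c'}
E = {('a', 'b'), ('b', 'c')}   # the non-logical interpretation of membership

def ext(x):
    """The model's extension of x: everything the model says is in x."""
    return frozenset(u for (u, v) in E if v == x)

def codes(p, members):
    """Does element p have exactly the given extension in the model?"""
    return ext(p) == frozenset(members)

# c = {b} is a set of nonempty sets (b = {a}).  A choice function for c
# would have to contain the Kuratowski pair <b, a> = {{b}, {b, a}}.
# The singleton {b} exists in the model (it is c itself) ...
assert any(codes(p, {'b'}) for p in domain)
# ... but nothing in the model has extension {b, a}, so the pair <b, a> --
# and with it any choice function for c -- is simply absent.
assert not any(codes(p, {'b', 'a'}) for p in domain)
```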

It might then be surprising to say that the second order choice principle is a logical truth. For, following the Tarskian definition of logical truth for second order languages – truth in every set model – SOL-choice is a logical truth just in case set-choice is an ordinary truth (“true as a matter of fact”.) For example, if our metatheory were ZF+AD, SOL-choice would be neither a logical truth nor a logical falsehood!

I think this is to put the cart before the horse. Once the logical constants are a part of our metalanguage, then it is possible to do model theory in such a way that the non-logical fragment doesn’t affect the definitions of validity – indeed the non-logical component can be reserved purely for the syntax (see particularly, Rayo/Uzquiano/Williamson (RUW) style model theory.) So much the worse for Tarskian model theory.

But why think that SOL-choice is a logical truth or a logical falsehood, rather than neither? I guess I have three reasons for thinking this. Firstly, SOL-choice is stateable in almost purely logical vocabulary: plural logic plus a pairing operation. While it is possible for it to fail under non-standard interpretations of the pairing function, it is enough to provide well orderings of many sets of interest: e.g. the plural theory of the real numbers gives us enough machinery for pairing, so well orderings under this encoding of pairs are possible. Secondly, a choice principle is stateable in purely logical vocabulary: if you treat the binary quantifier “there are just as many F’s as G’s” as a logical quantifier, then you can state cardinal comparability in plural logic + “there are just as many F’s as G’s” (which is certainly equivalent to choice in the ZF metatheory; I’m not sure what you need for this in the RUW setting.) I argued here that “there are just as many F’s as G’s” is a logical quantifier.

Lastly, imagine that we interpreted the second order quantifiers as ranging completely unrestrictedly over all pluralities there are. Suppose we still think that SOL-choice is not logically true or false, i.e. SOL-choice and its negation are each logically consistent in the strong sense (not just that there are refuting Henkin models, but that you can’t prove a contradiction from standard axioms.) Then there is a model in which SOL-choice is true, and a model in which it is false. But since our domain is everything, and the quantifiers in both models range over every plurality there is, the second order quantifiers in the choice-satisfying model range over a choice function, which the second order quantifiers in the choice-refuting model must have missed. This is a contradiction, because we assumed that the quantifiers ranged over every plurality there is. Basically, choice-refuting models are missing things out. If there’s a choice interpretation and a ~choice interpretation of our unrestricted plural quantifiers, the choice model’s quantifiers range over more pluralities, in which case the ~choice model wasn’t really unrestricted after all. It seems, then, that if SOL-choice is logically consistent, it is logically true! (Note: this is kind of similar to the Sider argument against relativism about mereology. If there is an interpretation of our unrestricted quantifier that includes mereological fusions, and one that doesn’t, then the latter wasn’t really unrestricted after all.)

## Help! My credences are unmeasurable!

September 29, 2008

This is a brief follow-up to the puzzle I posted a few days ago, and to Kenny's very insightful post (and the comments to it), where he answers a lot of the pressing questions to do with the probability and measurability of the various events.

What I want to do here is just note a few probabilistic principles that get violated when you have unmeasurable credences (mostly a summary of what Kenny showed in the comments), and then say a few words about the use of the axiom of choice.

Reflection. Bas van Fraassen's reflection principle states, informally, that if you are certain that your future credence in $p$ will be $x$, then your current credence in $p$ should be $x$ (ignoring situations where you're certain you'll have a cognitive mishap, and the problems to do with self-locating propositions). If $p_n$ says "I will guess the $n$'th coin toss from the end correctly", then Kenny shows, assuming translation invariance (that $Cr(p)=Cr(q)$ if $p$ can be gotten from $q$ by uniformly flipping the values of tosses indexed by a fixed set of naturals for each sequence in $q$), that once we have chosen a strategy, but before the coins are flipped, there will be an $n$ such that $Cr(p_n)$ is unmeasurable (fix such an $n$ from now on). However, given reasonable assumptions, no matter how the coins land before $n$, once you have learned that the coins have landed in such and such a way, $Cr(p_n)=1/2$. Thus you may be certain that you will have credence $1/2$ in $p_n$ even though your credence in $p_n$ is currently unmeasurable.
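In symbols, the instance of Reflection at issue (writing $\mathrm{Cr}$ for my current credence and $\mathrm{Cr}_t$ for my credence at the later time $t$, just before the $n$th toss) is:

$$\mathrm{Cr}\bigl(p_n \mid \mathrm{Cr}_t(p_n) = 1/2\bigr) = 1/2$$

whereas in the scenario above I am certain that $\mathrm{Cr}_t(p_n)$ will be $1/2$, and yet $\mathrm{Cr}(p_n)$ is unmeasurable rather than $1/2$.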

Conglomerability. This says that if you have some propositions, $S$, which are pairwise incompatible but jointly exhaust the space, then if your credence in $p$ conditional on each element of $S$ is in an interval $[a, b]$, your unconditional credence in $p$ should be in that interval. Kenny points out that conglomerability, as stated, is violated here too. The unconditional probability of $p_n$ is unmeasurable, but the conditional probability of $p_n$ on each possible outcome of the sequence up to $n$ is $1/2$. (In this case, it is perhaps best to think of the conditional credence as what your credence would be after you have learned the outcome of the sequence up to $n$.) You can generate similar puzzles in more familiar settings. For example, what should your credence be that a dart thrown at the real line will hit the Vitali set? Presumably it should be unmeasurable. However, conditional on each of the propositions $\mathbb{Q}+\alpha, \alpha \in \mathbb{R}$, which partition the reals, the probability should be zero – the probability of hitting exactly one point from countably many.
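Schematically, the form of conglomerability used here is: if $S$ is a partition and $\mathrm{Cr}(p \mid q) \in [a,b]$ for every $q \in S$, then

$$\mathrm{Cr}(p) \in [a,b].$$

In the Vitali case, take $p$ to be "the dart lands in the Vitali set" and $S = \{\mathbb{Q}+\alpha : \alpha \in \mathbb{R}\}$: each conditional credence is $0$, yet the unconditional credence is unmeasurable rather than $0$.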

The Principal Principle. This says, informally, that if you're certain that the objective chance of $p$ is $x$, then you should set your credence in $p$ to $x$ (provided you don't have any 'inadmissible' evidence concerning $p$). Intuitively, the chances of simple physical scenarios like $p_n$ shouldn't be unmeasurable. This turns out to be not so obvious. It is first worth noting that the argument that your credence in $p_n$ is unmeasurable doesn't apply to the chance of $p_n$, because there are physically possible worlds that are doxastically impossible for you (i.e. worlds where you don't follow the chosen strategy at guess $n$). Secondly, although the chance of a proposition can change over time – so it could technically be unmeasurable before any coin tosses, but $1/2$ just before the $n$th coin toss – the way that chances evolve is governed by the physics of the situation: the Schrödinger equation, or what have you. In the example we described we said nothing about the physics, but even so, it does seem like we can consistently stipulate that the chance of $p_n$ remains constant at $1/2$. In such a scenario we would have a violation of the Principal Principle – before the tosses you can be certain that the chance of $p_n$ is $1/2$, even though your credence in $p_n$ is unmeasurable. (Of course, one could just take this to mean you can't really be certain you're going to follow a given strategy in a chancy universe – some things are beyond your control.)
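The formal statement I have in mind (in roughly Lewis's formulation, with $\mathrm{Ch}$ for objective chance and $E$ for admissible evidence) is:

$$\mathrm{Cr}\bigl(p \mid \mathrm{Ch}(p) = x \wedge E\bigr) = x$$

So if the chance of $p_n$ is stipulated to sit at $1/2$ throughout, certainty in that chance claim should force a credence of $1/2$ in $p_n$, not an unmeasurable one.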

Anyway, after telling some people this puzzle, and the related hats puzzle, a lot of people seemed to think that it was the axiom of choice that’s at fault. To evaluate that claim requires a lot of care, I think.

Usually, to say the Axiom of Choice is false is to say that there are sets which cannot be well-ordered, or something equivalent. And presumably this depends on which structure accurately fits the notions of sethood and membership, whose extensions are partially determined by the linguistic practices of set theorists (much like 'arthritis' and 'beech', the extension of 'membership' cannot be primarily determined by the usage of the ordinary man on the street). After all, there are many structures that satisfy even the relatively sophisticated axioms of first-order ZF, only some of which satisfy the axiom of choice.

If it is this question that is being asked, then the answer is almost certainly: yes, the axiom of choice is true. The structure with which set theorists, and more generally mathematicians, are concerned is one in which choice is true. (It'd be interesting to do a survey, but I think it is common practice in mathematics not even to mention that you've used choice in a proof. Note that it is a different question whether mathematicians think the axiom of choice is true – I've often found, especially when they realise they're talking to a "philosophy" student, that they'll suddenly become formalists.)

But I find it very hard to see how this answer has *any* bearing on the puzzle here. What structure best fits mathematical practice seems to have no implications whatsoever for whether it is possible for an idealised agent to adopt a certain strategy. This has rather to do with the nature of possibility, not sets. What possible scenarios are concretely realisable? For example, can there be a concretely realised agent whose mental state encodes the choice function on the relevant partition of sequences? (Where a choice function here needn't be a set, but rather, quite literally, a physical arrangement of concrete objects.) Another example: imagine a world with some number of epochs. In each epoch there is some number of people, all of them wearing green shirts. Is it possible that exactly one person in each epoch wears a red shirt instead? Surely the answer is yes: whether any given person wears a red shirt is logically independent of whether the other people in their epoch do. A similar possibility can be guaranteed by Lewis's principle of recombination – it is possible to arbitrarily delete bits of worlds, so it should be possible that exactly one of these people exists in each epoch. Or suppose you have two collections of objects, A and B. Is it possible to physically arrange these objects into pairs such that either every A-thing is in one of the pairs, or every B-thing is? Provided there are possible worlds large enough to contain such big collections, it seems the answer is again yes. However, all of these modal claims correspond to some kind of choice principle.

Perhaps you'll disagree about whether all of these scenarios are metaphysically possible. For example, can there be spacetimes large enough to contain all these objects? I think there is a natural class of spacetimes that can contain arbitrarily many objects – those constructed from 'long lines' (if $\alpha$ is an ordinal, the corresponding long line is $\alpha \times [0, 1)$ under the lexicographic ordering, which behaves much like the positive reals and can be used to construct large analogues of $\mathbb{R}^4$). Another route of justification might be the principle that if a proposition is mathematically consistent, in that it is true in some mathematical structure, then that structure should have a metaphysically possible isomorph. Since Choice is certainly regarded as mathematically consistent, if not true, one might have thought that the modal principles needed to get the puzzle off the ground should hold.
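To see how the lexicographic ordering behind the long-line construction works, here is a toy sketch in Python (the class name, and the use of natural numbers as finite stand-ins for ordinal stages, are mine, purely for illustration):

```python
from functools import total_ordering

@total_ordering
class LongLinePoint:
    """A point of alpha x [0, 1): a pair (stage, t) with stage an
    ordinal index (here modelled by a natural number) and 0 <= t < 1.
    Lexicographic comparison yields a linear order that locally
    looks like the positive reals."""

    def __init__(self, stage, t):
        assert 0 <= t < 1
        self.stage, self.t = stage, t

    def __eq__(self, other):
        return (self.stage, self.t) == (other.stage, other.t)

    def __lt__(self, other):
        # Compare by stage first, then by position within the segment.
        return (self.stage, self.t) < (other.stage, other.t)

# Every stage-0 point precedes every stage-1 point, however close
# the first is to the top of its segment.
assert LongLinePoint(0, 0.9) < LongLinePoint(1, 0.1)
```

Within a single stage the order is just that of $[0,1)$, which is why each segment glues onto the next to give something locally indistinguishable from the positive reals.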