Is ZFC Arithmetically Sound?

February 12, 2010

I recently stumbled across this fascinating discussion on FOM. The question at stake: why should we believe that ZFC doesn’t prove any false statements about numbers? That is, while of course we should believe ZFC is consistent and $\omega$-consistent, that is no reason to expect it not to prove false things: perhaps even false things about numbers that we could, in some sense, verify.

Of course – the “in some sense” is important, as Harvey Friedman stressed in one of the later posts. After all, ZFC can prove everything PA can, so whatever the false consequences of ZFC are, we couldn’t prove them from PA. There were a number of interesting suggestions. For example, ZFC might prove the negation of something we have lots of evidence for (e.g. something like Goldbach’s conjecture, where we have verified lots of its instances – except that, unlike GC, it can’t be $\Pi^0_1$: the negation of a true $\Pi^0_1$ sentence is a false $\Sigma^0_1$ sentence, and $\omega$-consistency, which we’re granting, already rules out ZFC proving false $\Sigma^0_1$ sentences.) Or perhaps it would prove that some Turing machine halts, when in fact it never would if we were to build it. There’s a clear sense in which it’s false that the TM halts, even though we couldn’t verify this conclusively.

Anyway, while reading all this I became quite a lot less sure about some things I used to be pretty certain about. In particular, a view I had never thought worth serious consideration: the view that there isn’t a determinate notion of being ‘arithmetically sound’. Or more transparently, the view that there’s no such thing as *the* standard model of arithmetic, i.e. there are lots of equally good candidate structures for the natural numbers, and that there’s no determinate notion of true and false for arithmetical statements. Now that I have given it fair consideration I’m actually beginning to be swayed by it. (Note: this is not to say I don’t think there’s a matter of fact about statements concerning certain physical things like the ordering of yearly events in time, or whether a physical Turing machine will eventually halt. It’s just that I think this could turn out to be contingent. It’ll depend, I’m guessing, on the structure of time and the structure of space in which the machine tape is embedded. Thus, on this view, arithmetic is like geometry – there is no determinate notion of true-for-geometry, but there is a determinate notion of true of the geometry of our spacetime, which actually turns out to be a weird geometry.)

Something that would greatly increase my credence in this view would be if we could find a pair of “mysterious axioms”, (MA1) and (MA2), which had the following properties. (a) They are like the continuum hypothesis, (CH), in that they are independent of our currently accepted set theory, say ZFC plus large cardinals, and, like (CH), it is unclear how things would have to be for them to be true or false. (b) Unlike (CH) and its negation, (MA1) and (MA2) disagree about some arithmetical statement.

Let me first say a bit more about (a). On some days of the week I doubt there are any sets, or that there are as many things as there would need to be for there to be sets. However I believe in plural quantification, and believe that if there *were* enough things, then we could generate models for ZFC just by considering pluralities of ordered pairs. But even given all that I don’t think I know what things would have to be like for (CH) to be true. If there is a plurality of ordered pairs that satisfies ZF(C), then there is one that satisfies ZFC+CH, namely Gödel’s constructible universe, and also one that doesn’t satisfy CH. So even given that we have all these objects, it is not clear which relation should represent membership between them. I can only think of two reasons to think there is a preferred relation: (1) there is a perfectly natural relation, membership, between these objects which somehow set theorists are able to latch onto and intuit things about from their armchair, or (2) there is only one such relation (up to isomorphism anyway) compatible with the linguistic practices of set theorists. Neither of these seems particularly plausible to me.

Now let me say a bit about (b). Note firstly that Con(ZFC) is an arithmetical statement independent of ZFC. However, it is not like (CH), in that we have good reason to believe its negation is false. And more to the point, its negation is inconsistent with there being any inaccessibles. (MA1) and (MA2) are going to have to be subtler than that.

It is also instructive to consider the following argument that ZFC *is* arithmetically sound. Suppose it’s determinate that there’s an inaccessible (a reasonable assumption, if we grant there are enough things, and that the truth of these claims is partially fixed by the practices of set theorists). Let $\kappa$ be the least one. Then $V_\kappa$ is a model of ZFC which models every true arithmetical statement (because the natural numbers are an initial segment of $\kappa$ [edit: and arithmetical statements are absolute].) So ZFC cannot prove any false arithmetical statement. That is, determinately, ZFC is arithmetically sound. And all we’ve assumed is that it’s determinate that there’s an inaccessible.
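The two steps of the argument can be displayed compactly (a sketch; $\mathbb{N}$ here is the $\omega$ of $V_\kappa$):

```latex
% Let \varphi be any arithmetical sentence (quantifiers restricted to \omega).
% Step 1 (soundness): V_\kappa \models \mathrm{ZFC}, so
%     \mathrm{ZFC} \vdash \varphi \text{ implies } V_\kappa \models \varphi.
% Step 2 (absoluteness): \omega and its arithmetic come out the same whether
% computed in V_\kappa or in V, so
%     V_\kappa \models \varphi \iff \mathbb{N} \models \varphi.
% Chaining the two:
\mathrm{ZFC} \vdash \varphi
  \;\Longrightarrow\; V_\kappa \models \varphi
  \;\Longrightarrow\; \mathbb{N} \models \varphi .
```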

Now I find this argument convincing. But clearly it doesn’t prove that every arithmetical statement is determinate. All it shows is that arithmetic is determinate if ZFC is. But (CH) has already brought the antecedent into doubt! So although $V_\kappa$ determinately decides every arithmetical statement correctly, it can still be indeterminate what $V_\kappa$ makes true. That is, (MA1) and (MA2) would disagree not only over some arithmetical statement, but also over whether $V_\kappa$ makes that statement true.

Now maybe there isn’t anything like (MA1/2). Maybe we will always be able to find a clear reason to accept or reject any set theoretic statement that has consequences for arithmetic. But I see absolutely no good reason to think there won’t be anything like (MA1/2). To make it more vivid: there are these really weird results of Harvey Friedman’s showing that simple combinatorial principles about numbers are intimately tied to immensely strong large cardinal axioms. While these simple principles about numbers look determinate, they imply highly sophisticated principles that are independent of ZFC. I see no reason why someone might not find a simple number theoretic principle that implies a continuum-hypothesis-like statement. And in the absence of face value platonism – *a lot* of objects, and a uniquely preferred (perhaps natural) membership relation between them – it is hard to see how such statements could be determinate.

Which things occupy the set role?

September 21, 2008

Question: out of everything there is, which of those things are sets? A standard (platonist) answer would go something like the following: just those objects that are setty – those objects that have some special metaphysical property had only by sets. No doubt being setty involves being abstract, but presumably it involves something more – unless you’re a hardcore set theoretic reductionist there are non-setty abstract objects too.

I’ve been wondering about giving a more structuralist answer to this question: there is no primitive metaphysical property of being setty; rather, the sets are just whichever things happen to fill the set role. To get a rough idea, a model of set theory is just a relation R (and by ‘relation’ here I mean the kind of thing second order quantifiers range over). Thus the set role is some third order property, F, which characterises the role the sets play. Since there will certainly be several relations satisfying the set role, we have a choice: we can either ramsify or supervaluate. I prefer the supervaluational route here: it is (semantically) indeterminate whether the empty set is Julius Caesar, but even so, it is supertrue that the empty set belongs to its singleton. More generally, a set theoretic statement $\phi(\in)$ is supertrue iff $\phi(R)$ is true for every R satisfying F, superfalse iff … and so on as usual, where an admissible precisification of the membership relation is just any relation R that satisfies F.
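Spelled out in the standard supervaluationist way (this is just the usual definition, with the set role F playing the part of the precisification constraint):

```latex
\begin{aligned}
\phi(\in)\ \text{is supertrue} \quad &\text{iff} \quad
    \forall R\,\bigl(F(R) \rightarrow \phi(R)\bigr)\\
\phi(\in)\ \text{is superfalse} \quad &\text{iff} \quad
    \forall R\,\bigl(F(R) \rightarrow \neg\phi(R)\bigr)\\
\phi(\in)\ \text{is indeterminate} \quad &\text{iff} \quad
    \exists R\,\bigl(F(R) \wedge \phi(R)\bigr)
    \;\wedge\; \exists R\,\bigl(F(R) \wedge \neg\phi(R)\bigr)
\end{aligned}
```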

[Of course there may be no relations satisfying the set role. But presumably this will only happen if there aren’t enough things. On the ontological side, I’m just imagining there only being concrete things, and the more worldly abstract objects such as properties. I’m not assuming that there are any mathematical objects, but I am assuming there is a wealth of properties including loads of modal properties and haecceities. I’m also assuming the use of full second order logic, which we can interpret in terms of plural quantification over pairs, where pairs are constructed from 0-ary properties (e.g. <x,y> = the proposition that x is taller than y.)]

Ok, all that was me setting things up the way I like to think about it. The real question I’m concerned with is: what is the set role?

Several obvious candidates come to mind. Perhaps the most natural: R occupies the set role iff it satisfies the axioms of second order set theory, F(R) = ZFC(R), i.e. R satisfies second order Replacement and a couple of other constraints. One nice thing about this on the supervaluational approach, if you assume there are enough things, is that you can retrieve a version of indefinite extensibility: however long the ordinals stretch, there will be a precisification on which they stretch further. In general this depends on F: for some choices of F there may be a maximal precisification. Whether there is a maximal precisification when F(R) = ZFC(R) depends on how many objects there are to begin with (e.g. indefinite extensibility holds if the number of things is an accessible limit of inaccessibles.)

The problem with this view is that if the number of things is at least the second inaccessible, it will be indeterminate whether the number of sets is the first inaccessible, since there will be at least two precisifications. However, it shouldn’t be indeterminate whether the number of sets is the first inaccessible, it should be superfalse – the set theoretic hierarchy is much bigger than that! Perhaps we can tack some large cardinal property onto the end of F. For example, F(R) = “ZFC(R) & Ramsey(R)”, that is, an admissible precisification of membership is one that satisfies ZFC + there is a Ramsey cardinal/there is a supercompact cardinal/whatever… But this seems just as unsatisfactory as before – why should any one large cardinal property in particular encode the practice of set theorists? What is more, for obvious reasons there are only countably many large cardinal properties we can define in second order logic, which gives rise to the following complaints: (a) we might expect the size of the set theoretic universe to be ineffable – i.e. that its cardinality is not definable by any second order formula – and (b) if sethood is determined by the linguistic practices of mathematicians, presumably it must meet some constraints, such as definability in some finite language. Sethood must be a concept graspable by beings of finite epistemic means.

So here is the view I’m toying with: the sets must be maximal in the domain. The height of the set theoretic universe is not fixed by some static large cardinal property. Rather it inflates to be as big as possible with respect to the domain. This leads to some rather nice consequences, which I’ll come to in a second. But first let’s work out what it means to be maximal. Let $R \leq S$ be the second order formula that says that R is isomorphic to an initial segment of S as a model of second order ZFC. Then just define

• $F(R) := ZFC(R) \wedge \forall S(R \leq S \rightarrow S \leq R)$.
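For concreteness, here is one way to unpack $R \leq S$ in purely second order terms (a sketch, and only one of several equivalent formulations; $\leq$ is understood to hold only between models of second order ZFC):

```latex
% R \leq S iff some (second-order) map f embeds the field of R into the
% field of S so that
%   (i)  x\,R\,y \leftrightarrow f(x)\,S\,f(y)   (f preserves membership), and
%   (ii) the image of f is downward closed under S: whenever y\,S\,f(x),
%        y = f(z) for some z in the field of R.
% With \leq so defined, maximality says no model of second-order ZFC
% properly end-extends R:
F(R) \;:=\; \mathrm{ZFC}(R) \,\wedge\, \forall S\,\bigl(R \leq S \rightarrow S \leq R\bigr).
```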

Here are the nice things. Firstly, this is a simple, definable property. It also captures the intuition that the sets are ‘as big as they possibly could be’. But what is particularly interesting is that the height of the hierarchy is not some fixed cardinality – it varies depending on the size of the domain you start with. In particular, the height of the hierarchy varies from world to world. In a world where the number of things is the first inaccessible, the sets only go up as high as the first inaccessible, but at worlds where there are more things, the sets go higher. Add to this the following modal principle

• Necessarily, however many things there are, there’s possibly more.

and it seems like we can defend a version of indefinite extensibility. That the set theoretic hierarchy could always be extended higher can be interpreted literally, as a claim about metaphysical possibility.

Two things to iron out. Firstly, note that it follows from some results due to Zermelo that any two maximal models are isomorphic. Thus there is at most one maximal model of ZFC up to isomorphism, so no set theoretic statements will come out indeterminate. Secondly, we need to know whether there will always be a maximal model. Obviously, if there aren’t enough objects (fewer than the first inaccessible) there won’t be a maximal model, since there won’t be any models at all. However, I’m assuming there are a lot of objects. Certainly $2^{2^{\aleph_0}}$ from the regions of spacetime alone (which, by a forcing argument, is consistently enough objects for a model), but I’m also assuming there are lots of properties hanging about too.
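The Zermelo facts being appealed to can be stated as follows (Zermelo’s 1930 quasi-categoricity theorem, in the notation above):

```latex
% Models of second-order ZFC are, up to isomorphism, exactly the V_\kappa
% with \kappa inaccessible, and any two are comparable by end-extension:
\mathrm{ZFC}(R) \wedge \mathrm{ZFC}(S)
  \;\Longrightarrow\; R \leq S \,\vee\, S \leq R .
% So if R and S are both maximal we get R \leq S and S \leq R, and two
% models of second-order ZFC that each end-extend the other are isomorphic.
```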

More worryingly, however, there can fail to be a maximal model even when there are lots of objects. This was a possibility I, foolishly, hadn’t considered until I checked it. Suppose the number of things is $\kappa_\omega$, the limit of the first $\omega$ inaccessibles. Obviously $\kappa_\omega$ is not itself inaccessible, because it’s not regular (it has cofinality $\omega$), and for every inaccessible less than it there is a larger one still below it. Thus for every model of ZFC the domain supports there is a bigger one, and so no maximal one.
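In symbols, the counterexample runs like this (writing $\kappa_0 < \kappa_1 < \dots$ for the inaccessibles below $\kappa_\omega$):

```latex
% Suppose the domain has exactly \kappa_\omega = \sup_{n<\omega} \kappa_n
% many objects. Then:
%   (1) cf(\kappa_\omega) = \omega, so \kappa_\omega is singular, hence not
%       inaccessible; no model of second-order ZFC has height \kappa_\omega.
%   (2) The models the domain supports are (up to isomorphism) the
%       V_{\kappa_n}, and V_{\kappa_n} \lneq V_{\kappa_{n+1}} for every n.
% The models form a strictly increasing \omega-chain with no greatest
% element, so no maximal precisification exists.
```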

So that’s a bit of a downer. But nonetheless, even if the actual world is of an unfortunate cardinality, we can regain all the set theory by going to the modal language, where we have indefinite extensibility.