## Which things occupy the set role?

September 21, 2008

Question: out of everything there is, which of those things are sets? A standard (platonist) answer would go something like the following: just those objects that are setty – those objects that have some special metaphysical property had only by sets. No doubt being setty involves being abstract, but presumably it involves something more – unless you’re a hardcore set theoretic reductionist there are non-setty abstract objects too.

I’ve been wondering about giving a more structuralist answer to this question: there is no primitive metaphysical property of being setty; rather, the sets are just whichever things happen to fill the set role. To get a rough idea: a model of set theory is just a relation R (and by relation here I mean the things second order quantifiers range over). Thus the set role is some third order property, F, which characterises the role the sets play. Since there will certainly be several relations satisfying the set role we have a choice: we can either ramsify or supervaluate. I prefer the supervaluational route here: it is (semantically) indeterminate whether the empty set is Julius Caesar, but even so, it is supertrue that the empty set belongs to its singleton. More generally, a set theoretic statement is supertrue iff it is true for every R satisfying F, superfalse iff it is false for every such R, and so on as usual, where an admissible precisification of the membership relation is just any relation R that satisfies F.
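The supervaluational recipe can be illustrated with a deliberately tiny toy. This is only a sketch: the objects, the role-assignments and the claims below are all invented for illustration, whereas real precisifications would be full models of second order set theory.

```python
# Toy supervaluation: each "precisification" picks different objects to
# play the set roles; a claim is supertrue iff it holds on every
# precisification. All names here are invented for illustration.
from itertools import permutations

objects = ["Caesar", "o1", "o2"]

# Each precisification assigns the "empty set" and "its singleton" roles
# to two distinct objects, with the first bearing the membership
# relation R to the second.
precisifications = []
for e, s in permutations(objects, 2):
    precisifications.append({"empty": e, "singleton": s, "R": {(e, s)}})

def supervaluate(claim):
    """A claim is supertrue/superfalse iff it holds on all/no
    precisifications; otherwise it is indeterminate."""
    vals = {claim(p) for p in precisifications}
    if vals == {True}:
        return "supertrue"
    if vals == {False}:
        return "superfalse"
    return "indeterminate"

# "The empty set belongs to its singleton" holds on every precisification.
print(supervaluate(lambda p: (p["empty"], p["singleton"]) in p["R"]))  # supertrue
# "The empty set is Julius Caesar" holds on some precisifications but not all.
print(supervaluate(lambda p: p["empty"] == "Caesar"))  # indeterminate
```

The point of the toy is just the shape of the definition: truth-values are computed by quantifying over admissible precisifications, not by privileging one of them.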

[Of course there may be no relations satisfying the set role. But presumably this will only happen if there aren’t enough things. On the ontological side, I’m just imagining there only being concrete things, and the more worldly abstract objects such as properties. I’m not assuming that there are any mathematical objects, but I am assuming there is a wealth of properties including loads of modal properties and haecceities. I’m also assuming the use of full second order logic, which we can interpret in terms of plural quantification over pairs, where pairs are constructed from 0-ary properties (e.g. <*x*,*y*> = the proposition that *x* is taller than *y*.)]

Ok, all that was me setting things up the way I like to think about it. The real question I’m concerned with is: *what is the set role?*

Several obvious candidates come to mind. Perhaps the most natural is that R satisfies the axioms of second order set theory: F(R) = ZFC(R), i.e. R satisfies second order replacement and a couple of other constraints. One nice thing about this on the supervaluational approach, if you assume there are enough things, is that you can retrieve a version of indefinite extensibility: however long the ordinals stretch, there will be a precisification on which they stretch further. In general this depends on F – for some choices of F there may be a maximal precisification. Whether there is a maximal precisification when F(R) = ZFC(R) depends on how many objects there are to begin with (e.g. indefinite extensibility holds if the number of things is an accessible limit of inaccessibles.)

The problem with this view is that if the number of things is at least the second inaccessible, it will be indeterminate whether the number of sets is the first inaccessible, since there will be at least two precisifications. However, it shouldn’t be indeterminate whether the number of sets is the first inaccessible, it should be superfalse – the set theoretic hierarchy is much bigger than that! Perhaps we can tag some large cardinal property onto the end of F. For example, F(R) = “ZFC(R) & Ramsey(R)”, that is, an admissible precisification of membership is one that satisfies ZFC + there is a Ramsey cardinal/there is a supercompact cardinal/whatever… But this seems just as unsatisfactory as before – why does any one LC property in particular encode the practice of set theorists? What is more, for obvious reasons there are only countably many large cardinal properties we can define in second order logic, which gives rise to the following complaints: **(a)** we might expect the size of the set theoretic universe to be ineffable – i.e. that its cardinality is not definable by any second order formula – and **(b)** if sethood is determined by the linguistic practices of mathematicians, presumably it must meet some constraints such as definability in some finite language. Sethood must be a concept graspable by beings of finite epistemic means.

So here is the view I’m toying with: the sets must be *maximal* in the domain. The height of the set theoretic universe is not fixed by some static large cardinal property. Rather it inflates to be as big as possible with respect to the domain. This leads to some rather nice consequences, which I’ll come to in a second. But first let’s work out what it means to be maximal. Let $R \sqsubseteq S$ be the second order formula that says that R is isomorphic to an initial segment of S as a model of second order ZFC. Then just define

- $F(R) := ZFC(R) \wedge \forall S\,(ZFC(S) \rightarrow S \sqsubseteq R)$.

Here are the nice things. Firstly, this is a simple definable property. It also captures the intuition that the sets are ‘as big as they possibly could be’. But what is particularly interesting is that the height of the hierarchy is not some fixed cardinality – it varies depending on the size of the domain you start with. In particular, the height of the hierarchy varies from world to world. In a world where the number of things is the first inaccessible, the sets only go up as high as the first inaccessible, but at worlds where there are more things, the sets go higher. Add to this the following modal principle

- Necessarily, however many things there are, there’s possibly more.

and it seems like we can defend a version of indefinite extensibility. That the set theoretic hierarchy could always be extended higher can be interpreted literally, as a metaphysical possibility.

Two things to iron out. Firstly, note that it follows from some results due to Zermelo that any two maximal models are isomorphic. Thus there is at most one maximal model of ZFC up to isomorphism, so no set theoretic statements will come out indeterminate. Secondly, we need to know if there will always be a maximal model. Obviously, if there aren’t enough objects (fewer than the first inaccessible) there won’t be a maximal model, as there won’t be any models at all. However, I’m assuming there are a lot of objects. There are certainly enough from the regions of spacetime alone (which, by a forcing argument, is consistently enough objects for a model), but I’m also assuming there are lots of properties hanging about too.

More worryingly, however, there can fail to be a maximal model even when there are lots of objects. This was a possibility I, foolishly, hadn’t considered until I checked it. Let $\kappa$ be the $\omega$’th inaccessible, and suppose there are exactly $\kappa$ things. Obviously $\kappa$ is not itself inaccessible because it’s not regular, and for every inaccessible less than it there is a larger one, thus for every model of ZFC there is a bigger one, and thus no maximal one.
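Spelled out (using $\kappa_n$ for the $n$’th inaccessible, notation which is my own rather than the post’s), the counterexample runs:

```latex
\kappa := \sup_{n<\omega} \kappa_n, \qquad \kappa_n \text{ the $n$'th inaccessible.}
\]
\[
\operatorname{cf}(\kappa) = \omega, \text{ so } \kappa \text{ is singular, hence not inaccessible.}
\]
\[
\text{Any model of second order ZFC in a domain of size } \kappa
\text{ has height some inaccessible } \lambda < \kappa;
\]
\[
\text{then } \lambda \le \kappa_n < \kappa_{n+1} < \kappa \text{ for some } n,
\text{ so a strictly taller model exists, and none is maximal.}
```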

So that’s a bit of a downer. But nonetheless, even if the size of the actual world is of an unfortunate cardinality, we can regain all the set theory by going to the modal language, where we have indefinite extensibility.

There’s a bit of a further worry here as well. Even if you require that the length of the ordinals be maximal, most large cardinal axioms will still turn out indeterminate. For instance, if there’s a precisification on which the sets happen to satisfy “there exists a measurable cardinal”, then we can take Gödel’s inner model L of this precisification, and that will also be a precisification that satisfies “there does not exist a measurable cardinal”.

Although admittedly, I haven’t really paid attention to whether both models can actually satisfy the full second-order replacement schema – thinking about it further, I now think that second-order replacement will give what John Burgess called “supertransitivity” (or something like that), where every subset of a model is also in the model, which I suppose would rule out anything L-like.

But an additional worry: by applying forcing (a Levy collapse), we can show that for every model of set theory and any ordinal in that model, there is also a model of set theory in which that ordinal is countable. Again, I’m not sure how this interacts with second-order replacement, but in this case it’s hard to see how the second-order axiom will tell us which of the two is the “right model”.

by Kenny Easwaran September 22, 2008 at 4:02 am

Hi Kenny,

Thanks for the post! I would quibble a bit over your first claim, that *most* large cardinal axioms would fail on the L precisification if it were admissible. I think the example you gave is special in that it imposes constraints on the *width* as well as the height of the universe. I would say most large cardinal axioms don’t have this property, and depend only on the height, which *is* decided by maximality considerations.

But is L a model of second order ZFC? Actually it isn’t, for reasons similar to what you said – second order separation ensures that any subset of a member of the model is also in the model, which rules out L. (I’m not familiar with the term ‘supertransitive’, but it seems to me like supertransitivity is inconsistent – no set can be supertransitive as you defined it, because of Cantor’s theorem?)

Regarding your second point, the forced models you referred to won’t be models of *second order* ZFC. As I noted in the post, if there is a maximal model at all, then any two maximal models are isomorphic. What is more revealing is the following due to Zermelo: every model of second order ZFC is of the form V_k where k is inaccessible. What this shows is that once you have fixed the height of the universe, ipso facto, you have fixed the model. Once we know the height, then every single set theoretic statement is determinate.
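For reference, the Zermelo (1930) results being appealed to here can be stated as follows – these are standard statements of the quasi-categoricity theorem, not quotations from the post:

```latex
\text{If } \mathcal{M} \models \mathrm{ZFC}_2 \text{ then } \mathcal{M} \cong (V_\kappa, \in) \text{ for some inaccessible } \kappa.
\]
\[
\text{For any inaccessibles } \kappa \le \lambda, \ (V_\kappa, \in) \text{ is an initial segment of } (V_\lambda, \in).
\]
\[
\text{Hence any two models of } \mathrm{ZFC}_2 \text{ of the same height are isomorphic:}
\text{ fixing the height fixes the model.}
```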

by Andrew September 22, 2008 at 12:30 pm

Hi Andrew,

Have you thought at all about the relation between your maximality axiom and the McGee-style urelement axiom – one that says that the urelements form a set? I think of that as a sort of maximality principle – most of the domain of everything whatsoever has to be occupied by sets, otherwise there’ll be no set of all urelements.

Of course, you need unrestricted first-order quantifiers to get this going nicely. But it looks to me like your second order quantifiers need to be unrestricted to get the right result.

It seems to me that it’d be interesting to compare your discussion to the discussion of McGee (e.g. to the categoricity results he proves) – potentially illuminating in both directions.

Just a couple of questions on the very last point: even if you shift to a modal language, have you any guarantee that there are any worlds where the cardinality of objects isn’t “unfortunate”? It looks to me like there’ll be arbitrarily large unfortunate cardinalities, in which case the modal principle that just says possible cardinalities get unrestrictedly large won’t get you what you need.

Another thought. I remember Gabriel Uzquiano and Agustin Rayo had a paper where they combine McGee-style set theory with unrestricted classical extensional mereology and get trouble. The basic idea was that in each case we can read off from the theories constraints on how many objects there (unrestrictedly) are – and potentially get clashes between the constraints. In your terms (IIRC), the idea was to use the hypothesis of unrestricted classical mereology to show that the cardinality of things that exist must be “unfortunate” from the set-theoretic perspective. I can’t remember the details, however.

by Robbie September 23, 2008 at 9:06 am

Oh – just one more thing (since it came up in discussion of John Hawthorne’s paper at the Leeds ontology conference recently). How do you want to formulate the modal principle you mention: “Necessarily, however many things you have, there’re possibly more”? If we use possibilist quantification, it’s easy enough. But I found it a bit headspinning trying to find a satisfactory formulation in 2nd order QML – anyway, it seems that examining the resources you need to do it might be philosophically interesting.

E.g. I thought of:

- $\Box \forall X \Diamond \exists y\, \neg Xy$

The key move here is to quantify into the scope of the possibility operator with X. But can you really do this if e.g. you want to leave it open that the small world and the big world might have disjoint domains? There are also just basic questions about whether the values of second order variables are “rigid” in the right way to make sure we get the intended interpretation. So anyway, I was just wondering whether you had a particular formulation, and interpretation of the resources deployed in that formulation, in mind.

by Robbie September 23, 2008 at 9:19 am

Hi Robbie,

I had to formulate the claim before for something else I was doing, actually. I essentially came up with what you just wrote, and I remember being worried about similar things. (Although I wasn’t so bothered about rigidity for the second order variables – I was thinking of them in terms of plural quantification.)

The first worry is that the formula might be true in a model where every world has the same sized domain, but all the domains are disjoint so that X has cardinality 0 at every world except one.

One fix is to introduce a second order existence predicate and conjoin the statement that all of X’s members exist into the scope of the diamond. But then the formula comes out false in a model where there are countably many worlds with domains of cardinality 1, 2, 3, …, but where the domains are pairwise disjoint. Intuitively we want it to come out true in such a model.

Obviously adopting a fixed domain semantics would get rid of the disjoint worlds problem, but would be devastating for our purposes.

I think the most satisfactory way is to drop the serious actualist semantics implicit in all of the above. Treat “bigger than” as a primitive, and allow it to hold between two second order variables at a world even if their referents aren’t subsets of the world we’re evaluating at. I.e. allow the extension of ‘bigger than’ at a world to be constructed out of arbitrary objects rather than objects that exist at the world – just as you would for non-serious Kripke semantics for first order QML.
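The contrast between the two readings can be mimicked in a finite toy model. This is only a sketch under invented assumptions: three worlds with pairwise disjoint domains stand in for the infinite chain of worlds described above, and cardinality comparison of finite sets stands in for the primitive “bigger than”.

```python
# Toy variable-domain Kripke model with pairwise disjoint domains,
# illustrating the disjoint-domains problem discussed above.
# World names and domain elements are invented for illustration.
worlds = {
    "w1": frozenset({"a"}),
    "w2": frozenset({"b1", "b2"}),
    "w3": frozenset({"c1", "c2", "c3"}),
}

def serious_possibly_more(w):
    """'Possibly there are more things than X' read serious-actualistically:
    some world's domain must contain all of X and something besides."""
    X = worlds[w]
    return any(X <= D and len(D) > len(X) for D in worlds.values())

def cardinal_possibly_more(w):
    """The same claim with a primitive 'bigger than' between pluralities:
    some world's domain simply outnumbers X; no containment is required."""
    X = worlds[w]
    return any(len(D) > len(X) for D in worlds.values())

# With disjoint domains the serious-actualist reading fails everywhere,
# while the 'bigger than' reading succeeds at every non-maximal world.
print([serious_possibly_more(w) for w in worlds])   # [False, False, False]
print([cardinal_possibly_more(w) for w in worlds])  # [True, True, False]
```

The top world only fails the cardinality reading because the toy model is finite; in the intended models there is no biggest world, which is exactly what the modal principle demands.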

by Andrew September 23, 2008 at 9:46 am

Oops, I managed to miss your first post when I wrote the above.

The McGee stuff has been in the back of my mind actually. But I’m not sure how well McGee-style ZFCU mixes with my conception of sets. For me, there isn’t really a fundamental difference between sets and urelements. You start off with the “urelements”, i.e. non-setty objects, and you supervaluate over the models of set theory you can construct out of them. No single model from an isomorphism class is privileged. McGee-style ZFCU fits better with the platonistic ‘setty’ conception of sets I described in the post, where there’s a fundamental ontological difference between sets and non-sets. I suppose for a given collection of objects, X, it makes sense on this view to talk about the set theory of X – supervaluate over maximal ZFCU models that take X as the urelements – but sometimes, e.g. if X is everything, this won’t generate a set theory.

BTW, one way the current view differs from McGee’s ZFCU is that I think it is possible that there are classes of non-sets which aren’t set sized, whereas the urelement axiom ensures this is impossible for McGee.

Regarding categoricity, I’m not sure if I do need unrestricted first order or second order quantification (although obviously I do need full second order quantification.) More precisely, I think the following is true: given the size of everything there is, there is, up to isomorphism, at most one maximal model of second order ZFC. So it’s categorical with respect to a given size of everything. However, I was thinking of my models as collections of ordinary Tarskian ZFC models indexed to possible worlds – this is to make sense of the modal claims. With this you can do better, I think: call a model of the kind just described *plenitudinous* iff every possible size is the size of the domain of some world in that model. The maximal ZFC condition is categorical over the plenitudinous models, modulo which world is actual (depending on whether you include the actual world as part of the definition of the model.)

Does it matter if there are arbitrarily large possible domains of unfortunate cardinalities? I can always restrict the modal operators to worlds that have maximal models of ZFC by adding a conditional (we had to do this anyway to exclude worlds that are too small.)

I really liked the Rayo-Uzquiano paper. Actually this may be a point in favour of this view over McGee’s! The problem for McGee was that the size of everything must be inaccessible, given the urelement axiom. But if classical mereology is true the size of everything must be 2^k for some k – that’s a straightforward contradiction, since whenever k is less than an inaccessible, 2^k is less than it too. However, on the current view the size of the universe is unconstrained (because of the “ordinary objects first, sets second” ethos.) Even better – I think classical mereology will ensure that the size of the universe won’t be “unfortunate” – I think that if there are enough atoms (at worst the first inaccessible, but it’s consistent that you only need aleph_0) then there is guaranteed to be a maximal model (I need to check this when I get time though.)
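The arithmetic behind the clash, as I reconstruct it, uses only the strong-limit half of inaccessibility:

```latex
\lambda \text{ inaccessible} \implies \lambda \text{ is a strong limit: } \forall k < \lambda,\ 2^k < \lambda.
\]
\[
\text{So if } |U| = \lambda \text{ (urelement axiom) and } |U| = 2^k \text{ (classical mereology), then } k < \lambda,
\]
\[
\text{whence } 2^k < \lambda = 2^k \text{ – a contradiction. No inaccessible is of the form } 2^k.
```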

by Andrew September 24, 2008 at 12:18 pm

“(I need to check this when I get time though.)”

Sorry, I got very confused :-(. That last sentence was nonsense.

by Andrew September 24, 2008 at 1:46 pm