Archive for the ‘Philosophical Logic’ Category


Cardinality and the intuitive notion of size

January 1, 2009

According to mathematicians, two sets have the same size iff they can be put in one-one correspondence with one another. Call this Cantor’s principle:

  • CP: X and Y have the same size iff there is a bijection \sigma:X\rightarrow Y

Replace ‘size’ by ‘cardinality’ in the above and it looks like we have a definition: an analytic truth. As it stands, however, CP seems to be a conceptual analysis – or at the very least an extensionally equivalent characterisation. In what follows I shall call the pretheoretic notion ‘size’ and the technical notion ‘cardinality’. CP thus states that two sets have the same size iff they have the same cardinality.

Taken as a conceptual analysis of size, as we ordinarily understand it, CP is often objected to. For example, according to this definition the natural numbers are the same size as the even numbers, the same size as the square numbers, and the same size as many sets sparser still. This is an objection to the right-to-left direction of CP.

I’m not inclined to give these intuitions too much weight. In fact, I think the intuitive principles behind these judgements are inconsistent. Here are two principles that seem to be at work: (i) if X is a proper subset of Y then X is smaller than Y; (ii) if by uniformly shifting X you get Y, then X and Y have the same size. For example, (i) is appealed to when it’s argued that the set of evens is smaller than the set of naturals, and (ii) is appealed to when people argue that the evens and the odds have the same size. Furthermore, both principles are solid when we are dealing with finite sets. However, (i) and (ii) are clearly inconsistent. If the evens and the odds have the same size, so do the odds and the evens \setminus \{2\}. This is just an application of (ii): intuitively, the evens \setminus \{2\} stand in exactly the same relation to the odds as the odds stand to the evens. By transitivity, the evens and the evens \setminus \{2\} are the same size – but this contradicts (i), since one is a proper subset of the other.
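Spelled out (taking the evens to start at 2 and the odds at 1), the clash can be exhibited as a short derivation:

  • O = \{1, 3, 5, \ldots\}, \quad E = \{2, 4, 6, \ldots\}
  • O + 1 = \{2, 4, 6, \ldots\} = E – so O and E have the same size, by (ii)
  • O + 3 = \{4, 6, 8, \ldots\} = E \setminus \{2\} – so O and E \setminus \{2\} have the same size, by (ii)
  • By transitivity, E and E \setminus \{2\} have the same size – contradicting (i), since E \setminus \{2\} \subsetneq E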

In fact Gödel gave a very convincing argument for the right-to-left direction: (a) changing the properties of the elements of a set does not change its size, (b) two sets which are completely indistinguishable have the same size, and (c) if \sigma:X \rightarrow Y, each x \in X can morph its properties so that x and \sigma(x) are indistinguishable. Thus, if \sigma is a bijection, X can be transformed in such a way that it is indiscernible from Y, and so must have the same size. (Kenny has a good discussion of this at Antimeta.)

The direction of CP I think there is a genuine challenge to is the left-to-right. And without it, we cannot prove that there is more than one infinite size! (That is, the claim that every infinite set has the same size is consistent with the right-to-left direction of CP alone.)

What I want to do here is justify the left-to-right direction of CP. The basic idea has to do with logical indiscernibility. If two sets have the same size, I claim, they should be logically indiscernible in the following sense: any logical property had by one is had by the other. Characterising the logical properties as the permutation-invariant ones, we can see that if two sets have the same cardinality, then they are logically indiscernible. Since we accept the inference from having the same cardinality to having the same size, this partially confirms our claim.

But what about the full claim? If two sets have the same size, how could they be distinguished logically? There would have to be some logically relevant feature of one set which distinguished it from the other, but which had nothing to do with size. But what could that possibly be? Surely size tells us everything we can know about a set without looking at the particular characteristics of its elements (i.e. its non-logical properties). If there is any natural notion of size at all, it must surely involve logical indiscernibility.

The interesting thing is that if we have the principle that sameness in size entails logical indiscernibility, we get CP in full. The logical properties over the first layer of sets of the urelemente are just those collections invariant under all permutations of the urelemente. Such invariant collections are just unions of collections of sets of the same size. Thus logically indiscernible sets are just sets with the same cardinality!
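To make this concrete, here is a toy verification (a minimal sketch of my own, over a three-element domain) that the permutation-invariant properties of sets are precisely the unions of cardinality classes:

    from itertools import combinations, permutations

    D = (0, 1, 2)
    subsets = [frozenset(c) for r in range(len(D) + 1) for c in combinations(D, r)]
    perms = [dict(zip(D, p)) for p in permutations(D)]

    def invariant(P):
        # P (a set of subsets of D) is fixed by every permutation of the domain
        return all({frozenset(p[x] for x in X) for X in P} == P for p in perms)

    def union_of_size_classes(P):
        # P collects exactly the subsets whose size lies in some fixed set of sizes
        sizes = {len(X) for X in P}
        return P == {X for X in subsets if len(X) in sizes}

    # Check the two notions coincide for every property (set of subsets) of D:
    properties = [set(c) for r in range(len(subsets) + 1)
                  for c in combinations(subsets, r)]
    assert all(invariant(P) == union_of_size_classes(P) for P in properties)
    print("permutation invariance = sameness of cardinality, over all",
          len(properties), "properties")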

Ignore sets for a moment. The usual setting for permutation-invariance tests is the quantifiers, and a variant of the above argument can be given there. This time we assume that size quantifiers are maximally specific logical quantifiers – writing S for a size quantifier, there are two ways of spelling this out, both of which will do:

  • For every logical quantifier, Q, Sx\phi \models Qx\phi or Sx\phi \models \neg Qx\phi
  • For every logical quantifier, Q, if Qx\phi \models Sx\phi then Qx\phi \equiv Sx\phi

The justification is exactly the same as before: the size of the \phi‘s tells us everything we can possibly know about the \phi‘s without looking at the particular characteristics of the individual \phi‘s – without looking at their non-logical properties. Since the cardinality quantifiers have this property too, we can show that every size quantifier is logically equivalent to some cardinality quantifier and vice versa.

I take this to be a strong reason to think that cardinality is the only natural notion of size on sets. That said, there’s still the possibility that the ordinary notion of size is simply underdetermined when it comes to infinite sets. Perhaps our linguistic practices do not determine a unique extension for expressions like ‘X is the same size as Y’ for certain X and Y. One thing to note is that the indeterminacy view seems to be motivated by our wavering intuitions about sizes. But as we saw earlier, a lot of these intuitions turn out to be inconsistent, so there won’t even exist precisifications of ‘size’ corresponding to these intuitions. On the other hand, if we are to think of the size of a set as the most specific thing we can say about that set, without appealing to the particular properties of its members, then there is a reason to think this uniquely picks out the cardinality precisification.


The Sorites paradox and non-standard models of arithmetic

December 16, 2008

A standard Sorites paradox might run as follows:

  • 1 is small.
  • For every n, if n is small then n+1 is small.
  • There are non-small numbers.

On the face of it, these three principles are inconsistent, since the first two premisses entail, by the principle of induction, that every number is small. As far as I know, there is no theory of vagueness on which all three sentences come out true (and none of them false). Nonetheless, it would be desirable if these sentences could be jointly satisfied.

The principle of induction seems to do fine when we are dealing with precise concepts. Thus the induction schema of PA is fine, since it asserts induction only for properties definable in arithmetical vocabulary – all of which is precise. However, if we read the induction schema as open ended – that is, as holding even if we were to extend the language with new vocabulary – it is false, for it fails when we introduce vague predicates into the language.

The induction schema is usually proved by appealing to the fact that the naturals are well-ordered: every non-empty subset of the naturals has a least element. If the induction schema fails once we allow vague sets, so should the well-ordering principle. And that seems right: the set of large numbers doesn’t appear to have a least element – there is no first large number. So we have:

  • The set of large numbers has no smallest member.

Again, no theory I know of delivers this verdict. The best we get is with non-classical logics, where it is at best vague whether there exists a least element of the set of large numbers.

Finally, I think we should also hold the following:

  • For any particular number, n, you cannot assert that n is large.

That is, to assert of a given number, n, that it is large is to invite the Sorites paradox. You may assert that there exist large numbers; it’s just that you can’t say exactly which they are. To assert that n is large is to commit yourself to an inconsistency by standard Sorites reasoning, from n true conditionals and the fact that 0 is not large.

The proposal I want to consider verifies all three of the bulleted points above. As it turns out, given a background of PA, the initial trio isn’t inconsistent after all: it’s merely \omega-inconsistent (given we’re not assuming open-ended induction). But this doesn’t strike me as a bad thing in the context of vagueness. After all, you can go through each of the natural numbers and convince me it’s not large by Sorites reasoning, but that shouldn’t shake my belief that there are large numbers.

Such \omega-inconsistent theories are nonetheless formally consistent with the PA axioms, and thus have models by Gödel’s completeness theorem. These are called non-standard models of arithmetic. They have all the precise sets of naturals the ordinary natural numbers have, but they admit more subsets of the naturals besides – vague sets of natural numbers as well as the old precise sets. Intuitively this is right – when we only had precise sets we got into all sorts of trouble. We couldn’t even talk about the set of large numbers, because it didn’t exist; it was a vague set.

What is interesting is that some of these new sets of natural numbers don’t have smallest members. In fact, the set of all non-standard elements is one of these sets, but there are many others. So my suggestion here is that the set of large numbers is one of these non-standard sets of naturals.

Finally, we don’t want to be able to assert that n is large, for any given n, since that would lead us to a contradiction (via a long series of conditionals). The idea is that we may assert that there are large numbers out there; we just cannot say which ones. On first glance this might seem incoherent; however, it is just another case of \omega-inconsistency: \{\neg Ln \mid n a numeral \} \cup \{\exists x Lx\} is formally consistent. For example, it is satisfied in any non-standard model of PA with L interpreted as the set of non-standard elements, as the toy sketch below illustrates.
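Here is that sketch (my own; it captures the order structure only – by Tennenbaum’s theorem no countable non-standard model of PA has computable addition or multiplication, so the arithmetic itself can’t be exhibited). The elements are the standard naturals followed by \mathbb{Z}-blocks indexed densely by the rationals:

    from fractions import Fraction

    # Elements: ("std", n) for the standard naturals; ("blk", q, k) for the
    # k-th element of the Z-block indexed by the rational q.  The standard
    # part comes first; the Z-blocks are densely ordered by q.
    def key(e):
        return (0, e[1], 0) if e[0] == "std" else (1, e[1], e[2])

    def less(a, b):
        return key(a) < key(b)

    def large(e):
        return e[0] == "blk"        # interpret L as "is non-standard"

    # Each numeral names a standard element, so \neg Ln holds for every numeral n...
    assert all(not large(("std", n)) for n in range(10000))
    # ...yet \exists x Lx is true:
    w = ("blk", Fraction(0), 0)
    assert large(w)
    # ...and L has no least member: one step down inside a block stays large.
    below = ("blk", w[1], w[2] - 1)
    assert less(below, w) and large(below)

Every numeral denotes a standard element, so each instance \neg Ln comes out true, while the block elements witness \exists x Lx; and since every block element has another block element below it, L has no smallest member – just as the second bullet above demands.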

How to make sense of all this? Well, the first thing to bear in mind is that the non-standard models of arithmetic are not to be taken too seriously. They show that the view in question is consistent, and they are also a good guide to which sentences are in fact true. For example, in a non-standard model the second-order universally quantified induction axiom is false, since the second-order quantifiers range over vague sets; however, the induction schema is true, provided it only allows instances definable in the language of arithmetic (this is how the schema is usually stated), since those instances define only precise sets.

We should not think of the non-standard models as accurate guides to reality, however, since they are constructed from purely precise sets, of the kind ZFC deals with. For example, the set of non-standard elements is a precise set being used to model a vague set. Furthermore, the non-standard models are described as having an initial segment consisting of the “real” natural numbers, with a block of non-standard naturals coming after them. The intended model of our theory shouldn’t have these extra elements: it should have the same numbers, just with more sets of numbers – vague ones as well as precise ones.

Another question is: which non-standard model makes the right (second-order) sentences true? Since there are only countably many naturals, we can add a second-order sentence stating this to our theory (we’d have to check it still means the same thing once the quantifiers range over vague sets as well). This would force the model to be countable. Call T the theory consisting of the first-order sentences true in the standard model, plus the second-order sentence saying the universe is countable, plus the statements: (i) 1 is small, (ii) for every n, if n is small then n+1 is small, and (iii) there are non-small numbers. T is still consistent (by the Löwenheim-Skolem theorem), and I think this pins down the order structure of our model as \mathbb{N} + \mathbb{Q}\cdot\mathbb{Z} – the standard numbers followed by \mathbb{Z}-blocks densely ordered like \mathbb{Q} – since, by a result going back to Skolem, every countable non-standard model of PA is ordered this way (though the order type alone does not determine the model up to isomorphism). This only gives us the interpretation of the second-order quantifiers and the arithmetic vocabulary; obviously it won’t tell us how to interpret the vague vocabulary.


Composition as identity, part II

December 11, 2008

Aside from Leibniz’s law, there are various other constraints identity must obey. For example, every object is identical with at most one thing, so every object presumably is identical* to at most one plurality. But here we have a disanalogy with the relation “x is the fusion of the yy’s”, for x is the fusion of many pluralities. If there may be more than one plurality *identical to x, then our notation for pluralizing x, x*, isn’t justified: * isn’t a function.

A way of sharpening this problem, pointed out by Jeff in the comments, is that you’d want (*)identity(*) to be transitive. For example, my upper and lower body are *identical to me, and I’m identical* to my arms, legs, head and torso. Does that mean my lower and upper body are *identical* to my arms, legs, head and torso? How do you state that they’re different pluralities?

So there appear to be two ways you can go. One is to say that every object is really identical* to only one plurality, and the other is to say that every plurality is *identical to exactly one object. I’ll call the two approaches (a) unique decomposition, and (b) unique composition.

The first seems to be the most natural. By way of analogy, note that the pluralities over a domain of objects have many of the formal properties of a mereology: (i) there’s no null fusion/no empty plurality; (ii) pluralities are closed under ‘unioning’ and ‘non-empty intersecting’ (fusion and products); (iii) they’re closed under complements (supplementation).

In fact, they form a complete Boolean algebra (minus its bottom element) under ‘subplurality’, and thus model the standard mereological axioms. However, there are some drawbacks. Firstly, in standard mereological theories you can form pluralities of mereological objects (that’s how unrestricted composition is usually stated). However, on this picture you can’t, for it would amount to forming pluralities of pluralities – which is nonsense.

You might think this is not too much of a cost; after all, you can always talk about superpluralities when the standard mereologist talks of pluralities.

So this seems to answer our original problem, which was to ensure that many-one identity really associated each object with one plurality. What we have is unique decomposition: there is a unique plurality associated with each object, and that plurality fuses to that object (is *identical to it.) The way we have achieved unique decomposition in this case is by identifying xx with x’s atoms.

There may be other ways to achieve unique decomposition, but it seems they’ll all fall to the following problem. There are some situations where unique decomposition can’t be achieved, at least according to the standard mereologist. One of these is the possibility of a gunky world: a world where everything has a proper part. Formally, we have an atomless Boolean algebra. But if the points in our algebra are pluralities, what are they pluralities of? There cannot be any singleton pluralities, and if there can’t be singleton pluralities, there can’t be objects for there to be pluralities of.

[Side note: by Stone’s representation theorem, any gunky world a standard mereologist can conjure up may be represented by a ‘Henkin’ model of plural logic. Thus, you may feel like you’re in a gunky world – but only because your plural quantifiers are restricted. You’re failing to quantify over all pluralities (in particular, the singletons.)]

The other approach was what I labelled unique composition: every plurality is *identical to exactly one object. In particular, the table’s legs and surface are *identical to exactly one object, x, and the table’s atoms are *identical to exactly one object, y. Since the two pluralities aren’t *identical*, neither are x and y. But now we should be worried: this seems to mean we must be able to uniquely assign one object to every plurality in the domain. Since we already have a condition for plurality identity, namely xx ^*\!\!=^* yy \leftrightarrow \forall z(z\prec xx \leftrightarrow z\prec yy), we get the following:

  • \forall x\forall y(x=y \leftrightarrow  \forall z(z\prec xx \leftrightarrow z\prec yy))

This is essentially Frege’s infamous Basic Law V, which here entails that there is exactly one object. (In Frege’s logic it entailed a contradiction instead, but that is because he allowed there to be empty pluralities.)

In Frege’s system you could derive the contradiction via Russell’s paradox, and I (probably) haven’t written out enough axioms for the paradox to be formally derivable here. But the problem is still there in the form of Cantor’s theorem, which says you cannot uniquely assign an object to each plurality.
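A finite analogue of the counting problem (a toy sketch of mine): with n objects there are 2^n - 1 non-empty pluralities, so an injective assignment of a distinct object to each plurality requires 2^n - 1 \leq n:

    # With n objects there are 2**n - 1 non-empty pluralities; assigning a
    # distinct object to each plurality needs 2**n - 1 <= n, i.e. n = 1.
    for n in range(1, 6):
        pluralities = 2**n - 1
        print(n, pluralities,
              "injection possible" if pluralities <= n else "no injection")

Only n = 1 escapes, which is the sense in which the principle above entails that there is exactly one object.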

(Note: I never said this, but \forall x can bind xx and vice versa.)


Indeterminacy and knowledge

November 21, 2008

What do people think of this principle: determinate implication preserves indeterminacy? Formally[1]:

  • \Delta(p \rightarrow q) \rightarrow (\nabla p \rightarrow \nabla q)

If this principle is ok, and we accept that the factivity of knowledge is determinate, it seems we can make trouble for the epistemicist ‘ignorance’ view of vagueness. That is, given:

  • \Delta(Kp \rightarrow p)

we can infer that \nabla p \rightarrow \nabla Kp: whenever p is indeterminate, it is indeterminate whether you know p (the derivation is spelled out below). This, I take it, is incompatible with (determinate) ignorance concerning p.
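Spelled out, the derivation runs as follows – it assumes, beyond the principle itself, that \Delta is closed under logical consequence and that \nabla r is equivalent to \nabla\neg r, both standard assumptions:

  • \Delta(Kp \rightarrow p) – determinate factivity
  • \Delta(\neg p \rightarrow \neg Kp) – contraposing inside \Delta
  • \nabla \neg p \rightarrow \nabla \neg Kp – by the principle
  • \nabla p \rightarrow \nabla Kp – since \nabla r \leftrightarrow \nabla \neg r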

[1] Note that, although this looks similar, it’s not quite the same as \Box (p \rightarrow q) \rightarrow (\Diamond p \rightarrow \Diamond q), which is a theorem of the weakest normal modal logic, K. \nabla and \Delta don’t stand in the same relation as \Diamond and \Box.


Is second order logic really first order?

November 6, 2008

Nowadays, I guess, a lot more people are sympathetic to the idea that second order logic is real logic than in Quine’s day, due to the popularity of plural logic. However, plural logic falls short of full second order logic by quite a long way, because it can’t quantify over relations. For example, you can’t state various facts about sizes, or the axiom of choice.

In the first order case, the question seems more tractable. If we identify the logical vocabulary as those terms that are not sensitive to the particular identities of the individuals (i.e. whose extensions remain unchanged if you permute the domain), then we get the cardinality quantifiers, and arbitrary unions of the cardinality quantifiers, as logical terms. McGee confirms the intuition that these truly are logical by showing that the permutation invariant (first order) operations are precisely those definable from intuitively logical operations: negation, identity, arbitrary conjunctions, and universal quantification with respect to an arbitrary block of variables. Admittedly, this language (L_{\infty, \infty}) is not a language that anyone can speak, but that is a deficiency on our part, and should not place constraints on logic. Thus first order quantification seems to be ontologically innocent, even for quantifiers like ‘there are uncountably many F’s’.

Indeed, similar results hold if we allow second order quantifiers. They too are permutation invariant, and conversely, the permutation invariant second order quantifiers are precisely those that can be defined in the equivalent of L_{\infty, \infty} with arbitrary blocks of second order variables. But the difference here, it seems, is that it is not clear that second order quantification over relations is ontologically innocent. Sure, plural quantifiers are, but as soon as we leave the realm of monadic quantification there is less reason to think so (although some have suggested that you can get around this: e.g. Burgess, and Rayo and Linnebo).

Anyway, I was wondering whether it would be possible to reduce second order quantification to first order quantification in our infinitary language. If this were possible then we could happily use the second order quantifiers, safe in the knowledge that they are not ontologically committing, because they are definable using first order vocabulary.

I think you can do it, but I’m not entirely sure, so this might be wrong. Let \kappa be antizero – the size of everything. For each second order variable X of the language keep aside \kappa many variables: x_\alpha for \alpha \leq \kappa. Then define a translation schema as follows. [UPDATE: I reformulated it slightly so that it wasn’t quite so confusing.] For a subset of the domain, I, we define the translate of \phi with respect to I as follows:

(Xx)^I \mapsto \bigvee_{\alpha \in I}x=x_\alpha

(\forall X \phi)^I \mapsto \forall x_1 \ldots x_\kappa(\forall y\bigvee_{\alpha \leq \kappa}x_\alpha=y \rightarrow \bigwedge_{J \subseteq \kappa} (\phi)^J)

For the other connectives and quantifiers the translation just commutes in the natural way. A few notes. Firstly, this isn’t like L_{\infty, \infty}, in that it must allow truly arbitrary disjunctions and quantifications (including proper-class-length conjunctions). Secondly, it’s not really as simple a translation as it looks, because in the first clause I left I “free”, to be later “bound” by an earlier application of the second translation clause. What this really means is that the length of the disjunction in the first clause is determined by where it is called in the second clause. Lastly, that’s just monadic quantification – which we already had – but it seems it will extend nicely to polyadic second order quantifiers (this time we disjoin (x = x_\alpha \wedge y = y_\alpha) instead).
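A finite toy version of the translation (a sketch of my own, with a three-element domain standing in for the \kappa-sized one): second order quantifiers expand into big conjunctions and disjunctions over subsets, with (Xx)^J becoming the membership test x \in J:

    from itertools import combinations

    D = [0, 1, 2]
    subsets = [set(c) for r in range(len(D) + 1) for c in combinations(D, r)]

    # (Xx)^J is the disjunction over alpha in J of x = x_alpha; over a finite
    # domain that is just "x in J".
    def forall_X(phi):
        # (forall X phi)^I becomes a *conjunction*: one conjunct phi^J per J
        return all(phi(J) for J in subsets)

    def exists_X(phi):
        # dually, an existential becomes a disjunction over subsets
        return any(phi(J) for J in subsets)

    # "Every plurality is inhabited" is false (the empty assignment refutes it)...
    print(forall_X(lambda J: any(x in J for x in D)))   # False
    # ...but "some plurality contains exactly 0 and 1" is true.
    print(exists_X(lambda J: J == {0, 1}))              # True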


When do two objects touch?

October 7, 2008

I’ve been reading through Casati and Varzi’s “Parts and Places” recently. It’s a really fun book – I highly recommend it to anyone interested in the metaphysics of parthood and location.

Anyway, I have come to this very curious passage in their discussion of mereotopology. It has to do with the first “substantial” bridge principle relating parthood and connection (i.e. when two objects “touch”, or “kiss”.) They call it C8, which they write in their notation as:

  • Cxy \rightarrow \exists z(SCz \wedge Ozy \wedge Ozx \wedge \forall w(Pwz \rightarrow Owx \vee Owy))

which translates as: if x and y are touching, then there must be a selfconnected object which overlaps both x and y and is completely within their sum (i.e. all of its parts overlap one or the other). A selfconnected object is one that can’t be divided into two objects which don’t touch (so, like what a bikini isn’t, but a mankini is).

The first thing that threw me was their claim that the Cantor bar is an object none of whose parts is selfconnected (and which thus, by C8, cannot touch anything). The Cantor bar, if you don’t already know, is constructed by taking a finite line, removing the middle third, then removing the middle thirds of the remaining two bars, and so on; the limit of this process is just the intersection of all the stages. Now surely, if this is what they mean by the Cantor bar, it is just a fusion of points, each of which is selfconnected. So maybe I’m not understanding the example. But I’m actually finding it difficult to come up with an example of an object none of whose parts is selfconnected. The best I can think of are slightly esoteric: (1) a bilocated mereological atom, or (2) an extended simple which isn’t selfconnected. [I’m assuming that x touches y just in case x’s location touches y’s location. Or even stronger: x touches y in virtue of x’s location touching y’s location. Note how (2) differs from (1): (2) postulates an object that is singularly located at a disconnected location, whereas (1) is completely located at two selfconnected locations.]

The other curious thing was that I can’t think of any reason whatsoever for Casati and Varzi to accept C8. Their preferred interpretation of ‘touching’ in ordinary space is that x touches y if and only if either x’s closure overlaps y, or y’s closure overlaps x. But if you let x be the selfconnected block and y the fusion of the bars in the accompanying figure [image omitted], then x and y are touching by this definition. However, clearly, there is no selfconnected object running from one to the other.

Maybe we should interpret ‘touches’ as path-connectedness instead – i.e. we can get from x to y by drawing a continuous line from some point in x to some point in y, without ever leaving the two. But I’m still not quite sure that’s right.

[Image omitted: a sin(1/x)-type curve approaching a block.] The fusion of the block and the sine wave appears to be selfconnected; however, there cannot be a continuous line running from the block to the curve. This violates the converse of C8, which I take it is supposed to be true too (I think they took this direction to be obvious). So path-connectedness doesn’t seem to be the right definition of ‘touching’ either.

A couple of things to note about this example. Firstly, if the amplitude of the sine wave shrinks to zero as it approaches the block, then the two are path connected, because you can draw a continuous line between them. But, intuitively, there shouldn’t be a difference between the two examples – they should both count as touching each other.

Here’s something I find a bit weird. Suppose further that the amplitudes of the peaks, from left to right, are 1/2, 1/4, 1/8, … 1/2^n, … . Then the informal explanation of path-connectedness applies: you can draw a line between the two without taking your pencil off the page. But if the amplitudes of the peaks go 1/2, 1/3, 1/4, 1/5, … 1/n, … then the informal explanation breaks down: the distance you have to travel to get from the curve to the block is infinite! Even if you have a giant novelty pencil that extends off to infinity in one direction, your pencil will still run out before you get anywhere properly inside the block. Yet they are still path connected, because this line is still continuous.
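A back-of-envelope length estimate makes this precise (each peak of amplitude a_n forces the pencil to travel at least 2a_n up and down, so the total length is at least \sum_n 2a_n):

  • \sum_{n \geq 1} \frac{2}{2^n} = 2 < \infty, \qquad \sum_{n \geq 1} \frac{2}{n} = \infty

So with amplitudes 1/2^n the connecting path has finite length, while with amplitudes 1/n it is infinitely long – yet in both cases the amplitudes tend to zero, so the curve is path connected to the block either way.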

One more cool fact. Another plausible definition of ‘touching’ is that x touches y iff their closures overlap. However, Frank Arntzenius pointed out to me that on this interpretation of touching the four colour theorem comes out false! (Just draw sin(1/x)-like curves as above with a rainbow paintbrush to see how.)


Is the axiom of choice a logical truth?

October 5, 2008

I actually think there are a bunch of related statements which we might think of as expressing choice principles. The most striking contrast is probably the set theoretic statement of choice, and the choice principle as it is stated in second order logic: \forall R(\forall x \exists y Rxy \rightarrow \exists f \forall x Rxf(x)). I want to argue that the second principle is a purely logical principle, unlike the first, despite the fact that the question of whether or not the latter is a logical truth seems to depend on the (ordinary) truth of the former.

Let’s start off with the set theoretic principle. I believe this is non-logical. Note, however, that this is not because of the Gödel-Cohen arguments – I think set-choice is a logical consequence of the second order ZF axioms, given SOL-choice. It is rather because the ZF axioms themselves are non-logical. For example, consider a model with three elements such that a \in b \in c: clearly c is a set of nonempty sets, but there isn’t a choice function for it, because there aren’t any functions in the model at all (a function would require a set of sets of sets of sets). Simply put: membership is not a logical constant, and so admits choice-refuting interpretations. Note, I don’t mean to downplay the importance of the Gödel-Cohen arguments; forcing and inner model theory are important tools in the epistemology of mathematics. Set-choice and CH may not be logically independent of the ZF axioms, but those arguments do show us that, for all we are currently in a position to know, CH might be a logical consequence of second order ZF. They provide a method for showing epistemic independence and epistemic consistency, despite falling short of logical independence and consistency.
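As a toy check of the three-element model (a sketch of my own; ‘depth’ here measures the longest descending membership chain), the model is simply too shallow to contain anything that could code a choice function:

    domain = {"a", "b", "c"}
    member = {("a", "b"), ("b", "c")}      # a ∈ b and b ∈ c, and nothing else

    def members_of(x):
        return {u for (u, v) in member if v == x}

    def depth(x):
        """Length of the longest descending ∈-chain below x."""
        ms = members_of(x)
        return 0 if not ms else 1 + max(depth(m) for m in ms)

    # c is a set of nonempty sets: its only member, b, itself has a member.
    assert all(members_of(y) for y in members_of("c"))

    # A choice function for c, coded as a set of Kuratowski pairs {{y}, {y, z}},
    # would top a descending ∈-chain of length at least 4 (f ∋ pair ∋ {y} ∋ y ∋ z).
    # The deepest chain in this model has length 2:
    assert max(depth(x) for x in domain) == 2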

It might then be surprising to say that the second order choice principle is a logical truth. For, following the Tarskian definition of logical truth for second order languages – i.e. truth in every set model – it follows that SOL-choice is a logical truth just in case set-choice is an ordinary truth (“true as a matter of fact”). For example, if our metatheory were ZF+AD, SOL-choice would be neither a logical truth nor a logical falsehood!

I think this is to put the cart before the horse. Once the logical constants are part of our metalanguage, it is possible to do model theory in such a way that the non-logical fragment doesn’t affect the definition of validity – indeed, the non-logical component can be reserved purely for the syntax (see, particularly, Rayo/Uzquiano/Williamson (RUW) style model theory). So much the worse for Tarskian model theory.

But why think that SOL-choice is a logical truth or a logical falsehood, rather than neither? I guess I have three reasons for thinking this. Firstly, SOL-choice is stateable in almost purely logical vocabulary: plural logic plus a pairing operation. While it is possible for it to fail under non-standard interpretations of the pairing function, this is enough to provide well-orderings of many sets of interest: e.g. the plural theory of the real numbers gives us enough machinery for pairing, so well-orderings under this encoding of pairs are possible. Secondly, SOL-choice is stateable in purely logical vocabulary: if you treat the binary quantifier “there are just as many F’s as G’s” as a logical quantifier, then you can state cardinal comparability in plural logic + “there are just as many F’s as G’s” (which is certainly equivalent to choice in the ZF metatheory; I’m not sure what you need for this in the RUW setting). I argued here that “there are just as many F’s as G’s” is a logical quantifier.

Lastly, imagine that we interpreted the second order quantifiers as ranging completely unrestrictedly over all pluralities there are. Suppose we still think that SOL-choice is neither logically true nor logically false, i.e. that SOL-choice and its negation are each logically consistent in the strong sense (not merely that you can’t prove a contradiction from the standard axioms – that there are refuting Henkin models). Then there is a model in which SOL-choice is true, and a model in which it is false. But since our domain is everything, and the quantifiers in both models range over every plurality there is, the second order quantifiers in the choice-satisfying model range over a choice function which the second order quantifiers in the choice-refuting model must have missed. This is a contradiction, because we assumed that the quantifiers ranged over every plurality there is. Basically, choice-refuting models are missing things out. If there’s a choice interpretation and a ~choice interpretation of our unrestricted plural quantifiers, the choice-model quantifiers range over more pluralities, in which case the ~choice model wasn’t really unrestricted after all. It seems, then, that if SOL-choice is logically consistent, it is logically true! (Note: this is kind of similar to Sider’s argument against relativism about mereology: if there is an interpretation of our unrestricted quantifier that includes mereological fusions, and one that doesn’t, then the latter wasn’t really unrestricted after all.)