## The Sorites paradox and non-standard models of arithmetic

December 16, 2008

A standard Sorites paradox might run as follows:

• 1 is small.
• For every n, if n is small then n+1 is small.
• There are non-small numbers.

On the face of it, these three principles are inconsistent: by the principle of induction, the first two premises entail that every number is small. As far as I know, there is no theory of vagueness on which all three of these sentences come out true (and none of them false). Nonetheless, it would be desirable if these sentences could be jointly satisfied.

The principle of induction seems to do fine when we are dealing with precise concepts. Thus the induction schema for PA is fine, since it asserts induction only for properties definable in arithmetical vocabulary – all of which is precise. However, if we read the induction schema as open-ended – that is, as continuing to hold even if we were to extend the language with new vocabulary – it is false. For it fails once we introduce vague predicates into the language.

The induction schema is usually proved by appealing to the fact that the naturals are well-ordered: every non-empty subset of the naturals has a least element. If the induction schema fails once we allow vague sets, the well-ordering principle should fail too. And that seems right: the set of large numbers doesn’t appear to have a least element – there is no first large number. So we have:

• The set of large numbers has no smallest member.

Again, no theory I know of delivers this verdict. The best we get is from non-classical logics, where it is at best vague whether the set of large numbers has a least element.
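To see the dependence explicitly, here is the usual derivation of induction from well-ordering – the step that a vague set with no least element would block (a standard sketch, with $S$ for the relevant property):

```latex
\text{Suppose } S(1) \text{ and } \forall n\,(S(n) \to S(n+1)), \text{ but } \neg S(m) \text{ for some } m.\\
\text{Then } X = \{\, n : \neg S(n) \,\} \text{ is non-empty, so by well-ordering it has a least element } m_0.\\
\text{Since } S(1),\ m_0 > 1, \text{ so } S(m_0 - 1); \text{ the successor step then gives } S(m_0), \text{ a contradiction.}
```

If $X$ may be vague, and so lack a least element, the argument never gets started.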

Finally, I think we should also hold the following:

• For any particular number, n, you cannot assert that n is large.

That is, to assert of a given number, n, that it is large is to invite the Sorites paradox. You may assert that there exist large numbers; it’s just that you can’t say exactly which they are. To assert that n is large is to commit yourself to an inconsistency by standard Sorites reasoning, from n true conditionals and the fact that 0 is not large.
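Spelled out, the reasoning is just the downward Sorites chain: each premise looks true, and together they refute $L(n)$, so asserting $L(n)$ commits you to denying one of them.

```latex
\neg L(0),\quad \neg L(0) \to \neg L(1),\quad \neg L(1) \to \neg L(2),\quad \ldots,\quad \neg L(n-1) \to \neg L(n) \;\vdash\; \neg L(n)
```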

The proposal I want to consider verifies all three of the bulleted points above. As it turns out, against a background of PA, the initial trio isn’t inconsistent after all. It’s merely $\omega$-inconsistent (given we’re not assuming open-ended induction). But this doesn’t strike me as a bad thing in the context of vagueness: after all, you can go through each of the natural numbers and convince me it’s not large by Sorites reasoning, but that shouldn’t shake my belief that there are large numbers.
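In symbols, writing $S$ for ‘small’: the first two premises prove each numeral small, by repeated modus ponens,

```latex
S(\bar{1}),\ \forall n\,(S(n) \to S(n+1)) \;\vdash\; S(\bar{k}) \quad \text{for every numeral } \bar{k},
```

while the third premise supplies $\exists n\,\neg S(n)$. Proving $\varphi(\bar{k})$ for every numeral alongside $\exists n\,\neg\varphi(n)$ is exactly $\omega$-inconsistency; no single contradiction $S(\bar{k}) \wedge \neg S(\bar{k})$ is derivable.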

$\omega$-inconsistent theories are formally consistent with the PA axioms, and thus have models by Gödel’s completeness theorem. These are called non-standard models of arithmetic. They have all the sets of naturals that the standard model has, but they admit more subsets besides – vague sets of natural numbers as well as the old precise sets. Intuitively this is right: when we only had precise sets we got into all sorts of trouble. We couldn’t even talk about the set of large numbers, because it didn’t exist; it was a vague set.
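The standard way to get such a model is via compactness (which the completeness theorem underwrites): add a fresh constant $c$ together with axioms saying $c$ exceeds every numeral.

```latex
T \;=\; \mathrm{PA} \;\cup\; \{\, c > \bar{n} \;:\; n \in \mathbb{N} \,\}
```

Every finite subset of $T$ is satisfiable in $\mathbb{N}$ (interpret $c$ as a large enough number), so by compactness $T$ has a model, in which $c$ denotes a non-standard element.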

What is interesting is that some of these new sets of natural numbers don’t have smallest members. In fact, the set of all non-standard elements is one of these sets, but there are many others. So my suggestion here is that the set of large numbers is one of these non-standard sets of naturals.

Finally, we don’t want to be able to assert that n is large, for any given n, since that would lead us to a true contradiction (via a long series of conditionals). The idea is that we may assert that there are large numbers out there; we just cannot say which ones. At first glance this might seem incoherent; however, it is just another case of $\omega$-inconsistency. $\{\neg Ln \mid n$ a numeral $\} \cup \{\exists x Lx\}$ is formally consistent: it is satisfied in any non-standard model of PA with $L$ interpreted as the set of non-standard elements.

How to make sense of all this? The first thing to bear in mind is that the non-standard models of arithmetic are not to be taken too seriously. They show that the view in question is consistent, and they are also a good guide to which sentences are in fact true. For example, in a non-standard model the universally quantified second-order induction axiom is false, since the second-order quantifiers range over vague sets; the induction schema, however, is true, provided it allows only instances definable in the language of arithmetic (this is how the schema is usually stated), since those instances define only precise sets.

We should not think of the non-standard models as accurate guides to reality, however, since they are constructed from purely precise sets, of the kind ZFC deals with. For example, the set of non-standard elements is a precise set being used to model a vague set. Furthermore, the non-standard models are described as having an initial segment consisting of the “real” natural numbers, with a block of non-standard naturals coming after them. The intended model of our theory shouldn’t have these extra elements: it should have the same numbers, just with more sets of numbers, vague and precise ones.

Another question is: which non-standard model makes the right (second-order) sentences true? Since there are only countably many naturals, we can add a second-order sentence stating this to our theory (we’d have to check it still means the same thing once the quantifiers range over vague sets as well). This would force the model to be countable. Call T the theory consisting of the first-order sentences true in the standard model, plus the second-order sentence saying the universe is countable, plus the statements: (i) 0 is small, (ii) for every n, if n is small, n+1 is small, and (iii) there are non-small numbers. T is still consistent (by the Löwenheim–Skolem theorem), and I think this will pin down the order type of our model as $\mathbb{N} + \mathbb{Z}\cdot\mathbb{Q}$ – the naturals followed by densely many copies of $\mathbb{Z}$ – which is the order type of every countable non-standard model of PA. This only gives us the interpretation of the second-order quantifiers and the arithmetical vocabulary; obviously it won’t tell us how to interpret the vague vocabulary.

1. There is some interest in omega-inconsistency from the perspective of truth theory and paradox. Kripke showed how we could have a naive truth predicate in Strong Kleene, but we don’t get the T-biconditionals. So it’s natural to move to a Łukasiewicz logic. The Curry paradox ensures we’ll need at least the rational interval [0, 1] as our semantic values. When we move to a quantified language we’ll need suprema and infima of sets of rationals, so we must go to the real interval [0, 1]. As it turns out, this language is paradox-free and contains all the T-biconditionals. That’s pretty nice.
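For concreteness, here is a minimal sketch (in Python; the function names are mine, not from any library) of the Łukasiewicz truth-functions on [0, 1] that this setting uses:

```python
# Lukasiewicz truth-functions on the real interval [0, 1].
# Function names are illustrative, not from any particular library.

def neg(a):
    """Negation: 1 - a."""
    return 1 - a

def conj(a, b):
    """Strong conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1)

def implies(a, b):
    """Lukasiewicz conditional: min(1, 1 - a + b)."""
    return min(1.0, 1 - a + b)

def iff(a, b):
    """Biconditional: takes value 1 exactly when a == b."""
    return min(implies(a, b), implies(b, a))

# Every T-biconditional "A iff T(A)" gets value 1 whenever A and T(A)
# share a value, even an intermediate one:
for v in (0.0, 0.25, 0.5, 1.0):
    assert iff(v, v) == 1.0

# A Curry sentence C, with v(C) = v(C -> falsity), forces the
# intermediate fixed point v = 1 - v, i.e. v = 0.5 -- which is why
# two truth values aren't enough:
assert implies(0.5, 0.0) == 0.5
```

The biconditional comes out fully true exactly when its two sides have the same value, which is why all the T-biconditionals can hold even for sentences with intermediate values.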

Problem: Restall [http://www.logic.uconn.edu/readings/arithluk.pdf] showed that PA in this setting with a truth predicate is omega-inconsistent. (I guess some versions of the revision theory of truth are omega-inconsistent as well.)

My point is that it’s not just the Sorites that might motivate an omega-inconsistent arithmetic; considerations from truth and truth-theoretic paradox can get you there too.

So I have some interest in the question: what’s so bad about omega-inconsistent arithmetic? I’ve heard some reasons:
1. Omega-inconsistency doesn’t just ensure the existence of non-standard models (Löwenheim–Skolem gave us that already for first-order theories); it guarantees that there are *no* standard models.
2. I’ve heard mention that it screws up Gödel coding, although I’m not sure why. (Hartry Field might say something about this in “Saving Truth from Paradox”, but I don’t have it on me.)
3. Omega-inconsistency gives us a bad semantics for the quantifiers. What exactly do our quantifiers mean if from $F(t)$ for each term $t$ we can’t infer $\forall x F(x)$ (or vice versa)?

Any thoughts?

Adding the second-order axiom to make the domain countable seems like a nice idea. But would it actually do it? If our first-order quantifiers are ranging over the weird stuff plus the naturals, presumably the second-order quantifiers would have to range over all subsets of the weird stuff and the naturals. Not sure how that would work out.

2. The problem with Gödel numbering is this: take the arithmetically expressible property “codes a well-formed formula”. Since infinitely many standard naturals satisfy it, some non-standard number must satisfy it too. This means we either have to say that the arithmetical property doesn’t mean what we think it means (that is, this non-standard number satisfies the property without coding a wff), or we have to admit that there are wffs containing non-standard numbers of symbols from the alphabet, which is certainly strange.
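The step from “infinitely many standard witnesses” to “a non-standard witness” can be made explicit. Writing $\mathrm{Wff}(x)$ for the arithmetical property “$x$ codes a wff”, PA proves that the codes are unbounded, and a non-standard model $M$ inherits that theorem:

```latex
\mathrm{PA} \vdash \forall y\, \exists x\, (x > y \wedge \mathrm{Wff}(x)),
\qquad \text{so} \qquad
M \models \exists x\, (x > c \wedge \mathrm{Wff}(x)) \quad \text{for non-standard } c,
```

and any witness above $c$ is itself non-standard.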

I saw an interesting talk by a math grad student a year or two ago who described non-standard analysis by way of non-standard set theory and suggested it as a way to make sense of ultrafinitism – there are upper bounds to the set of all standard (actual) natural numbers, even though we can’t name any particular such upper bound. This seems to tie together nicely with the issue of vagueness, since the problem with ultrafinitism is that the upper bound really has to be vague.

3. Thank you both for the comments!

Aaron: also, I think the Friedman–Sheard truth theory is omega-inconsistent for a reason similar to the truth theory with a Łukasiewicz valuation. (Because you can define things like “p is true, and it’s true that p is true, and it’s true that it’s true that p is true, etc…”)

In terms of why this is bad: (1) in my setting it is bad if you want the open-ended T-schema, since if you add a new predicate “non-standard(x)” the theory becomes (properly) inconsistent again. But I *do* want to do this, so that I can introduce vague vocabulary. (2) Not only do you get non-standard formulae; I think you get non-standard proofs too. So the provability predicate will be different: you might be able to prove more than you used to, which I find super weird! (3) doesn’t hold in English, so I don’t really get that problem. Are you reading the quantifiers substitutionally?

BTW, the Field conditional does a lot better than Łukasiewicz’s in the truth setting – have you looked into that? It’s outlined in part III of his book, I think.

The countability axiom will ensure the domain is countable in the sense that every ordinary set-theoretic model will be countable (nothing is non-standard about the second-order quantifiers – they still quantify over every subset. Remember, I *want* to say that the numbers plus the weird stuff together are countable). What I was worried about is that the intended model isn’t among the ordinary set-theoretic models. When we read the second-order quantifiers as ranging over vague sets too, it becomes less clear that standard formulations of countability express what I want them to.

4. Kenny: that’s pretty cool. I’ve actually been looking into seeing whether I could use semisets instead of the non-standard models, which might be related.

It all seems to be closely linked to ultrafinitism. The upshot of the Sorites paradox is essentially: sufficiently large numbers act as if they were infinite. (If n is less than a large number, so is n+1, etc…)

5. Sorry, I have this dumb habit of assuming we have a name for every element of the domain. Oops.

Yeah, I’ve looked into Field’s conditional. It’s pretty nice. But he has to move to a partially ordered set of values to get it to work. I tend to think that the conditional in use in mathematics should have either $A\rightarrow B$ or $B \rightarrow A$ true. But I don’t have any good reasons for thinking this. I think the neighborhood semantics he gives is a little odd, too. But as far as truth goes, he’s got the most promising (consistent) proposal out there.

Kenny, thanks for explaining the coding thing. Never heard the reason, and never really tried to figure it out.

Last thing: ultrafinitism is badass philosophy!

6. Actually, in the setting above, I do want to say something like “everything has a name.” It’s just that for any particular name, “n”, you can’t accept the sentence “n is large.” I guess this is another difference between the intended model and the non-standard models.

I think Field gets into trouble when he allows conjunctions of length $\omega_1$. I laid out a general argument here: http://possiblyphilosophy.wordpress.com/2008/09/01/higher-order-vagueness-and-sharp-boundaries/

(BTW, I don’t see why you need CEM for doing mathematics. The intuitionists don’t have it, and intuitionism used to be a pretty mainstream program in the philosophy of maths.)
