Posts Tagged ‘Higher Order Vagueness’


B entails that a conjunction of determinate truths is determinate

October 26, 2010

I know it’s been quiet for a while around here. I have finally finished a paper on higher order vagueness which has been a long time coming, and since I expect it to be in review for quite a while longer I decided to put it online. (Note: I’ll come back to the title of this post in a bit, after I’ve filled in some of the background.)

The paper is concerned with a number of arguments that purport to show that it is always a precise matter whether something is determinate at every finite order. This would entail, for example, that it was always a precise matter whether someone was determinately a child at every order, and thus, presumably, that this is also a knowable matter. But it seems just as bad to be able to know things like “I stopped being a determinate child at every order after 123098309851248 nanoseconds from my birth” as to know the corresponding kinds of things about being a child.

What could the premisses be that give such a paradoxical conclusion? One of the principles, distributivity, says that a (possibly infinite) conjunction of determinate truths is determinate; the other, B, says p \rightarrow \Delta\neg\Delta\neg p. If \Delta^* p is the conjunction of p, \Delta p, \Delta\Delta p, and so on, distributivity easily gives us (1) \Delta^*p \rightarrow\Delta\Delta^* p. Given the modal logic K for determinacy we quickly get \Delta\neg\Delta\Delta^*p \rightarrow\Delta\neg\Delta^* p, which combined with \neg\Delta^* p\rightarrow \Delta\neg\Delta\Delta^* p (an instance of B) gives (2) \neg\Delta^* p\rightarrow\Delta\neg\Delta^* p. Excluded middle together with (1) and (2) gives us \Delta\Delta^* p \vee \Delta\neg\Delta^* p, which is the bad conclusion.
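Laid out as a derivation, the steps are just these:

  • (1) \Delta^* p \rightarrow \Delta\Delta^* p (distributivity, applied to the conjuncts \Delta p, \Delta\Delta p, \ldots of \Delta^* p)
  • \Delta\neg\Delta\Delta^* p \rightarrow \Delta\neg\Delta^* p (contrapose (1), then apply necessitation and K)
  • \neg\Delta^* p \rightarrow \Delta\neg\Delta\Delta^* p (an instance of B)
  • (2) \neg\Delta^* p \rightarrow \Delta\neg\Delta^* p (chaining the previous two)
  • \Delta\Delta^* p \vee \Delta\neg\Delta^* p (from (1), (2) and excluded middle)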

In the paper I argue that B is the culprit.* The main moving part in Field’s solution to this problem, by contrast, is the rejection of distributivity. I think I finally have a conclusive argument that it is B that is responsible, and that is that B actually *entails* distributivity! In other words, no matter how you block the paradox you’ve got to deny B.

I think this is quite surprising and the argument is quite cute, so I’ve written it up in a note. I’ve put it in a pdf rather than post it up here, but it’s only two pages and the argument is actually only a few lines. Comments would be very welcome.

* Actually a whole chain of principles weaker than B can cause problems, the weakest I consider being \Delta(p\rightarrow\Delta p)\rightarrow(\neg p \rightarrow \Delta\neg p), which corresponds to the frame condition: if x can see y, then there is a finite chain of steps leading from y back to x, each step of which x can see.


Higher Order Vagueness and Sharp Boundaries

September 1, 2008

One of the driving intuitions motivating the rejection of bivalence, in the context of vagueness, is the intuition that there are no sharp cut-off points. There is no number of nanoseconds such that anything that has lived that long is young, but anything older fails to be young (a supervaluationist will have to qualify this further, but something similar can be said for them too.) The thought is that the meaning determining factors, such as the way we use language, simply cannot determine such precise boundaries. Presumably there are many different precise interpretations of language that are compatible with our usage and the other relevant factors.

The intuition extends further. Surely there is no sharp cut-off point between being young and being borderline young (or between being borderline young and being not young). There are borderline borderline cases. And similarly there shouldn’t be sharp cut-off points between borderline youngness and borderline borderline youngness, and so on. Thus there should be vagueness of all orders – at each level we escape sharp cut-off points by positing higher levels of vagueness.

This line of thought is initially attractive, but it has its limitations. Surely there must be a sharp cut-off point between being truly young and failing to be truly young – where failing to be truly young includes being borderline young, borderline borderline young, and so on. Basically, failing to be true covers anything less than full truth.

Talk of ‘full truth’ has to be taken with a pinch of salt. We have moved from talking about precise boundaries in the object language to metatheoretical talk about truth values. This assumes that those who reject sharp boundaries identify the intended models with the kinds of many valued models used to characterise validity (my thinking on this was cleared up a lot by two helpful posts by Robbie Williams.) Timothy Williamson offers a neat argument that we’ll be committed to sharp boundaries either way, and it can be couched purely in the object language. Suppose we have an operator, \Delta, such that \Delta p says that p is determinately true. In the presence of higher order vagueness, we may have indeterminacy about whether p is determinate, and we may have indeterminacy about whether p is not determinate – i.e. \Delta fails to be governed by the S4 and S5 axioms respectively. However, we can introduce an operator which is supposed to represent our notion of being absolutely determinately true, defined as the following infinite conjunction:

  • \Delta^\omega p := p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots

We assume one fact about \Delta. Namely: that an arbitrary conjunction of determinate truths is also determinate (we actually only need the claim for countable conjunctions.)

  • \Delta p_1 \wedge \Delta p_2 \wedge \ldots \equiv \Delta (p_1 \wedge p_2 \wedge \ldots )

From this we can deduce that \Delta^\omega p obeys S4 [edit: only if you assume that \Delta obeys the B principle do we get S5. Thanks to Brian for correcting this] (it’s exactly the same way you show that common knowledge obeys S4, if you know that proof.) If \Delta^\omega p holds, then we have \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots by conjunction elimination and the definition of \Delta^\omega. By the assumed fact, this is equivalent to \Delta(p \wedge \Delta p \wedge \ldots ), which by definition is \Delta\Delta^\omega p. We then just iterate this to get each finite iteration \Delta^n\Delta^\omega p, and collect them together using an infinitary conjunction introduction rule to get \Delta^\omega\Delta^\omega p.
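In outline, the argument runs:

  • \Delta^\omega p (assumption)
  • \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots (dropping the first conjunct)
  • \Delta(p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots), i.e. \Delta\Delta^\omega p (by the distributivity assumption)
  • \Delta^n\Delta^\omega p for each finite n (repeating the previous two steps)
  • \Delta^\omega\Delta^\omega p (infinitary conjunction introduction)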

But what of the assumption that a conjunction of determinate truths is determinate? It seems pretty uncontroversial. In the finite case it’s obvious: if A and B are determinate, how can (A \wedge B) fail to be? Where would the vagueness come from? Not A and B, by stipulation, and presumably not from \wedge – it’s a logical constant after all. The infinitary case seems equally sound. However, the assumption is not completely beyond doubt. For example, in some recent papers Field has to reject the principle. On his model theory, p is definitely true if a sequence of three valued interpretations eventually converges to 1 – after some point it is 1’s all the way. Now it’s possible for each conjunct of an infinite conjunction to converge on 1, even though there is no point along the sequence such that all of them are 1 from then on.
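To see schematically how this can happen (this is just the obvious way of arranging it, not Field’s own example): take conjuncts p_1, p_2, p_3, \ldots where p_n receives value 1/2 at the first n-1 stages and value 1 at every stage thereafter. Each p_n eventually converges to 1, so each is definitely true. But at any given stage there are conjuncts that still have value 1/2, so the conjunction p_1 \wedge p_2 \wedge \ldots takes value 1/2 at every stage and never settles on 1, and hence is not definitely true.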

I think we can actually do away with Williamson’s assumption – all we need is factivity for \Delta.

  • \Delta p \rightarrow p

To state the idea we’re going to have to make some standard assumptions about the structure of the truth values sentences can take. We’ll then look at how to eliminate reference to truth values altogether, so that we do not beg the question against those who just take the many valued semantics as a way to characterise validity and consequence relations. The assumption is that the set of truth values is a complete lattice, and that conjunction, disjunction, negation, etc… all take the obvious interpretations: meet, join, complement and so on. They form a lattice because conjunction, negation etc… must retain their characteristic features, and the lattice is complete because we can take arbitrary conjunctions and disjunctions. As far as I know, this assumption holds for all the major contenders: the three valued logic of Tye, the continuum valued logic of Lukasiewicz, supervaluationism (where elements in the lattice are just sets of precisifications), intuitionism and, of course, bivalent theories all yield complete lattices.
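For concreteness (this is just an illustration of the assumption, not an addition to it): in a three valued logic the lattice is the three element chain 0 \sqsubseteq 1/2 \sqsubseteq 1, with conjunction interpreted as minimum and disjunction as maximum; in a supervaluationist model the lattice elements are sets of precisifications ordered by inclusion, with conjunction as intersection and disjunction as union; and in the bivalent case the lattice is just \{0, 1\}.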

In this framework the interpretation of \Delta will just be a function, \delta, from the lattice to itself, and the sharp boundaries hypothesis corresponds to obtaining a fixpoint of this function for each element by iterating \delta. There are a variety of fixpoint theorems available for continuous functions (their fixpoints are obtained by iterating \omega many times – indeed the infinitary distributivity of \Delta in the Williamson proof is just the assumption of continuity for \delta.) However, it is also true that every deflationary function (one satisfying \delta(x) \sqsubseteq x for every x) has a fixpoint that is obtained by iterating; the only difference is that we may have to iterate for longer than \omega. (Note that our assumption of factivity for \Delta is exactly what ensures that \delta is deflationary, i.e. that \delta(x) \sqsubseteq x.) Define \delta^\alpha for ordinals \alpha recursively as follows (a toy computational sketch of the finite case follows the definitions):

  • \delta^0(a) := a
  • \delta^{\alpha + 1}(a) := \delta(\delta^\alpha(a))
  • \delta^\gamma (a) := \sqcap_{\alpha < \gamma}\delta^\alpha(a), for limit ordinals \gamma
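Just to make the construction concrete, here is a toy computational sketch (mine, not from the post, with a made-up operator \delta chosen purely for illustration) of the finite case, where the transfinite iteration collapses into ordinary iteration: a deflationary operator on the lattice of subsets of a three element set is iterated until it reaches a fixpoint, and the fixpoint is itself fixed by a further application, which is the lattice-theoretic analogue of the S4 axiom.

    # Toy illustration: iterating a deflationary operator delta on a finite
    # lattice until it reaches a fixpoint. On a finite lattice the transfinite
    # iteration delta^alpha collapses into ordinary iteration.
    from itertools import combinations

    UNIVERSE = {0, 1, 2}

    # The lattice: all subsets of UNIVERSE ordered by inclusion (meet = intersection).
    LATTICE = [frozenset(c) for r in range(len(UNIVERSE) + 1)
               for c in combinations(UNIVERSE, r)]

    def delta(x):
        """A made-up 'determinately' operator, chosen only to be deflationary
        (delta(x) is always a subset of x): it fixes the top and bottom
        elements and otherwise discards the largest member of x."""
        if x == frozenset(UNIVERSE) or not x:
            return x
        return x - {max(x)}

    def delta_kappa(a):
        """Iterate delta until a fixpoint is reached. Because delta is
        deflationary the values weakly decrease, so on a finite lattice
        the loop terminates."""
        current = frozenset(a)
        while delta(current) != current:
            current = delta(current)
        return current

    print(all(delta(x) <= x for x in LATTICE))  # True: delta is deflationary
    a = frozenset({0, 1})
    fix = delta_kappa(a)
    print(fix)                                  # frozenset(): the fixpoint reached from a
    print(delta_kappa(fix) == fix)              # True: the analogue of the S4 axiom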

(We can define \Delta^\alpha analogously if we assume a language which allows for arbitrary conjunctions.) Fix a lattice, B, and let \kappa := \sup\{ \alpha \mid \mbox{there is an } \alpha \mbox{ length chain in B}\}, where a chain is simply a subset of B that is linearly ordered. It is easy to see that \delta(\delta^\kappa(a)) = \delta^\kappa(a) for every a in B (since \delta is deflationary the sequence \delta^\alpha(a) is weakly decreasing, so if it had not become constant by stage \kappa its values would form a chain in B longer than \kappa), and thus that \delta^\kappa(\delta^\kappa(a)) = \delta^\kappa(a). This ensures the S4 axiom for \Delta^\kappa:

  • \Delta^\kappa p \rightarrow \Delta^\kappa\Delta^\kappa p

We can show the S5 axiom too.

Note that, trivially, every deflationary function on a lattice has a fixpoint: the bottom element (dually, every inflationary function fixes the top). The crucial point of the above is that we are able to define the fixpoint in the language. Secondly, note also that \kappa depends on the size of the lattice in question – which allows us to calculate \kappa for most of the popular theories in the literature. For bivalent and n-valent logics it’s finite. For continuum valued fuzzy logics the longest chain is just 2^{\aleph_0}. Supervaluationism is a bit harder. Precisifications are just functions from a countable set of predicates to sets of objects. Lewis estimates the number of objects is at most \beth_2, making the number of extensions \beth_3. Obviously \beth_3^{\aleph_0} = \beth_3, so the longest chains are at most \beth_4 (and I’m pretty sure you can always get chains that long.)
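Spelling out that last count (this is just cardinal arithmetic on the figures quoted above):

  • number of extensions: 2^{\beth_2} = \beth_3
  • number of precisifications: at most \beth_3^{\aleph_0} = (2^{\beth_2})^{\aleph_0} = 2^{\beth_2 \cdot \aleph_0} = 2^{\beth_2} = \beth_3
  • number of lattice elements (sets of precisifications): at most 2^{\beth_3} = \beth_4

so no chain in the lattice can contain more than \beth_4 elements.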

All the theories, except possibly supervaluationism, define validity with respect to a fixed lattice. Things get a bit more complicated if you define validity with respect to a class of lattices. Since the class might have arbitrarily large lattices, there’s no guarantee we can find a big enough \kappa. But that said, I think we could instead introduce a new operator, \Delta^* p := \forall \alpha \in On\, \Delta^\alpha p, which would do the trick (I know, I’m being careless here with notation – but I’m pretty sure it could be made rigorous.)

Final note: what if you take the role of the many valued lattice models to be a way to characterise validity rather than as a semantics for the language? For example, couldn’t you adopt a continuum valued logic but withhold from identifying the values in [0,1] with truth values? The worry still remains, however. Let c := 2^{\aleph_0}. Since \Delta^c p \rightarrow \Delta^c\Delta^c p is true in all continuum valued models, it is valid. Validity guarantees truth, so we end up with sharp boundaries in the intended model – even though the intended model needn’t look anything like the continuum valued models.

Apologies if you read this just after I posted it: throughout I wrote ‘Boolean algebra’ where I meant ‘lattice’ :-s. I have corrected this now.