
Higher Order Vagueness and Sharp Boundaries

September 1, 2008

One of the driving intuitions that motivates the rejection of bivalence, in the context of vagueness, is the intuition that there are no sharp cut off points. There is no number of nanoseconds such that anything that has lived that long is young, but anything that has lived longer is not young (a supervaluationist will have to qualify this further, but something similar can be said for them too.) The thought is that the meaning determining factors, such as the way we use language, simply cannot determine such precise boundaries. Presumably there are many different precise interpretations of language that are compatible with our usage and the other relevant factors.

The intuition extends further. Surely there is no sharp cut off point between being young and being borderline young (or between being borderline young and being not young.) There are borderline borderline cases. And similarly there shouldn’t be sharp cut off points between borderline youngness and borderline borderline youngness, and so on. Thus there should be all kinds of orders of vagueness – at each level we escape sharp cut off points by positing a higher level of vagueness.

This line of thought is initially attractive, but it has its limitations. Surely there must be a sharp cut off point between being truly young and failing to be truly young – where failing to be truly young includes being borderline young, borderline borderline young, and so on. Basically, failing to be true involves anything less than full truth.

Talk of ‘full truth’ has to be taken with a pinch of salt. We have moved from talking about precise boundaries in the object language to metatheoretical talk about truth values. This assumes that those who reject sharp boundaries identify the intended models with the kinds of many truth valued models used to characterise validity (my thinking on this was cleared up a lot by these two helpful posts by Robbie Williams.) Timothy Williamson offers a neat argument that we’ll be committed to sharp boundaries either way, and it can be couched purely in the object language. Suppose we have an operator, \Delta, such that \Delta p says that p is determinately true. In the presence of higher order vagueness, we may have indeterminacy about whether p is determinate, and indeterminacy about whether p is not determinate – i.e. \Delta fails to be governed by the S4 and S5 axioms respectively. However, we can introduce an operator which is supposed to represent our notion of being absolutely determinately true, defined as the following infinite conjunction:

  • \Delta^\omega p := p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots

We assume one fact about \Delta. Namely: that an arbitrary conjunction of determinate truths is also determinate (we actually only need the claim for countable conjunctions.)

  • \Delta p_1 \wedge \Delta p_2 \wedge \ldots \equiv \Delta (p_1 \wedge p_2 \wedge \ldots )

From this we can deduce that \Delta^\omega obeys S4 [edit: only if you assume that \Delta obeys the B principle do we get S5. Thanks to Brian for correcting this] (it’s exactly the same way you show that common knowledge obeys S4, if you know that proof.) If \Delta^\omega p holds, then we have \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots by conjunction elimination and the definition of \Delta^\omega. By the assumed fact, this is equivalent to \Delta(p \wedge \Delta p \wedge \ldots ), which by definition is \Delta\Delta^\omega p. We then just iterate this to get each finite iteration \Delta^n\Delta^\omega p, and collect them together using an infinitary conjunction introduction rule to get \Delta^\omega\Delta^\omega p.
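For readers who want the iteration step spelled out, here is one way to set it out in the post’s notation (this is just the same reasoning made explicit, using nothing beyond the assumed distribution fact and the definition of \Delta^\omega):

  • \Delta^n\Delta^\omega p \equiv \Delta^n(p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots) \equiv \Delta^n p \wedge \Delta^{n+1} p \wedge \Delta^{n+2} p \wedge \ldots (applying the assumed fact n times)
  • \Delta^\omega p \rightarrow \Delta^n p \wedge \Delta^{n+1} p \wedge \Delta^{n+2} p \wedge \ldots (conjunction elimination)
  • hence \Delta^\omega p \rightarrow \Delta^n\Delta^\omega p for each n, and so \Delta^\omega p \rightarrow \Delta^\omega\Delta^\omega p (infinitary conjunction introduction)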

But what of the assumption that a conjunction of determinate truths is determinate? It seems pretty uncontroversial. In the finite case it’s obvious. If A and B are determinate, how can A \wedge B fail to be? Where would the vagueness come from? Not from A or B, by stipulation, and presumably not from \wedge – it’s a logical constant after all. The infinitary case seems equally sound. However, the assumption is not completely beyond doubt. For example, in some recent papers Field has to reject the principle. On his model theory, p is definitely true if a sequence of three valued interpretations eventually converges to 1 – after some point it is 1’s all the way. Now it’s possible for each conjunct of an infinite conjunction to converge to 1 even though there is no point along the sequence after which all of them are 1.
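A toy illustration of how this can happen (this is just a schematic example, not Field’s actual construction): suppose the conjunct p_n takes value 0 at every stage before n and value 1 from stage n onwards. Then each p_n individually converges to 1. But at any stage m there is still a conjunct – p_{m+1}, say – whose value is 0, so the conjunction p_1 \wedge p_2 \wedge \ldots takes value 0 at every stage and never converges to 1, even though every one of its conjuncts does.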

I think we can actually do away with Williamson’s assumption – all we need is factivity for \Delta:

  • \Delta p \rightarrow p

To state the idea we’re going to have to make some standard assumptions about the structure of the truth values sentences can take. We’ll then look at how to eliminate reference to truth values altogether, so we do not beg the question against those who just take the many valued semantics as a way to characterise validity and consequence relations. The assumption is that the set of truth values is a complete lattice, and that conjunction, disjunction, negation, etc. all take the obvious interpretations: meet, join, complement and so on. They form a lattice because conjunction, negation etc. must retain their characteristic features, and the lattice is complete because we can take arbitrary conjunctions and disjunctions. As far as I know, this assumption holds for all the major contenders: the three valued logic of Tye, the continuum valued logic of Lukasiewicz, supervaluationism (where elements of the lattice are just sets of precisifications), intuitionism and, of course, bivalent theories all have truth value sets that form complete lattices.
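Just to fix ideas, here is what the lattice looks like in two of these cases (nothing below depends on these particular choices): in the continuum valued case the lattice is the interval [0,1], with x \sqcap y = min(x, y), x \sqcup y = max(x, y) and arbitrary meets given by infima; in the supervaluational case it is the set of all sets of precisifications ordered by inclusion, with meets given by intersections and joins by unions.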

In this framework the interpretation of \Delta will just be a function, \delta, from the lattice to itself, and the sharp boundaries hypothesis corresponds to obtaining a fixpoint of this function, for each element, by iterating \delta. There are a variety of fixpoint theorems available for continuous functions (these fixpoints are obtained by iterating \omega many times – indeed the infinitary distributivity of \Delta in the Williamson proof is just the assumption of continuity for \delta.) However, it is also true that every monotonic function will have a fixpoint that is obtained by iterating – the only difference is that we may have to iterate for longer than \omega. (Note that our assumption of factivity for \Delta ensures monotonicity of \delta in the relevant sense – i.e. that \delta(x) \sqsubseteq x – so iterating \delta from any element yields a weakly decreasing sequence of values.) Define \delta^\alpha for ordinals \alpha recursively as follows:

  • \delta^0(a) := a
  • \delta^{\alpha + 1}(a) := \delta(\delta^\alpha(a))
  • \delta^\gamma (a) := \sqcap_{\alpha < \gamma}\delta^\alpha(a) for limit \gamma

(We can define \Delta^\alpha analogously if we assume a language which allows for arbitrary conjunctions.) Fix a lattice, B, and let \kappa := \sup\{ \alpha \mid \mbox{there is an } \alpha \mbox{ length chain in } B\}, where a chain is simply a subset of B that is linearly ordered. It is easy to see that \delta(\delta^\kappa(a)) = \delta^\kappa(a) for every a in B, and thus that \delta^\kappa(\delta^\kappa(a)) = \delta^\kappa(a). This ensures the S4 axiom for \Delta^\kappa:

  • \Delta^\kappa p \rightarrow \Delta^\kappa\Delta^\kappa p

We can show the S5 axiom too.
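As a purely illustrative aside, here is a small Python sketch of the iteration just described, on a toy finite chain of truth values (so this is the n-valent case, where \kappa is finite). The particular lattice and the particular \delta are made up for the example; the only features that matter are that \delta is factive, i.e. \delta(x) \sqsubseteq x, and that iterating it reaches a fixpoint.

```python
from fractions import Fraction

# A toy finite chain of truth values: 0 < 1/4 < 1/2 < 3/4 < 1.
# On a chain, meets are just minima, so this is a complete lattice.
VALUES = [Fraction(n, 4) for n in range(5)]

def delta(x):
    """A made-up factive 'determinately' operator: delta(x) <= x.
    It leaves the top value fixed and knocks everything else down a step."""
    if x == 1:
        return x
    return max((v for v in VALUES if v < x), default=Fraction(0))

def delta_kappa(x):
    """Iterate delta until it stabilises. On a finite lattice this takes
    finitely many steps; in the post the analogous iteration runs for
    kappa-many steps on an arbitrary complete lattice."""
    while delta(x) != x:
        x = delta(x)
    return x

if __name__ == "__main__":
    for a in VALUES:
        fixpoint = delta_kappa(a)
        # delta(fixpoint) == fixpoint is exactly what underwrites the
        # S4 axiom for the iterated operator.
        assert delta(fixpoint) == fixpoint
        print(a, "->", fixpoint)
```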

Note that, trivially, every (weakly) monotonic function on a lattice has a fixpoint – the bottom element, for instance, since \delta(\bot) \sqsubseteq \bot forces \delta(\bot) = \bot. The crucial point of the above is that we are able to define the fixpoint in the language. Secondly, note also that \kappa depends on the size of the lattice in question – which allows us to calculate \kappa for most of the popular theories in the literature. For bivalent and n-valent logics it’s finite. For continuum valued fuzzy logics the longest chain is just 2^{\aleph_0}. Supervaluationism is a bit harder. Precisifications are just functions from a countable set of predicates to sets of objects. Lewis estimates the number of objects is at most \beth_2, making the number of extensions \beth_3. Obviously \beth_3^{\aleph_0} = \beth_3, so the longest chains are at most \beth_4 (and I’m pretty sure you can always get chains that long.)
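For what it’s worth, the cardinal arithmetic behind that last estimate runs as follows (taking Lewis’s figure at face value): there are at most \beth_3^{\aleph_0} = (2^{\beth_2})^{\aleph_0} = 2^{\beth_2 \cdot \aleph_0} = 2^{\beth_2} = \beth_3 precisifications; elements of the supervaluationist lattice are sets of precisifications, so there are at most 2^{\beth_3} = \beth_4 of them, and hence no chain can be longer than \beth_4.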

All the theories, except possibly supervaluationism, define validity with respect to a fixed lattice. Things get a bit more complicated if you define validity with respect to a class of lattices. Since the class might contain arbitrarily large lattices, there’s no guarantee we can find a big enough \kappa. That said, I think we could instead introduce a new operator, \Delta^* p := \forall \alpha \in On\ \Delta^\alpha p, which would do the trick instead (I know, I’m being careless with notation here – but I’m pretty sure it could be made rigorous.)

Final note: what if you take the role of the many valued lattice models to be a way of characterising validity, rather than a semantics for the language? For example, couldn’t you adopt a continuum valued logic but withhold identifying the values in [0,1] with truth values? The worry still remains, however. Let c := 2^{\aleph_0}. Since \Delta^c p \rightarrow \Delta^c\Delta^c p is true in all continuum valued models, it is valid. Validity guarantees truth, so we end up with sharp boundaries in the intended model – even though the intended model needn’t look anything like the continuum valued models.

Apologies if you read this just after I posted it: throughout I wrote ‘Boolean algebra’ where I meant ‘lattice’ :-s. I have corrected this now.


Comments

  1. I don’t see where S5 is meant to fall out of this. It seems to me to be fairly easy to come up with models where MLp -> p fails, whether Lp is interpreted as determinately p, determinately determinately p, etc.

    Here’s one such example. Assume a small Kripke frame with three worlds, 1, 2 and 3. And assume that the accessibility relation R is {1R1, 2R1, 2R2, 2R3, 3R3}. And assume that V(p) = {1}, i.e. p is true at world 1 only. (The intended interpretation is that determinately p is just Lp, evaluated at point 2 of the model.)

    Then MLp -> p fails. And MMLLp -> p fails, etc, etc.

    So I agree that we get an S4 type result here, but I’m missing how to extend this to S5.


  2. Hi Brian,

    Thanks for pointing that out! I forgot this when I wrote the post, but actually the S5 principle for \Box^\omega needs the Brouwerian principle for \Box as well. (And you can see this will do because the transitive closure of a symmetric relation is always Euclidean.) I really should have checked that before writing it!

    That said, I’m not too bothered about the weakening. I think the B principle is quite plausible for \Delta anyway, and even S4 is beginning to tread on the ‘no sharp boundary’ intuitions I began the post with.




