

Higher Order Vagueness and Sharp Boundaries

September 1, 2008

One of the driving intuitions motivating the rejection of bivalence, in the context of vagueness, is that there are no sharp cut off points. There is no number of nanoseconds such that anything that has lived that long is young, but anything older is not (a supervaluationist will have to qualify this further, but something similar can be said for them too.) The thought is that the meaning determining factors, such as the way we use language, simply cannot determine such precise boundaries. Presumably there are many different precise interpretations of language that are compatible with our usage and the other relevant factors.

The intuition extends further. Surely there is no sharp cut off point between being young and being borderline young (or between being borderline young and being not young.) There are borderline borderline cases. And similarly there shouldn’t be sharp cut off points between youngness and borderline borderline youngness, etc… Thus there should be all kinds of orders of vagueness – at each level we escape sharp cut off points by positing higher levels of vagueness.

This line of thought is initially attractive, but it has its limitations. Surely there must be sharp cut off points between being truly young and failing to be truly young – where failing to be truly young includes being borderline young, or borderline borderline young, and so on. Basically, failing to be true involves anything less than full truth.

Talk of ‘full truth’ has to be taken with a pinch of salt. We have moved from talking about precise boundaries in the object language to metatheoretical talk about truth values. This assumes that those who reject sharp boundaries identify the intended models with the kinds of many truth valued models used to characterise validity (my thinking on this was cleared up a lot by these two helpful posts by Robbie Williams.) Timothy Williamson offers this neat argument that we’ll be committed to sharp boundaries either way, and it can be couched purely in the object language. Suppose we have an operator, \Delta p, which says that p is determinately true. In the presence of higher order vagueness, we may have indeterminacy about whether p is determinate, and we may have indeterminacy about whether p is not determinate – i.e. \Delta fails to be governed by the S4 and S5 axioms respectively. However, we can introduce an operator which is supposed to represent our notion of being absolutely determinately true, defined as the following infinite conjunction:

  • \Delta^\omega p := p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots

We assume one fact about \Delta. Namely: that an arbitrary conjunction of determinate truths is also determinate (we actually only need the claim for countable conjunctions.)

  • \Delta p_1 \wedge \Delta p_2 \wedge \ldots \equiv \Delta (p_1 \wedge p_2 \wedge \ldots )

From this we can deduce that \Delta^\omega p obeys S4 [edit: only if you assume that \Delta obeys the B principle do we get S5. Thanks to Brian for correcting this] (it’s exactly the same way you show that common knowledge obeys S4, if you know that proof.) If \Delta^\omega p holds, then we have \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots by conjunction elimination and the definition of \Delta^\omega. By the assumed fact, this is equivalent to \Delta(p \wedge \Delta p \wedge \ldots ), which by definition is \Delta\Delta^\omega p. We then just iterate this to get each finite iteration \Delta^n\Delta^\omega p, and collect them together using an infinitary conjunction introduction rule to get \Delta^\omega\Delta^\omega p.
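
Displayed as a chain of steps, the argument just sketched uses only the definition of \Delta^\omega and the conjunction assumption:

  • \Delta^\omega p \equiv p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots (definition of \Delta^\omega)
  • \rightarrow \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots (conjunction elimination)
  • \equiv \Delta(p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots) (the conjunction assumption)
  • \equiv \Delta\Delta^\omega p (definition of \Delta^\omega)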

But what of the assumption that a conjunction of determinate truths is determinate? It seems pretty uncontroversial. In the finite case it’s obvious: if A and B are determinate, how can (A \wedge B) fail to be? Where would the vagueness come from? Not from A or B, by stipulation, and presumably not from \wedge – it’s a logical constant after all. The infinitary case seems equally sound. However, the assumption is not completely beyond doubt. For example, in some recent papers Field has to reject the principle. On his model theory, p is definitely true if a sequence of three valued interpretations eventually converges to 1 – after some point it is 1’s all the way. Now it’s possible for each conjunct of an infinite conjunction to converge on 1, even though there is no point along the sequence after which all of them are 1.
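
To see how this can happen, here is a toy illustration (the particular sequences are made up for the example, not drawn from Field’s papers): conjunct n takes the value 1 from stage n onwards, so each conjunct individually converges to 1, yet at every stage some conjunct is still 0, so the conjunction never settles on 1.

# Toy model of the counterexample: conjunct n is 1 from stage n onwards, so
# each conjunct eventually settles on 1; but at any stage k some later
# conjunct is still 0, so the conjunction (the pointwise minimum) is 0 at
# every stage and never converges to 1.

def conjunct(n, stage):
    """Value of the n-th conjunct at the given stage: 1 once stage >= n."""
    return 1 if stage >= n else 0

def conjunction(stage, num_conjuncts):
    """Value of the conjunction of the first num_conjuncts conjuncts."""
    return min(conjunct(n, stage) for n in range(1, num_conjuncts + 1))

STAGES = 10

for n in (1, 3, 5):   # each individual conjunct is eventually 1
    print(f"conjunct {n}:", [conjunct(n, k) for k in range(STAGES)])

# ...but with more conjuncts than stages in view, the conjunction is 0 at
# every stage.
print("conjunction:", [conjunction(k, STAGES + 1) for k in range(STAGES)])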

I think we can actually do away with Williamson’s assumption – all we need is factivity for \Delta.

  • \Delta p \rightarrow p

To state the idea we’re going to have to make some standard assumptions about the structure of the truth values sentences can take. We’ll then look at how to eliminate reference to truth values altogether, so we do not beg the question against those who just take the many valued semantics as a way to characterise validity and consequence relations. The assumption is that the set of truth values is a complete lattice, and that conjunction, disjunction, negation, etc… all take the obvious interpretations: meet, join, complement and so on. They form a lattice because conjunction, negation etc… must retain their characteristic features, and the lattice is complete because we can take arbitrary conjunctions and disjunctions. As far as I know, this assumption is true of all the major contenders: the three valued logic of Tye, the continuum valued logic of Łukasiewicz, supervaluationism (where elements in the lattice are just sets of precisifications), intuitionism and, of course, bivalent theories all yield complete lattices.

In this framework the interpretation of \Delta will just be a function, \delta, from the lattice to itself, and the sharp boundaries hypothesis corresponds to obtaining a fixpoint of this function for each element by iterating \delta. There are a variety of fixpoint theorems available for continuous functions (these fixpoints are obtained by iterating \omega many times – indeed the infinitary distributivity of \Delta in the Williamson proof is just the assumption of continuity for \delta.) However, it is also true that every monotonic function will have a fixpoint that is obtained by iterating – the only difference is that we may have to iterate for longer than \omega. (Note that our assumption of factivity for \Delta ensures that \delta(x) \sqsubseteq x, so the iterates of \delta form a decreasing sequence.) Define \delta^\alpha for ordinals \alpha recursively as follows:

  • \delta^0(a) := a
  • \delta^{\alpha + 1}(a) := \delta(\delta^\alpha(a))
  • \delta^\gamma (a) := \sqcap_{\alpha < \gamma}\delta^\alpha(a), for limit ordinals \gamma

(We can define \Delta^\alpha analogously if we assume a language which allows for arbitrary conjunctions.) Fix a lattice, B, and let \kappa := \sup\{ \alpha \mid \mbox{there is an } \alpha\mbox{-length chain in } B\}, where a chain is simply a subset of B that is linearly ordered. It is easy to see that \delta(\delta^\kappa(a)) = \delta^\kappa(a) for every a in B, and thus that \delta^\kappa(\delta^\kappa(a)) = \delta^\kappa(a). This ensures the S4 axiom for \Delta^\kappa:

  • \Delta^\kappa p \rightarrow \Delta^\kappa\Delta^\kappa p

We can show the S5 axiom too.
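
Here is a minimal computational sketch of the fixpoint claim in the finite case, where iterating stabilises within the length of the longest chain. The lattice (the powerset of a four element set under inclusion) and the particular \delta are made-up stand-ins; the only feature of \delta the sketch relies on is the one factivity gives us, namely \delta(x) \sqsubseteq x.

# A toy deflationary map on a finite lattice, standing in for the
# interpretation of the determinacy operator.  The lattice is the powerset of
# {0, 1, 2, 3} ordered by inclusion; chains have at most 5 elements, so
# iterating delta from any element must stabilise within 5 steps.

from itertools import combinations

UNIVERSE = frozenset(range(4))

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def delta(x):
    """Keep a point only if its successor is also in x (the top point is
    always kept).  Any map with delta(x) a subset of x would do."""
    return frozenset(p for p in x if p == max(UNIVERSE) or p + 1 in x)

def iterate_to_fixpoint(x):
    """Apply delta until nothing changes; return the fixpoint and step count."""
    steps = 0
    while delta(x) != x:
        x = delta(x)
        steps += 1
    return x, steps

LONGEST_CHAIN = len(UNIVERSE) + 1

for a in powerset(UNIVERSE):
    fixpoint, steps = iterate_to_fixpoint(a)
    assert steps <= LONGEST_CHAIN          # stabilises within the chain bound
    assert delta(fixpoint) == fixpoint     # applying delta again changes nothing

print("every element reaches a delta-fixpoint within", LONGEST_CHAIN, "steps")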

Note that, trivially, \delta always has a fixpoint: since \delta(x) \sqsubseteq x, the bottom element is one. The crucial point of the above is that we are able to define the fixpoint in the language. Secondly, note also that \kappa depends on the size of the lattice in question – which allows us to calculate \kappa for most of the popular theories in the literature. For bivalent and n-valent logics it’s finite. For continuum valued fuzzy logics the longest chain has size 2^{\aleph_0}. Supervaluationism is a bit harder. Precisifications are just functions from a countable set of predicates to sets of objects. Lewis estimates the number of objects to be at most \beth_2, making the number of extensions \beth_3. Since \beth_3^{\aleph_0} = \beth_3, there are at most \beth_3 precisifications, so the longest chains are at most \beth_4 (and I’m pretty sure you can always get chains that long.)
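
Spelled out, my reconstruction of the arithmetic behind that \beth_4 estimate (taking elements of the lattice to be sets of precisifications, as above) runs:

  • \mbox{objects} \leq \beth_2 (Lewis’ estimate)
  • \mbox{extensions} \leq 2^{\beth_2} = \beth_3
  • \mbox{precisifications} \leq \beth_3^{\aleph_0} = \beth_3 (functions from countably many predicates to extensions)
  • \mbox{lattice elements} \leq 2^{\beth_3} = \beth_4 (sets of precisifications)

so no chain in the lattice can have more than \beth_4 elements.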

All the theories, except possibly supervaluationism, define validity with respect to a fixed lattice. Things get a bit more complicated if you define validity with respect to a class of lattices. Since the class might have arbitrarily large lattices, there’s no guarantee we can find a big enough \kappa. But that said, I think we could instead introduce a new operator, \Delta^* p := \forall \alpha \in On\; \Delta^\alpha p, which would do the trick (I know, I’m being careless here with notation – but I’m pretty sure it could be made rigorous.)

Final note: what if you take the role of the many valued lattice models to be a way to characterise validity, rather than as a semantics for the language? For example, couldn’t you adopt a continuum valued logic but withhold identifying the values in [0,1] with truth values? The worry still remains, however. Let c := 2^{\aleph_0}. Since \Delta^c p \rightarrow \Delta^c\Delta^c p is true in all continuum valued models, it is valid. Validity guarantees truth, so we end up with sharp boundaries in the intended model – even though the intended model needn’t look anything like the continuum valued models.

Apologies if you read this just after I posted it: throughout I wrote ‘Boolean algebra’ where I meant ‘lattice’ :-s. I have corrected this now.


How big is the universe?

December 27, 2007

I’ve been reading some interesting old posts from Kenny Easwaran’s blog (and this one from Brian Weatherson) recently, so I’m going to take a break from topless mereology for a bit and consider what happens when unrestricted composition does hold. In particular, I want to know how many things there can be in the presence of UC. (I shall use the expression “there can be \kappa many things” to mean there is a \kappa sized model of plural mereology.)

The first observations are that, if the world contains exactly \kappa many atoms and no gunk, the universe will be of size 2^\kappa or 2^\kappa - 1 according as \kappa is infinite or finite. If there is gunk there are infinitely many pairwise disjoint things (suppose there were only n, a_1 \ldots a_n: a_1 would have a proper part b, and then b and a_1 - b, together with a_2 \ldots a_n, would be n+1 pairwise disjoint things, contradiction), and thus uncountably many fusions of them. For mixed gunky/pointy universes the size is just the max of the number of gunky things and the number of non-gunky things. So we know there cannot be any countable models. (This is why we didn’t use a first order theory of mereology.)
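
As a sanity check on the finite atomic case, here is a tiny sketch (the atoms are just placeholder numbers): under unrestricted composition the things in an n-atom world correspond to the non-empty sets of atoms, so there are 2^n - 1 of them.

# With unrestricted composition, every non-empty set of atoms has a fusion and
# distinct non-empty sets of atoms have distinct fusions, so an n-atom world
# contains exactly 2^n - 1 things.

from itertools import combinations

def fusions(atoms):
    """All non-empty subsets of the atoms, standing in for their fusions."""
    return [set(c) for r in range(1, len(atoms) + 1)
            for c in combinations(atoms, r)]

for n in range(1, 6):
    assert len(fusions(list(range(n)))) == 2 ** n - 1

print("an n-atom world has 2^n - 1 things, checked for n = 1..5")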

Kenny cites the result that every infinite complete Boolean algebra has size \kappa, where \kappa^{\aleph_0} = \kappa. The sufficient conditions for \kappa to be a size of the universe were left unclear, although he mentioned that if \kappa is inaccessible, then you can find a model of that size. I think you can do a bit better than this: if \kappa^{\aleph_0} = \kappa then you can have a \kappa sized model. Combining these results gives us:

\kappa is a possible size of the universe iff \kappa = 2^n - 1 for some finite n, or \kappa^{\aleph_0} = \kappa.

To see the right-left direction, let \kappa = \kappa^{\aleph_0} and take the regular open sets in 2^\kappa (the Tychonoff product of the discrete topology on 2.) There are \kappa regular open sets in 2^\kappa, which is slightly surprising. I’m not going to write the proof out here, but it helps to know that 2^\kappa has the Suslin property. The interesting thing is that this construction is always gunky, so all the possible sizes are exhausted by gunky worlds alone – considering atoms does not change anything size-wise.