Posts Tagged ‘Vagueness’

B entails that a conjunction of determinate truths is determinate

October 26, 2010

I know it’s been quiet for a while around here. I have finally finished a paper on higher order vagueness which has been a long time coming, and since I expect it to be in review for quite a while longer I decided to put it online. (Note: I’ll come back to the title of this post in a bit, after I’ve filled in some of the background.)

The paper is concerned with a number of arguments that purport to show that it is always a precise matter whether something is determinate at every finite order. This would entail, for example, that it was always a precise matter whether someone was determinately a child at every order, and thus, presumably, that this is also a knowable matter. But it seems just as bad to be able to know things like “I stopped being determinately a child at every order 123098309851248 nanoseconds after my birth” as to know the corresponding kinds of things about being a child.

What could the premisses be that give such a paradoxical conclusion? One of the principles, distributivity, says that a (possibly infinite) conjunction of determinate truths is determinate; the other, B, says p \rightarrow \Delta\neg\Delta\neg p. If \Delta^* p is the conjunction of p, \Delta p, \Delta\Delta p, and so on, distributivity easily gives us (1) \Delta^*p \rightarrow\Delta\Delta^* p. Given a logic of at least K for determinacy we quickly get \Delta\neg\Delta\Delta^*p \rightarrow\Delta\neg\Delta^* p, which combined with \neg\Delta^* p\rightarrow \Delta\neg\Delta\Delta^* p (an instance of B) gives (2) \neg\Delta^* p\rightarrow\Delta\neg\Delta^* p. Excluded middle, together with (1) and (2), gives us \Delta\Delta^* p \vee \Delta\neg\Delta^* p, which is the bad conclusion.
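
Spelled out, the derivation uses nothing beyond distributivity, the B instance, K-style reasoning for \Delta, and excluded middle:

  • \Delta^* p \rightarrow \Delta p \wedge \Delta\Delta p \wedge \ldots, by conjunction elimination on the definition of \Delta^*.
  • \Delta p \wedge \Delta\Delta p \wedge \ldots \rightarrow \Delta(p \wedge \Delta p \wedge \ldots), by distributivity; chaining the two gives (1) \Delta^* p \rightarrow \Delta\Delta^* p.
  • Contraposing (1), necessitating, and distributing \Delta over the result gives \Delta\neg\Delta\Delta^* p \rightarrow \Delta\neg\Delta^* p.
  • \neg\Delta^* p \rightarrow \Delta\neg\Delta\Delta^* p is the B instance for \neg\Delta^* p (after cancelling the double negation); chaining with the previous line gives (2) \neg\Delta^* p \rightarrow \Delta\neg\Delta^* p.
  • By excluded middle, \Delta^* p \vee \neg\Delta^* p; applying (1) and (2) to the respective disjuncts gives \Delta\Delta^* p \vee \Delta\neg\Delta^* p.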

In the paper I argue that B is the culprit.* The main moving part in Field’s solution to this problem, by contrast, is the rejection of distributivity. I think I finally have a conclusive argument that it is B that is responsible, and that is that B actually *entails* distributivity! In other words, no matter how you block the paradox you’ve got to deny B.

I think this is quite surprising and the argument is quite cute, so I’ve written it up in a note. I’ve put it in a pdf rather than post it up here, but it’s only two pages and the argument is actually only a few lines. Comments would be very welcome.

* Actually a whole chain of principles weaker than B can cause problems; the weakest one I consider is \Delta(p\rightarrow\Delta p)\rightarrow(\neg p \rightarrow \Delta\neg p), which corresponds to the frame condition: if x can see y, then there is a finite chain of steps from y back to x, each step of which x can see.

Vagueness and uncertainty

June 17, 2009

My BPhil thesis is finally finished so I thought I’d post it here for anyone who’s interested.

Unrestricted Composition: the argument from the semantic theory of vagueness?

May 14, 2009

I’ve seen the following claim made quite a lot in and out of print, so I’m wondering if I’m missing something. The claim is that Lewis’s argument for unrestricted composition relies on a semantic conception of vagueness. In particular, people seem to think epistemicists can avoid the argument.

Maybe I’m reading Lewis’s argument incorrectly, but I can’t see how this is possible. The argument seems to have three premisses:

  1. If a complex expression is vague, then one of its constituents is vague.
  2. Neither the logical constants, nor the parthood relation are vague.
  3. Any answer to the special composition question that accords with intuitions must admit vague instances of composition.

By 3. there could be a vague case of fusion: suppose it’s vague whether the xx fuse to make y. Thus it must be vague whether or not \forall x(x \circ y \leftrightarrow \exists z(z \prec xx \wedge z \circ x)). By 1. this means either parthood, or one of the logical constants is vague, which contradicts 2.
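
Spelling the instance out a little (the definition of overlap here is my gloss – the usual mereological one – rather than anything explicit in the argument as stated): \circ is overlap, definable from parthood (a \circ b holds iff something is part of both a and b), and \prec is the plural ‘is one of’. So the displayed fusion condition is built entirely out of quantifiers, connectives, ‘is one of’ and parthood. If it is vague whether the condition holds, premiss 1, applied to each complex constituent in turn, pushes the vagueness down to one of those ingredients; and if, as Lewis in effect assumes, the plural idiom is as precise as the rest of the logical vocabulary, that leaves only parthood or the logical constants – which is exactly what premiss 2 rules out.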

I can’t see any part of the argument that requires me to read ‘vague’ as ‘semantically indeterminate’. These seem to be all plausible principles about vagueness, and if, say, epistemicism doesn’t account for one of these principles, so much the worse for epistemicism.

That said, I think epistemicists should be committed to these principles. Since it would be a pretty far off world where we used English non-compositionally, the metalinguistic safety analysis of vagueness ensures that 1. holds. Epistemicists, like anyone else, think that the logical constants are precise. Parthood always was the weak link in the argument, but one might think you could vary usage quite a bit without changing the meaning of parthood since it refers to a natural relation, and is a reference magnet. Obviously the conclusion that the conditions for composition to occur are sharp isn’t puzzling for an epistemicist. But epistemicists think that vagueness is a much stronger property than sharpness (the latter being commonplace), and the conclusion that the circumstances under which fusion occurs do not admit vague instances should be just as bad for an epistemicist as for anyone else who takes a moderate position on the special composition question.

The most I can get from arguments that epistemicism offers a way out is roughly: “Epistemicists are used to biting bullets. Lewis’s argument requires you to bite bullets. Therefore we should be epistemicists.” Is this unfair?

Truth Functionality

May 4, 2009

I’ve been thinking a lot about giving intended models to non-classical logics recently, and this has got me very muddled about truth functionality.

Truth functionality seems like such a simple notion. An n-ary connective, \oplus, is truth functional just in case the truth value of \oplus(p_1, \ldots, p_n) depends only on the truth values of p_1, \ldots, p_n.

But cashing out what “depends” means here is harder than it sounds. Consider, for example, the following (familiar) connectives.

  • |\Box p| = T iff, necessarily, |p| = T.
  • |p \vee q| = T iff |p| = T or |q| = T.

Why, in the second example but not the first, does the truth value of the complex sentence depend on the truth value of p (and q)? They’ve both been given in terms of the truth value of p. It would be correct, but circular, to say that the truth value of \Box p doesn’t depend on the truth value of p because its truth value isn’t definable from the truth value of p using only truth functional vocabulary in the metalanguage. But clearly this isn’t helpful – for we want to know what counts as truth functional vocabulary, whether in the metalanguage or anywhere else: for example, what distinguishes the first example from the second? To say that \vee is truth functional and \Box isn’t because “or” is truth functional and “necessarily” isn’t, is totally unhelpful.

Usually the circularity is better hidden than this. For example, you can talk about “assignments” of truth values to sentence letters, and say that if two assignments agree on the truth values of p_1, \ldots, p_n then they’ll agree on \oplus(p_1, \ldots, p_n). But what are “assignments” and what is “agreement”? One could simply stipulate that assignments are functions in extension (sets of ordered pairs) and that f and g agree on some sentences if f(p)=g(p) for each such sentence p.

But there must be more restrictions than this: presumably the assignment that assigns p and q F and p \vee q T is not an acceptable assignment. There are assignments which give the same truth values to p and q, but different truth values to p \vee q, making disjunction non truth functional. Thus we must restrict ourselves to acceptable assignments; assignments which preserve truth functionality of the truth functional connectives.
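
Here is a toy, purely classical sketch of that point (my own illustration, not anything from the post – the sentence names and the helper acceptable are made up): if assignments are arbitrary functions from the sentences p, q and p \vee q to {T, F}, then exactly half of them are “acceptable” in the above sense, and keeping the others around would make \vee look non truth functional.

    # Toy illustration: arbitrary assignments vs. "acceptable" ones.
    from itertools import product

    SENTENCES = ["p", "q", "p v q"]

    # All functions from the three sentences to {T, F}: 2^3 = 8 of them.
    assignments = [dict(zip(SENTENCES, values))
                   for values in product("TF", repeat=3)]

    def acceptable(f):
        """Acceptable = respects the truth table for disjunction."""
        return (f["p v q"] == "T") == (f["p"] == "T" or f["q"] == "T")

    good = [f for f in assignments if acceptable(f)]
    bad = [f for f in assignments if not acceptable(f)]

    print(len(good), "acceptable assignments;", len(bad), "unacceptable ones")  # 4 and 4
    # One unacceptable assignment is {p: F, q: F, p v q: T} -- the one mentioned
    # in the post. Keeping assignments like this around would give two assignments
    # that agree on p and q but disagree on p v q, i.e. disjunction would come out
    # non truth functional.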

Secondly, there needs to be enough assignments. The talk of assignments is only ok if there is an assignment corresponding to the intended assignment of truth values to English sentences. I believe that it’s vague whether p, just in case it’s vague whether “p” is true (this follows from the assertion that the T-schema is determinate.) Thus if there’s vagueness in our language, we had better admit assignments such that it can be vague whether f(p)=T. Thus the restriction to precise assignments is not in general OK. Similarly, if you think the T-schema is necessary, the restriction of assignments to functions in extension is not innocent either – e.g., if p is true but not necessary, we need an assignment such that f(p)=T and that possibly f(p)=F.

Let me take an example where I think it really matters. A non-classical logician, for concreteness take a proponent of Lukasiewicz logic, will typically think there are more truth functional connectives (of a given arity) than the classical logician. For example, our Lukasiewicz logician thinks that the conditional is not definable from negation and disjunction. (NOTE: I do not mean truth functional on the continuum of truth values [0, 1] – I mean on {T, F} in a metalanguage where it can be vague that f(p)=T.) “How can this be?” you ask: surely we can just count the truth tables: there are 2^{2^n} truth functional n-ary connectives.

To see why it’s not so simple consider a simple example. We want to calculate the truth table of p \rightarrow q.

  • p \rightarrow q: reads T just in case the second column reads T, if the first column does.
  • p \vee q: reads T just in case the first or the second column reads T.
  • \neg p: reads T if the first column doesn’t read T.

The classical logician claims that the truth table for p \rightarrow q should be the same as the truth table for \neg p \vee q. This is because she accepts the equivalence between “the second column is T if the first is” and “the second column is T or the first isn’t” in the metalanguage. However the non-classical logician denies this – the truth values will differ in cases where it is vague what truth values the first and second columns read. For example, if it is vague whether both columns read T, but the second reads T if the first does (suppose the first column reads T iff 88 is small, and the second column reads T iff 87 is small), then the column for \rightarrow will determinately read T. But the statement that \neg p \vee q reads T will be equivalent to an instance of excluded middle in the metalanguage, which fails. So it will be vague in that case whether it reads T.

The case that \rightarrow is truth functional for this non-classical logician seems to me pretty compelling. But why, then, can we not make exactly the same case for the truth functionality of \Box p? I see almost no disanalogy in the reasoning. Suppose I deny that negation and the truth operator are the only unary truth functional connectives, and claim that \Box is a further one. However, the only cases where negation and the truth operator come apart from necessity are when it is contingent what the first column of the truth table reads.

I expect there is some way of untangling all of this, but I think, at least, that the standard explanations of truth functionality fail to do this.

The Sorites paradox and non-standard models of arithmetic

December 16, 2008

A standard Sorites paradox might run as follows:

  • 1 is small.
  • For every n, if n is small then n+1 is small.
  • There are non-small numbers.

On the face of it, these three principles are inconsistent, since the first two premisses entail that every number is small by the principle of induction. As far as I know, there is no theory of vagueness that gives us that these three sentences are true (and none of them false.) Nonetheless, it would be desirable if these sentences could be satisfied.

The principle of induction seems to do fine when we are dealing with precise concepts. Thus the induction schema for PA is fine, since it only asserts induction for properties definable in arithmetical vocabulary – all of which is precise. However, if we read the induction schema as open ended, that is, as holding even if we were to extend the language with new vocabulary, it is false. For it fails when we introduce vague predicates into the language.

The induction schema is usually proved by appealing to the fact that the naturals are well-ordered: every subset of the naturals has a least element. If the induction schema is going to fail when we allow vague sets, so should the well ordering principle. And that seems right: the set of large numbers doesn’t appear to have a least element – there is no first large number. So we have:

  • The set of large numbers has no smallest member.

Again no theory I know of delivers this verdict. The best we get is with non-classical logics, where it is at best vague whether there exists a least element of the set of large numbers.
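
For what it’s worth, the tolerance conditionals themselves force this verdict (reading ‘large’ as ‘non-small’, as the post seems to): suppose the set of large numbers had a least member m. Since 0 is not large, m \geq 1, and m-1 is not large either (m was least), i.e. m-1 is small; but then the conditional ‘if n is small then n+1 is small’ makes m small, i.e. not large, contradicting the choice of m. So anyone who accepts all the tolerance conditionals should deny that the set of large numbers has a least element.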

Finally, I think we should also hold the following:

  • For any particular number, n, you cannot assert that n is large.

That is, to assert of a given number, n, that it is large is to invite the Sorites paradox. You may assert that there exist large numbers, it’s just that you can’t say exactly which they are. To assert that n is large is to commit yourself to an inconsistency by standard Sorites reasoning, from n-1 true conditionals and the fact that 0 is not large.

The proposal I want to consider verifies all three of the bulleted points above. As it turns out, given a background of PA, the initial trio isn’t inconsistent after all. It’s merely \omega-inconsistent (given we’re not assuming open ended induction.) But this doesn’t strike me as a bad thing in the context of vagueness, since after all, you can go through each of the natural numbers and convince me it’s not large by Sorites reasoning, but that shouldn’t shake my belief that there are large numbers.

This \omega-inconsistent theory is still formally consistent with the PA axioms, and thus has models by Gödel’s completeness theorem. These are called non-standard models of arithmetic. They basically have all the sets of naturals the standard model has, but they also admit more subsets of the naturals – vague sets of natural numbers as well as the old precise sets. Intuitively this is right – when we only had precise sets we got into all sorts of trouble. We couldn’t even talk about the set of large numbers because it didn’t exist; it was a vague set.

What is interesting is that some of these new sets of natural numbers don’t have smallest members. In fact, the set of all non-standard elements is one of these sets, but there are many others. So my suggestion here is that the set of large numbers is one of these non-standard sets of naturals.

Finally, we don’t want to be able to assert that n is large, for any given n, since that would lead us to a true contradiction (via a long series of conditionals.) The idea is that we may assert that there are large numbers out there, but we just cannot say which ones. At first glance this might seem incoherent; however, it is just another case of \omega-inconsistency. \{\neg Ln \mid n a numeral \} \cup \{\exists x Lx\} is formally consistent. For example, it is satisfied in any non-standard model of PA with L interpreted as the set of non-standard elements.
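
For the record, the consistency claim is just a compactness argument: any finite subset of \{\neg Ln \mid n a numeral \} \cup \{\exists x Lx\} \cup PA mentions only finitely many numerals, and is satisfied in the standard model by interpreting L as ‘greater than N’ for some N bigger than all of them; so by compactness the whole set has a model, and in any such model L can apply only to non-standard elements.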

How to make sense of all this? Well, the first thing to bear in mind is that the non-standard models of arithmetic are not to be taken too seriously. They show that the view in question is consistent, and are also a good guide to seeing what sentences are in fact true. For example, in a non-standard model the second order universally quantified induction axiom is false, since the second order quantifiers range over vague sets; however, the induction schema is true, provided it only allows instances of properties definable in the language of arithmetic (this is how the schema is usually stated), since those instances define only precise sets. We should not think of the non-standard models as accurate guides to reality, however, since they are constructed from purely precise sets, of the kind ZFC deals with. For example, the set of non-standard elements is a precise set being used to model a vague set. Furthermore, the non-standard models are described as having an initial segment consisting of the “real” natural numbers, and then a block of non-standard naturals coming after them. The intended model of our theory shouldn’t have these extra elements; it should have the same numbers, just with more sets of numbers, vague and precise ones.

Another question is: which non-standard model makes the right (second order) sentences true? Since there are only countably many naturals, we can add a second order sentence stating this to our theory (we’d have to check it still means the same thing once the quantifiers range over vague sets as well.) This would force the model to be countable. Call T the theory consisting of the first order sentences true in the standard model, plus the second order sentence saying the universe is countable, plus the statements: (i) 0 is small, (ii) for every n, if n is small, n+1 is small and (iii) there are non-small numbers. T is still consistent (by the Löwenheim-Skolem theorem), and I think this will pin down the order type of our model as \mathbb{N} + \mathbb{Z}\cdot\mathbb{Q} – the standard naturals followed by densely many \mathbb{Z}-blocks – by a result from Skolem (I can’t quite remember the result right now, but maybe someone can correct me if it’s wrong.) This only gives us the interpretation for the second order quantifiers and the arithmetic vocabulary; obviously it won’t tell us how to interpret the vague vocabulary.

Indeterminacy and knowledge

November 21, 2008

What do people think of this principle: determinate implication preserves indeterminacy? Formally[1]:

  • \Delta(p \rightarrow q) \rightarrow (\nabla p \rightarrow \nabla q)

If this principle is ok, and we accept that factivity of knowledge is determinate, it seems we can make trouble for the epistemicist, ignorance view of vagueness. That is, given:

  • \Delta(Kp \rightarrow p)

we can infer that \nabla p \rightarrow \nabla Kp: whenever p is indeterminate, it is indeterminate whether you know p. This, I take it, is incompatible with (determinate) ignorance concerning p.
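
Spelling the inference out (this uses two further facts which I take to be uncontroversial: contraposition holds under \Delta, and \nabla is insensitive to negation, i.e. \nabla p \leftrightarrow \nabla\neg p):

  • From \Delta(Kp \rightarrow p) we get \Delta(\neg p \rightarrow \neg Kp).
  • Applying the principle to the contraposed conditional: \nabla\neg p \rightarrow \nabla\neg Kp.
  • Since \nabla q \leftrightarrow \nabla\neg q for any q, this yields \nabla p \rightarrow \nabla Kp.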

[1] Note that, although this looks similar, it’s not quite the same as \Box (p \rightarrow q) \rightarrow (\Diamond p \rightarrow \Diamond q), which is a theorem of the weakest normal modal logic, K. \nabla and \Delta don’t stand in the same relation as \Diamond and \Box.

Higher Order Vagueness and Sharp Boundaries

September 1, 2008

One of the driving intuitions that motivates the rejection of bivalence, in the context of vagueness, is the intuition that there are no sharp cut off points. There is no number of nanoseconds such that anything living that long is young, but anything older would cease to be young (a supervaluationist will have to qualify this further, but something similar can be said for them too.) The thought is that the meaning determining factors, such as the way we use language, simply cannot determine such precise boundaries. Presumably there are many different precise interpretations of language that are compatible with our usage, and the other relevant factors.

The intuition extends further. Surely there is no sharp cut off point between being young, and being borderline young (and between being borderline and being not young.) There are borderline borderline cases. And similarly there shouldn’t be sharp cut off points between youngness, and borderline borderline youngness etc… Thus there should be all kinds of orders of vagueness – at each level we escape sharp cut off points by positing higher levels of vagueness.

This line of thought is initially attractive, but it has its limitations. Surely there must be sharp cut off points between being truly young and failing to be truly young – where failing to be young includes being borderline young, or borderline borderline young, and so on. Basically, failing to be true involves anything less than full truth.

Talk of ‘full truth’ has to be taken with a pinch of salt. We have moved from talking about precise boundaries in the object language, to metatheoretical talk about truth values. This assumes that those who reject sharp boundaries identify the intended models with the kinds of many truth valued models used to characterise validity (my thinking on this was cleared up a lot by these two helpful posts by Robbie Williams.) Timothy Williamson offers this neat argument that we’ll be committed to sharp boundaries either way, and it can be couched purely in the object language. Suppose we have an operator, \Delta, where \Delta p says that p is determinately true. In the presence of higher order vagueness, we may have indeterminacy about whether p is determinate, and we may have indeterminacy about whether p is not determinate. I.e. \Delta fails to be governed by the S4 and S5 axioms respectively. However, we can introduce an operator, \Delta^\omega, which is supposed to represent our notion of being absolutely determinately true, defined as the following infinite conjunction:

  • \Delta^\omega p := p \wedge \Delta p \wedge \Delta\Delta p \wedge \ldots

We assume one fact about \Delta. Namely: that an arbitrary conjunction of determinate truths is also determinate (we actually only need the claim for countable conjunctions.)

  • \Delta p_1 \wedge \Delta p_2 \wedge \ldots \equiv \Delta (p_1 \wedge p_2 \wedge \ldots )

From this we can deduce that \Delta^\omega p obeys S4 [edit: we only get S5 if you assume that \Delta obeys the B principle. Thanks to Brian for correcting this] (it’s exactly the same way you show that common knowledge obeys S4, if you know that proof.) If \Delta^\omega p holds, then we have \Delta p \wedge \Delta\Delta p \wedge \Delta\Delta\Delta p \wedge \ldots by conjunction elimination and the definition of \Delta^\omega. By the assumed fact, this is equivalent to \Delta(p \wedge \Delta p \wedge \ldots ), which by definition is \Delta\Delta^\omega p. We then just iterate this, to get each finite iteration \Delta^n\Delta^\omega p, and collect them together using an infinitary conjunction introduction rule to get \Delta^\omega\Delta^\omega p.
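
In display form (again, only conjunction rules and the distributivity assumption are used):

  • \Delta^\omega p \rightarrow \Delta p \wedge \Delta\Delta p \wedge \ldots, by conjunction elimination.
  • \Delta p \wedge \Delta\Delta p \wedge \ldots \equiv \Delta(p \wedge \Delta p \wedge \ldots) = \Delta\Delta^\omega p, by distributivity and the definition of \Delta^\omega.
  • More generally \Delta^\omega p \rightarrow \Delta^n p \wedge \Delta^{n+1} p \wedge \ldots for each n, and n applications of distributivity (with replacement of equivalents under \Delta) turn the right hand side into \Delta^n(p \wedge \Delta p \wedge \ldots) = \Delta^n\Delta^\omega p.
  • Collecting the conclusions \Delta^n\Delta^\omega p for every n with an infinitary conjunction introduction gives \Delta^\omega\Delta^\omega p.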

But what of the assumption that a conjunction of determinate truths is determinate? It seems pretty uncontroversial. In the finite case it’s obvious. If A and B are determinate, how can (A \wedge B) fail to be? Where would the vagueness come from? Not A and B, by stipulation, and presumably not from \wedge – it’s a logical constant after all. The infinitary case seems equally sound. However – the assumption is not completely beyond doubt. For example, in some recent papers Field has to reject the principle. On his model theory, p is definitely true if a sequence of three valued interpretations eventually converges to 1 – after some point it is 1’s all the way. Now it’s possible for each conjunct of an infinite conjunction to converge on 1, but for there to be no point along the sequence such that all of them are 1 from then on.

I think we can actually do away with Williamson’s assumption – all we need is factivity for \Delta.

  • \Delta p \rightarrow p

To state the idea we’re going to have to make some standard assumptions about the structure of the truth values sentences can take. We’ll then look at how to eliminate reference to truth values altogether, so we do not beg the question against those who just take the many valued semantics as a way to characterise validity and consequence relations. The assumption is that the set of truth values is a complete lattice, and that conjunction, disjunction, negation, etc… all take the obvious interpretations: meet, join, complement and so on. They form a lattice because conjunction, negation etc… must retain their characteristic features, and the lattice is complete because we can take arbitrary conjunctions and disjunctions. As far as I know, this assumption is true of all the major contenders: the three valued logic of Tye, the continuum valued logic of Lukasiewicz, supervaluationism (where elements in the lattice are just sets of precisifications), intuitionism and, of course, bivalent theories – in each case the truth values form a complete lattice.

In this framework the interpretation of \Delta will just be a function, \delta, from the lattice to itself, and the sharp boundaries hypothesis corresponds to obtaining a fixpoint of this function for each element by iterating \delta. There are a variety of fixpoint theorems available for continuous functions (these are obtained by iterating \omega many times – indeed the infinitary distributivity of \Delta in the Williamson proof is just the assumption of continuity for \delta.) However, it is also true that every monotonic function will have a fixpoint that is obtained by iterating – the only difference is that we will have to iterate for longer than \omega. (Note that our assumption of factivity for \Delta ensures monotonicity of \delta – i.e. that \delta (x) \sqsubseteq x.) Define \delta^\alpha for ordinals \alpha recursively as follows:

  • \delta^0(a) := a
  • \delta^{\alpha + 1}(a) := \delta(\delta^\alpha(a))
  • \delta^\gamma (a) := \sqcap_{\alpha < \gamma}\delta^\alpha(a), for limit ordinals \gamma

(We can define \Delta^\alpha analogously if we assume a language which allows for arbitrary conjunctions.) Fix a lattice, B, and let \kappa := sup\{ \alpha \mid \mbox{there is an } \alpha \mbox{ length chain in B}\} where a chain is simply a subset of B that is linearly ordered. It is easy to see that \delta(\delta^\kappa(a)) = \delta^\kappa(a) for every a in B, and thus that \delta^\kappa(\delta^\kappa(a)) = \delta^\kappa(a). This ensures the S4 axiom for \Delta^\kappa

  • \Delta^\kappa p \rightarrow \Delta^\kappa\Delta^\kappa p
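
As a sanity check, here is a minimal finite sketch of the mechanism (my own toy code, nothing from the post – the lattice, the operator delta and the helper names are all made up): on a finite lattice, any “factive” operator, one with \delta(a) \sqsubseteq a, stabilises after finitely many steps when iterated, and the stabilised operator is idempotent, which is the S4-style condition above.

    # Toy model: the powerset lattice of a four-element set, ordered by inclusion.
    from itertools import combinations

    UNIVERSE = frozenset({0, 1, 2, 3})

    def powerset(s):
        """All subsets of s: the elements of our finite lattice."""
        return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    LATTICE = powerset(UNIVERSE)   # meet = intersection, join = union

    def delta(a):
        """A toy 'factive' operator: delta(a) is always a subset of a.
        Here: throw away the largest element while the set has more than two members."""
        return frozenset(sorted(a)[:-1]) if len(a) > 2 else a

    def delta_kappa(a):
        """Iterate delta until it stabilises. On a finite lattice this takes at most
        as many steps as the longest chain (the post's kappa)."""
        while delta(a) != a:
            a = delta(a)
        return a

    for a in LATTICE:
        b = delta_kappa(a)
        assert b <= a                # factivity survives the iteration
        assert delta(b) == b         # b is a fixpoint of delta
        assert delta_kappa(b) == b   # the S4-style condition: iterating again changes nothing
    print("delta^kappa is idempotent on this toy lattice")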

We can show the S5 axiom too.

Note that trivially, every (weakly) monotonic function on a lattice has a fixpoint: the top/bottom element. The crucial point of the above is that we are able to define the fixpoint in the language. Secondly, note also that \kappa depends on the size of the lattice in question – which allows us to calculate \kappa for most of the popular theories in the literature. For bivalent and n-valent logics it’s finite. For continuum valued fuzzy logics the longest chain is just 2^{\aleph_0}. Supervaluationism is a bit harder. Precisifications are just functions from a countable set of predicates to sets of objects. Lewis estimates the number of objects is at most \beth_2, making the number of extensions \beth_3. Obviously \beth_3^{\aleph_0} = \beth_3 so the longest chains are at most \beth_4 (and I’m pretty sure you can always get chains that long.)
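
Laying the counting out (standard cardinal arithmetic, spelling out the step the ‘obviously’ glosses over):

  • extensions are sets of objects: 2^{\beth_2} = \beth_3;
  • precisifications are functions from countably many predicates to extensions: \beth_3^{\aleph_0} = (2^{\beth_2})^{\aleph_0} = 2^{\beth_2 \cdot \aleph_0} = 2^{\beth_2} = \beth_3;
  • lattice elements are sets of precisifications: 2^{\beth_3} = \beth_4, so no chain can be longer than \beth_4.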

All the theories, except possibly supervaluationism, define validity with respect to a fixed lattice. Things get a bit more complicated if you define validity with respect to a class of lattices. Since the class might have arbitrarily large lattices, there’s no guarantee we can find a big enough \kappa. But that said, I think we could instead introduce a new operator, \Delta^* p := \forall \alpha \in On \Delta^\alpha p, which would do the trick (I know, I’m being careless here with notation – but I’m pretty sure it could be made rigorous.)

Final note: what if you take the role of the many valued lattice models to be a way to characterise validity rather than as a semantics for the language? For example – couldn’t you adopt a continuum valued logic, but withhold from identifying the values in [0,1] with truth values? The worry still remains however. Let c := 2^{\aleph_0}. Since \Delta^c p \rightarrow \Delta^c\Delta^c p is true in all continuum valued models, it is valid. Validity gives us truth, so we end up with sharp boundaries in the intended model – even though the intended model needn’t look anything like the continuum valued models.

Apologies: if you read this just after I posted it. Throughout I wrote ‘Boolean algebra’ where I meant ‘lattice’ :-s. I have corrected this now.