Vagueness and uncertainty

June 17, 2009

My BPhil thesis is finally finished so I thought I’d post it here for anyone who’s interested.

Unrestricted Composition: the argument from the semantic theory of vagueness?

May 14, 2009

I’ve seen the following claim made quite a lot in and out of print, so I’m wondering if I’m missing something. The claim is that Lewis’s argument for unrestricted composition relies on a semantic conception of vagueness. In particular, people seem to think epistemicists can avoid the argument.

Maybe I’m reading Lewis’s argument incorrectly, but I can’t see how this is possible. The argument seems to have three premisses:

  1. If a complex expression is vague, then one of its constituents is vague.
  2. Neither the logical constants, nor the parthood relation are vague.
  3. Any answer to the special composition question that accords with intuitions must admit vague instances of composition.

By 3, there could be a vague case of fusion: suppose it’s vague whether the xx fuse to make y. Then it must be vague whether or not \forall x(x \circ y \leftrightarrow \exists z(z \prec xx \wedge z \circ x)). By 1, this means either parthood or one of the logical constants is vague, which contradicts 2.
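The fusion clause here can be made concrete in a toy model. Below is a minimal sketch (my own illustration – the names `overlaps` and `fuses` are invented, not Lewis’s notation) over a three-atom mereology in which objects are nonempty sets of atoms and parthood is the subset relation: y fuses the xx just in case anything overlaps y iff it overlaps one of the xx.

```python
from itertools import chain, combinations

# Toy mereology: objects are the nonempty subsets of {1, 2, 3},
# parthood is the subset relation, overlap is nonempty intersection.
atoms = {1, 2, 3}
objects = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(atoms), n) for n in range(1, len(atoms) + 1))]

def overlaps(x, y):
    return bool(x & y)

def fuses(xx, y):
    # y fuses the xx iff: for every x, x overlaps y iff x overlaps some z among the xx
    return all(overlaps(x, y) == any(overlaps(z, x) for z in xx)
               for x in objects)

xx = [frozenset({1}), frozenset({2, 3})]
assert fuses(xx, frozenset({1, 2, 3}))      # the union of the xx is their fusion
assert not fuses(xx, frozenset({1, 2}))     # nothing smaller will do
```

In a classical model like this, whether composition occurs is never vague; premiss 3 says that any intuitive restriction on composition would change that.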

I can’t see any part of the argument that requires me to read `vague’ as `semantically indeterminate’. These all seem to be plausible principles about vagueness, and if, say, epistemicism doesn’t account for one of these principles, so much the worse for epistemicism.

That said, I think epistemicists should be committed to these principles. Since it would be a pretty far-off world where we used English non-compositionally, the metalinguistic safety analysis of vagueness ensures that 1 holds. Epistemicists, like anyone else, think that the logical constants are precise. Parthood always was the weak link in the argument, but one might think you could vary usage quite a bit without changing the meaning of parthood, since it refers to a natural relation and is a reference magnet. Obviously the conclusion that the conditions for composition to occur are sharp isn’t puzzling for an epistemicist. But epistemicists think that vagueness is a much stronger property than sharpness (the latter being commonplace), and the conclusion that the circumstances under which fusion occurs do not admit vague instances should be just as bad for an epistemicist as for anyone else who takes a middle position on the special composition question.

The most I can get from arguments that epistemicism offers a way out is roughly: “Epistemicists are used to biting bullets. Lewis’s argument requires you to bite bullets. Therefore we should be epistemicists.” Is this unfair?

Truth Functionality

May 4, 2009

I’ve been thinking a lot about giving intended models to non-classical logics recently, and this has got me very muddled about truth functionality.

Truth functionality seems like such a simple notion. An n-ary connective, \oplus, is truth functional just in case the truth value of \oplus(p_1, \ldots, p_n) depends only on the truth values of p_1, \ldots, p_n.

But cashing out what “depends” means here is harder than it sounds. Consider, for example, the following (familiar) connectives.

  • |\Box p| = T iff, necessarily, |p| = T.
  • |p \vee q| = T iff |p| = T or |q| = T.

Why, in the second example but not the first, does the truth value of the complex sentence depend on the truth values of its constituents? They’ve both been given in terms of the truth value of p. It would be correct, but circular, to say that the truth value of \Box p doesn’t depend on the truth value of p because its truth value isn’t definable from the truth value of p using only truth functional vocabulary in the metalanguage. But clearly this isn’t helpful – for we want to know what counts as truth functional vocabulary, whether in the metalanguage or anywhere else. For example, what distinguishes the first from the second example? To say that \vee is truth functional and \Box isn’t because “or” is truth functional and “necessarily” isn’t, is totally unhelpful.

Usually the circularity is better hidden than this. For example, you can talk about “assignments” of truth values to sentence letters, and say that if two assignments agree on the truth values of p_1, \ldots, p_n then they’ll agree on \oplus(p_1, \ldots, p_n). But what are “assignments” and what is “agreement”? One could simply stipulate that assignments are functions in extension (sets of ordered pairs) and that f and g agree on some sentences if f(p)=g(p) for each such sentence p.

But there must be more restrictions than this: presumably the assignment that assigns F to p and to q but T to p \vee q is not an acceptable assignment. Otherwise there would be assignments which give the same truth values to p and q but different truth values to p \vee q, making disjunction non-truth-functional. Thus we must restrict ourselves to acceptable assignments: assignments which preserve the truth functionality of the truth functional connectives.
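To see the point, here is one way (my own sketch, with an invented encoding of formulas as tagged tuples) to make ‘acceptable assignment’ precise – and notice that it visibly presupposes the truth table for disjunction, which is just the circularity being complained of:

```python
# Assignments as bare functions-in-extension (dicts): nothing forces them
# to respect the truth tables. Formulas are tagged tuples, e.g.
# ("or", "p", "q"); an "acceptable" assignment is one whose value on a
# disjunction is computed from its values on the disjuncts.

def acceptable(f):
    # f respects the truth table for "or" wherever it assigns a disjunction
    return all(f[phi] == (f[phi[1]] or f[phi[2]])
               for phi in f if isinstance(phi, tuple) and phi[0] == "or")

good = {"p": False, "q": False, ("or", "p", "q"): False}
bad  = {"p": False, "q": False, ("or", "p", "q"): True}

assert acceptable(good)
assert not acceptable(bad)
# good and bad agree on p and q but disagree on p ∨ q: admitting `bad`
# as an assignment would make disjunction come out non-truth-functional.
```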

Secondly, there need to be enough assignments. The talk of assignments is only OK if there is an assignment corresponding to the intended assignment of truth values to English sentences. I believe that it’s vague whether p just in case it’s vague whether “p” is true (this follows from the claim that the T-schema holds determinately). Thus if there’s vagueness in our language, we had better admit assignments such that it can be vague whether f(p)=T, so the restriction to precise assignments is not in general OK. Similarly, if you think the T-schema is necessary, the restriction of assignments to functions in extension is not innocent either – e.g., if p is true but not necessarily true, we need an assignment f such that f(p)=T and possibly f(p)=F.

Let me take an example where I think it really matters. A non-classical logician – for concreteness, take a proponent of Lukasiewicz logic – will typically think there are more truth functional connectives (of a given arity) than the classical logician. For example, our Lukasiewicz logician thinks that the conditional is not definable from negation and disjunction. (NOTE: I do not mean truth functional on the continuum of truth values [0, 1] – I mean on {T, F} in a metalanguage where it can be vague that f(p)=T.) “How can this be?” you ask; surely we can just count the truth tables: there are 2^{2^n} truth functional n-ary connectives.

To see why it’s not so simple, consider the following example. We want to calculate the truth table of p \rightarrow q.

  • p \rightarrow q: reads T just in case the second column reads T, if the first column does.
  • p \vee q: reads T just in case the first or the second column reads T.
  • \neg p: reads T if the first column doesn’t read T.

The classical logician claims that the truth table for p \rightarrow q should be the same as the truth table for \neg p \vee q. This is because she accepts the equivalence between “the second column is T if the first is” and “the second column is T or the first isn’t” in the metalanguage. However the non-classical logician denies this – the truth values will differ in cases where it is vague what truth value the first and second columns read. For example, suppose it is vague whether each column reads T, but the second reads T if the first does (say the first column reads T iff 88 is small, and the second column reads T iff 87 is small). Then the column for \rightarrow will determinately read T. But the statement that \neg p \vee q reads T will be equivalent to an instance of excluded middle in the metalanguage which fails, so it will be vague in that case whether it reads T.
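For contrast only – the post is explicitly not about the continuum-valued semantics – the familiar [0, 1] presentation of Lukasiewicz logic gives a quick way to see the conditional coming apart from \neg p \vee q at borderline cases (a sketch, my own illustration):

```python
# Standard [0, 1] Lukasiewicz connectives -- a contrast case only, since
# the post's point concerns vagueness in the metalanguage, not degrees.
def neg(a):     return 1 - a
def disj(a, b): return max(a, b)
def impl(a, b): return min(1, 1 - a + b)

p = q = 0.5                       # borderline cases, on a par
assert impl(p, q) == 1.0          # p -> q comes out fully true
assert disj(neg(p), q) == 0.5     # but ¬p ∨ q is only half true

# At the classical endpoints the two still agree:
assert impl(1, 0) == disj(neg(1), 0) == 0
```

So even counting by truth tables, the conditional is not definable from negation and disjunction once intermediate cases are in play.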

The case that \rightarrow is truth functional for this non-classical logician seems to me pretty compelling. But why, then, can we not make exactly the same case for the truth functionality of \Box p? I see almost no disanalogy in the reasoning. Suppose I deny that negation and the truth operator are the only unary truth functional connectives, and claim that \Box is a further one. The only cases where \Box p comes apart from negating p or applying the truth operator to p are cases where it is contingent what the first column of the truth table reads.

I expect there is some way of unentangling all of this, but I think, at least, that the standard explanations of truth functionality fail to do this.

Field on Restall’s Paradox

April 23, 2009

I’ve been casually reading Field’s “Saving Truth from Paradox” for some time now. I think it’s a fantastic book, and I highly recommend it to anyone interested in the philosophy of logic, truth or vagueness.

I’ve just read Ch. 21, where he discusses a paradox presented in Restall 2006. The discussion was very enlightening for me, since I had often thought this paradox to be fatal to non-classical solutions to the liar. But although Field’s discussion convinced me Restall’s argument wasn’t as watertight as I thought it was, I was still left a bit uneasy. (I think there is something wrong with Restall’s argument that Field doesn’t consider, but I’ll come to that.)

Before I continue, I should state the paradox. The problem is that if one has a strong negation, \neg, in the language, one can generate a paradoxical liar sentence which says of itself that it’s strongly not true. Strong negation has the following properties, which ensure that such a sentence is inconsistent:

  1. p, \neg p \models \bot
  2. If \Gamma , p \models \bot then \Gamma \models \neg p

Roughly, the strong negation of p is the weakest proposition inconsistent with p – the first condition guarantees that it’s inconsistent with p, the second that it’s the weakest such proposition. It’s not too hard to see why having such a connective will cause havoc.

Restall’s insight (which was originally made to motivate a “strong” conditional, but it amounts to the same thing) was that one can get such a proposition by brute force: the weakest proposition inconsistent with p is equivalent to the disjunction of all propositions inconsistent with p. Thus, introducing infinitary disjunction into the language, we may just “define” \neg p to be \bigvee \{q \mid p \wedge q \models \bot \}. Each disjunct is inconsistent with p so the whole disjunction must be inconsistent with p, giving us the first condition. If q is inconsistent with p, then q is one of the disjuncts in \neg p so q entails \neg p, giving us (more or less) the second condition.

An initial problem Field points out is that this definition is horribly impredicative – \neg p is inconsistent with p, so \neg p must be one of its own disjuncts. Field complains that such non-well-founded sentences give rise to paradoxes even without the truth predicate – for example, the sentence that is its own negation. (I personally don’t find these kinds of languages too bad, but maybe that’s best left for another post.) This problem can be overcome, since you can run a variant of the argument by disjoining only atomic formulae, so long as you have a truth predicate.

The second point, Field’s supposed rebuttal of the argument, is that to specify a disjunction by a condition, F say, on the disjuncts, you must first show F isn’t vague or indeterminate, or else you’ll end up with sentences such that it is vague/indeterminate what their components are. Allowing such sentences means they can enter into vague/indeterminate relations of validity – for example, it is vague whether a sentence such that it is vague whether it has “snow is white” as a conjunct entails “snow is white”. But the property F, in this case, is the property of entailing a contradiction if conjoined with p. Thus to assess whether F is vague/indeterminate or not, we must ask if entailment can ever be vague. But to do this we must determine whether there are sentences in the language such that it is indeterminate what their components are. Since the language contains the disjunction of the F’s, this requires us to determine whether F is vague – so we have gone in a circle.

Clearly something weird is going on. That said, I don’t quite see how this observation refutes the argument. It’s perfectly consistent with what’s been said above that entailment for the expanded language with infinitary disjunction is precise, that there is a precise disjunction of the things inconsistent with p, and that Restall’s argument goes through unproblematically. It’s also consistent that there *are* vague cases of entailment – but that the two conditions for strong negation above determinately obtain. (There are some subtle issues that must be decided here – e.g., is “p and q” determinately distinct from the sentence that has p as its first conjunct but only indeterminately has q as its second conjunct?)

Even so, I think there are a couple of problems with Restall’s argument. The first is a minor problem. To define the relevant disjunction, we must talk about the property of “entailing a contradiction if conjoined with p”. But to do this we are treating “entails” as if it were a connective in the language. However, one of Field’s crucial insights is that “A entails B” is not an assertion of some kind of implication holding between A and B, but rather the conditional assertion of B on A. “Entails” cannot be thought of as a connective. For one thing, connectives are embeddable, whereas it doesn’t make much sense to talk of embedded conditional assertions. Secondly – a point which I don’t think Field makes explicit – it is crucial that “entails” doesn’t work like an embeddable connective, otherwise one could run a form of Curry’s paradox using entailment instead of the conditional.

This is not supposed to be a knockdown problem. After all, so what if you can’t *define* strong negation: there is, nonetheless, this disjunction whose disjuncts are just those propositions inconsistent with p. We may not be able to define it or refer to it, but God knows which one it is all the same.

The real problem, I think, is the following. How are we construing \neg p? Is it a new connective in the language, stipulated to mean the same as “the disjunction of those things inconsistent with p”? If it is, how do we know it is a logical connective? (If \neg weren’t logical neither (1) nor (2) would hold, since there would be no logical principles governing it.) Field objects to a similar argument from Wright, because “inconsistent with p” is not logical. Inconsistency is not logical: for a start it can only be had by sentences, so it is not topic neutral.

The way of construing \neg p that makes it different from Wright’s argument, and allegedly problematic, is to construe \neg p as schematic for a large disjunction. The symbol \neg does not actually belong to the language at all – writing \neg p is just a metalinguistic shorthand for a very long disjunction, a disjunction that will change, in each case, depending on p. Treating it as such guarantees that (1) and (2) hold, since, when expanded out, they are just truths about the logic of disjunction and don’t contain \neg at all.

But treating \neg p as schematic for a disjunction means it doesn’t behave like an ordinary connective. For one thing, you can’t quantify into its scope. What sentence would \exists x\neg Fx be schematic for? What we want it to mean is that there is some object, a, such that the disjunction of things inconsistent with Fa holds. But there’s no single sentence involved here.

Another crucial shortcoming is that it’s not clear that we can “put a dot” under \neg – that is, define a function which takes the Gödel number of p to the Gödel number of the disjunction of things inconsistent with p. Firstly, there might not be enough Gödel numbers to do this (since we have an uncountable language now!) But secondly, how do we know we can code “inconsistent with p” in arithmetic? Field’s logic isn’t recursively axiomatizable (Welch, forthcoming), so it seems like we’re not going to be able to code “inconsistent with p”, or the strong negation of p – and thus it seems we’re not going to be able to run the Gödel diagonalisation argument. (I was always asleep in Gödel class, so maybe someone can check I’m not missing something here.)

So you can’t get a strongly negated liar sentence through Gödel diagonalisation, but what about indexical self-reference? “This sentence is strongly not true” is schematic for a sentence not including “strongly not”, but with a large disjunction instead. However, which disjunction is it? We’re in the same pickle we were in when we tried to quantify into the scope of \neg. In both cases, the disjunction needs to vary depending on the value of the variable “x”, or, in this case, the indexical “this”.

I can’t say I’ve gotten to the bottom of this, but it’s no longer clear to me how problematic Restall’s argument is for the non-classical logician.

Size and Modality

March 25, 2009

There’s this thing that’s been puzzling me for a while now. It’s kind of related to the literature on indefinite extensibility, but the thing that puzzles me has nothing to do with sets, quantification or Russell’s paradox (or at least, not obviously.) I think it is basically a puzzle about infinities, or sizes.

First I should get clear on what I mean by size. Size, as I am thinking about it, is closely related to what set theorists call cardinality. But there are some important differences.

(i) Cardinality is heavily bound up with set theory, whereas I take it that size talk does not commit us to sets. For example, I believe I can truly say there are more regions than open regions of spacetime, even if I’m a staunch nominalist. Think of size talk as analogous to plural quantification: I am not introducing new objects into the domain (sizes/pluralities), I am just quantifying over the existing individuals in a new way.

(ii) Only sets have cardinalities. I believe you can talk about the sizes of proper-class-sized pluralities.

(iii) Points (i) and (ii) are compatible with a Fregean theory of size. But Fregean sizes, as well as cardinalities, are thought to be had by pluralities (concepts, sets) of individuals in the domain. In particular: every size is the size of some plurality/set. I reject this. I think there are sizes which no plurality has – I think there could have been more things than there in fact are, and thus that there are sizes which no plurality in fact has. So sizes are inherently bound up with modality on this view – sizes are had by possible pluralities.

(iv) Frege and the set theorists both believe sizes are individuals. I’m not yet decided on this one, but Frege’s version of Hume’s principle forces the domain to be infinite, which contradicts (i) – that size talk isn’t ontologically committing. Interestingly, the plural logic version of HP is satisfiable on domains of any size – thus sizes can always be construed as objects, if need be. But I’m inclined to think that size talk is fundamentally grounded in certain kinds of quantified statements (e.g., “there are countably many F’s”).

I’m going to mostly ignore (iv) from here on and talk about sizes as if they were objects, because, as noted, you can consistently do this if need be (given global choice). That said, I can’t adopt HP because of point (iii): it’s built into the notation of HP that every size is the size of some plurality. Furthermore, Hume’s principle entails there is a largest size. (Cardinality theory says there is no largest cardinality, but this is because of an expressive failure on its part – proper classes don’t have cardinalities.) However, if we accept the following principle:

  • Necessarily, there could have been more things.

it follows from (iii) that there is no largest size.

I think this is right. It just seems weird and arbitrary to think that there could be this largest size, \kappa. Why \kappa and not 2^\kappa? Clearly, it seems, there are worlds that have this many things (think of, e.g., Forrest-Armstrong-type constructions). If not, what metaphysical fact could possibly ground this cutoff point?

What I don’t object to is there being a largest size of an actual plurality. I’m fine with arbitrariness, so long as it’s contingent. But to think that there is some size that limits the size of all possible worlds seems really strange. Just to state the existence of a limit seems to commit us to larger sizes – it’s like saying there are sizes which no possible world matches.

Here is a second principle about sizes I really like: any collection of sizes has an upper bound. This is something that Fregean, and in a certain sense cardinality, theories of size share with me, so I’m not going to spend as long defending it. But intuitively, if you can have possible worlds with domains of size \kappa for each \kappa \in S, then there should be a world containing the union of all these domains – a world with at least Sup(S) things.

So this is what I mean by size. Here is the puzzle: this conception of size seems to be inconsistent. To see this we need to formalise a bit further. Take as our primitive a binary relation over sizes, < (informally, “smaller than”). For simplicity, assume we are only quantifying over sizes. Here are some principles. You can ignore 3 and 4 if you want: 1 and 2 are obvious, and 5 and 6 we have just argued for.

  1. \forall x \neg x < x
  2. \forall xyz(x<y<z \rightarrow x<z)
  3. \forall xy(x<y \vee x=y \vee x>y)
  4. \forall xx\exists x(x \prec xx \wedge \forall y(y \prec xx \rightarrow x \leq y))
  5. \forall x \exists y x<y
  6. \forall xx\exists x\forall y(y \prec xx \rightarrow y \leq x)

The first three principles say that < is a total order, which is pretty much self-evident. The fourth says it’s a well order. (The inconsistency to follow doesn’t require (3) or (4).) The fifth encodes the principle that there is no largest size, and the sixth says that every collection of sizes has an upper bound.

These principles are jointly inconsistent. Let xx be the plurality of self-identical things (i.e. all the sizes, since we are quantifying only over sizes). By (6) xx has an upper bound, k. By (5) there is a size k^+ with k < k^+. Since k^+ is among the xx, and k is an upper bound for xx, k^+ \leq k. Thus k < k by (2) and logic, which is impossible by (1).
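The derivation can also be checked mechanically. The following sketch (my own encoding; the finite search is only an illustration, since the argument itself needs no model theory) confirms that no strict order on a small domain satisfies (1), (2), (5) and (6) together, taking the xx in (6) to be all the sizes:

```python
from itertools import product

def satisfies_all(n, less):
    # less[x][y] encodes x < y over sizes 0..n-1
    sizes = range(n)
    irref  = all(not less[x][x] for x in sizes)                        # (1)
    trans  = all(not (less[x][y] and less[y][z]) or less[x][z]
                 for x, y, z in product(sizes, repeat=3))              # (2)
    no_max = all(any(less[x][y] for y in sizes) for x in sizes)        # (5)
    bound  = any(all(y == x or less[y][x] for y in sizes)
                 for x in sizes)                                       # (6), xx = everything
    return irref and trans and no_max and bound

# Brute force over every binary relation on up to 4 sizes: none works.
for n in range(1, 5):
    for bits in product([False, True], repeat=n * n):
        less = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        assert not satisfies_all(n, less)
```

Of course the point of the proof in the text is that the principles fail on domains of any size, not just finite ones; the search merely illustrates the clash.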

There are roughly three ways out of this usually considered. Fregean theories reject (5), cardinality theory (with unrestricted plural quantifiers) denies (6), and indefinite extensibilists do something funky with the quantifiers (I’ve never really worked out how that helps, but it’s there for completeness). Also note that the version of (6) restricted to “small” (roughly, “set-sized”) pluralities is consistent.

My own diagnosis is that the above formulation of size theory simply fails to take account of the modal nature of sizes. If we are pretending that sizes are objects at all (which, I think, is also not an innocent assumption), we should remember that just because there could be such a size doesn’t mean there in fact is such a size. This is the same kind of fallacious reasoning encoded in the Barcan formula and its converse (this is partly why it is very unhelpful to think of sizes as objects: we are naturally inclined to think of them as abstract, necessarily existing objects).

Anyway – a natural way to formulate (1)-(6) in modal terms would be in a second order modal logic, perhaps with a primitive second level size comparison relation. For example, (1) would be ‘necessarily, if the xx are everything, then there aren’t more xx than xx’, (2) would be ‘necessarily for all xx, necessarily for all yy, necessarily for all zz, if there are more zz’s than yy’s and more yy’s than xx’s then there are more zz’s than xx’s’, and (5) would be ‘necessarily, there could have been more things’. The only problem is: how would we state (6)?

I’ve been toying around with propositional quantification. Let me change the primitives slightly: instead of using \Box p, \Diamond p to talk about possibility and necessity, I’ll interpret them as saying p is true in some/every accessible world with a larger domain than the current world. Also, since I don’t care about anything about a world except the size of its domain, let us think of the worlds not as representing maximally specific ways for things to be, but as sizes themselves. Thus the intended models of the theory will be Kripke frames of the following form: \langle W, R \rangle where (i) the transitive closure of R is a well order on W, and (ii) for each w in W, R is a well order on R(w). (We’re going to have to give up S4, so we mustn’t assume R is transitive on W, although it’s locally transitive on R(w) for each w in W.) Propositions are sets of worlds, so the range of the propositional quantifiers differs from world to world, since R is non-trivial.

Call R a local well order on W iff it satisfies (i) and (ii). I’m going to assert without defence (for the time being) that the formulae valid over the class of local well orders, will be the modal equivalent of (1)-(4) holding (I expect it would be fairly easy to come up with an axiomatisation of this class directly and that this axiomatisation would correspond to (1)-(4). For example, the complicated one, (4), would correspond to \forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r))).)

The important thing is that it is possible to state (5) and (6) directly, and, it seems, consistently (although we’ll have to give up on unrestricted S4). [Note: I may well have made some mistakes here, so apologies in advance.]

  1. \Box p \rightarrow p
  2. \forall pqr(\Diamond(p \wedge \Diamond(q \wedge \Diamond r)) \rightarrow \Diamond(p \wedge \Diamond r))
  3. \forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r)))
  4. \Box\exists p(p \wedge \Diamond \neg p)
  5. \forall p \Diamond\exists q(q \wedge \neg p)

(I decided halfway through writing this post that it was simpler to axiomatise a reflexive well order, so the modal (1)-(4) above don’t correspond as naturally to the original (1)-(4) – I’ll try and neaten this up at some point.)

What is slightly striking is the failure of S4. Informally, if I were to have S4 I would be able to quantify over the universal proposition of all worlds, take its supremum by (6), and find a world not in the proposition by (5). This would just be a version of the inconsistency given for the extensional size theory above.
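Here is a toy frame of the relevant kind (my own illustration): each world/size sees itself and the next size up, the transitive closure of R is a well order, but R itself is not transitive, and the S4 axiom \Box p \rightarrow \Box\Box p fails:

```python
# A tiny "local well order" frame: worlds 0, 1, 2 are sizes; each world
# sees itself and the next size up, but not beyond. The transitive
# closure of R is a well order on W, yet R itself is not transitive.
W = [0, 1, 2]
R = {0: {0, 1}, 1: {1, 2}, 2: {2}}

def box(p):
    # worlds at which p holds in every accessible world
    return {w for w in W if R[w] <= p}

p = {0, 1}
assert box(p) <= p               # the T axiom holds: R is reflexive
assert 0 in box(p)               # Box p holds at world 0...
assert 0 not in box(box(p))      # ...but Box Box p fails there: no S4
```

From world 0 you can "see" the proposition {0, 1}, but world 1 can already see the larger world 2 outside it, which is what blocks the supremum-then-exceed reasoning.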

Instead, we have a picture on which worlds can only see a limited number of world sizes – to see the larger sizes you have to move to larger worlds. At no point can you “quantify” over all collections of worlds – so, at least in this sense, the view is quite close to the indefinite extensibility literature. But of course, the non-modal talk is misleading: worlds are really maximally specific propositions, and the only propositions that exist are those in the range of our propositional quantifiers at the actual world – the worlds inaccessible to the actual world in the model should just be thought of as a useful picture for characterising which sentences in the box and diamond language are true at the actual world.

Greatest Philosopher of the 20th-Century?

March 2, 2009

You can find out here.

But seriously: Lewis came second to Wittgenstein? (I could understand how LW might rank top in a poll involving the general public, but the first ranking was supposedly based mostly on the Leiter readership!)

Update: some interesting thoughts on Russell’s ranking here and here.

Fitch’s paradox and self locating belief

February 21, 2009

It’s been a while since I last posted here – which is bad seeing as I’ve had much less going on recently. I hope to return to regular blogging soon!

For now, just a little note on something I’ve been thinking about, to do with a version of the knowability principle for rational belief. Back in this post I considered a version of Fitch’s paradox for rational belief, which shows that the following believability principle cannot hold in full generality (C stands for rational certainty):

  • (p \rightarrow \Diamond Cp)

Here’s another route to that conclusion, if you accept something like Adam Elga’s indifference principle. Suppose p is the proposition that you are in a Dr. Evil-like scenario: that (a) you are Dr. Evil, and (b) you have just received a message from entirely reliable people on Earth saying they have created an exact duplicate of Dr. Evil, whose situation is epistemically indistinguishable from Dr. Evil’s (including having him receive a duplicate message like this one), and who will be tortured unless Dr. Evil deactivates his super laser. Notice that p includes self-locating information.

If you accept Elga’s version of the indifference principle, once you’ve become certain of (b) you’re rationally required to lower your credence that you’re Dr. Evil to 1/2 and give credence 1/2 to the hypothesis that you’re the clone. So suppose for reductio that you could be certain that p. Since p is the conjunction of (a) and (b) you must be certain in both (a) and (b). But this is impossible, since indifference requires anyone who is certain in (b) to give credence 1/2 (or less) to (a).
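The arithmetic behind the reductio is simple enough to spell out (a sketch with invented variable names):

```python
# Hypothetical credence bookkeeping for the Dr. Evil case: certainty in
# (b) plus Elga-style indifference caps your credence in p = (a) and (b).
cr_b = 1.0                   # certain of the message setup (b)
cr_a_given_b = 0.5           # indifference: Dr. Evil vs. the clone
cr_p = cr_a_given_b * cr_b   # credence in p = (a) and (b)
assert cr_p == 0.5           # so rational certainty in p is ruled out
```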

So it is impossible to be certain of p (and p is probably unknowable too). Since p is clearly possibly true, the principle given above is at best contingently true.