h1

B entails that a conjunction of determinate truths is determinate

October 26, 2010

I know it’s been quiet for a while around here. I have finally finished a paper on higher order vagueness which has been  a long time coming, and since I expect it to be in review for quite a while longer I decided to put it online. (Note: I’ll come back to the title of this post in a bit, after I’ve filled in some of the background.)

The paper is concerned with a number of arguments that purport to show that it is always a precise matter whether something is determinate at every finite order. This would entail, for example, that it was always a precise matter whether someone was determinately a child at every order, and thus, presumably, that this is also a knowable matter. But it seems just as bad to be able to know things like “I stopped being a determinate child at every order after 123098309851248 nanoseconds from my birth” as to know the corresponding kinds of things about being a child.

What could the premisses be that give such a paradoxical conclusion? One of the principles, distributivity, says that a (possibly infinite) conjunction of determinate truths is determinate; the other, B, says p \rightarrow \Delta\neg\Delta\neg p. If \Delta^* p is the conjunction of p, \Delta p, \Delta\Delta p, and so on, distributivity easily gives us (1) \Delta^*p \rightarrow\Delta\Delta^* p. Given that determinacy obeys the modal logic K we quickly get \Delta\neg\Delta\Delta^*p \rightarrow\Delta\neg\Delta^* p, which combined with \neg\Delta^* p\rightarrow \Delta\neg\Delta\Delta^* p (an instance of B) gives (2) \neg\Delta^* p\rightarrow\Delta\neg\Delta^* p. Excluded middle together with (1) and (2) gives us \Delta\Delta^* p \vee \Delta\neg\Delta^* p, which is the bad conclusion.
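Spelled out, the derivation runs roughly as follows (assuming necessitation for \Delta, and identifying \neg\neg\Delta^* p with \Delta^* p):

  1. \Delta^* p \rightarrow \Delta\Delta^* p. (Given \Delta^* p, each conjunct \Delta^n p is determinate, since \Delta\Delta^n p is itself one of the conjuncts of \Delta^* p; so by distributivity the whole conjunction is determinate.)
  2. \Delta\neg\Delta\Delta^* p \rightarrow \Delta\neg\Delta^* p. (Contrapose 1., then apply necessitation and K.)
  3. \neg\Delta^* p \rightarrow \Delta\neg\Delta\Delta^* p. (The instance of B with \neg\Delta^* p in place of p.)
  4. \neg\Delta^* p \rightarrow \Delta\neg\Delta^* p. (From 2. and 3.)
  5. \Delta\Delta^* p \vee \Delta\neg\Delta^* p. (From 1., 4. and excluded middle.)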

In the paper I argue that B is the culprit.* The main moving part in Field’s solution to this problem, by contrast, is the rejection of distributivity. I think I finally have a conclusive argument that it is B that is responsible, and that is that B actually *entails* distributivity! In other words, no matter how you block the paradox you’ve got to deny B.

I think this is quite surprising and the argument is quite cute, so I’ve written it up in a note. I’ve put it in a pdf rather than post it up here, but it’s only two pages and the argument is actually only a few lines. Comments would be very welcome.

* Actually a whole chain of principles weaker than B can cause problems, the weakest one I consider being \Delta(p\rightarrow\Delta p)\rightarrow(\neg p \rightarrow \Delta\neg p), which corresponds to the frame condition: if x can see y, there is a finite chain of steps from y back to x, each step of which x can see.

h1

Interpreting the third truth value in Kripke’s theory of truth

March 28, 2010

Notoriously, there are many different theories of untyped truth which use Kripke’s fixed point construction in one way or another as their mathematical basis. The core result is that one can assign every sentence of a semantically closed language one of three truth values in a way that \phi and Tr(\ulcorner\phi\urcorner) receive the same value.
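To fix ideas, here is a toy sketch of the construction in Python (everything here – the representation of sentences, the example sentences and the helper names – is invented for illustration; I use the strong Kleene scheme for concreteness, and since there are only finitely many sentences the usual transfinite iteration collapses after a couple of steps):

```python
# Toy illustration only: a hand-rolled three-valued language with a Tr predicate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    arg: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

@dataclass(frozen=True)
class Tr:
    quote: str  # the *name* of a sentence, which is how self-reference gets in

HALF = 0.5                               # the third value
ground = {"snow_is_white": 1}            # ground model: values of the ordinary atoms

sentences = {                            # named sentences
    "s1": Atom("snow_is_white"),
    "liar": Not(Tr("liar")),                    # "I am not true"
    "teller": Tr("teller"),                     # "I am true"
    "lem": Or(Tr("liar"), Not(Tr("liar"))),     # excluded middle for the liar
}

def val(s, ext, anti):
    """Strong Kleene value of s relative to a partial extension/anti-extension for Tr."""
    if isinstance(s, Atom):
        return ground[s.name]
    if isinstance(s, Not):
        v = val(s.arg, ext, anti)
        return HALF if v == HALF else 1 - v
    if isinstance(s, Or):
        return max(val(s.left, ext, anti), val(s.right, ext, anti))
    if isinstance(s, Tr):
        return 1 if s.quote in ext else (0 if s.quote in anti else HALF)

def minimal_fixed_point():
    """Iterate the jump starting from the empty interpretation until nothing changes."""
    ext, anti = set(), set()
    while True:
        new_ext = {n for n, s in sentences.items() if val(s, ext, anti) == 1}
        new_anti = {n for n, s in sentences.items() if val(s, ext, anti) == 0}
        if (new_ext, new_anti) == (ext, anti):
            return ext, anti
        ext, anti = new_ext, new_anti

ext, anti = minimal_fixed_point()
for name, s in sentences.items():
    print(name, val(s, ext, anti), val(Tr(name), ext, anti))
# s1 gets 1; the liar, the truth-teller and the excluded-middle instance all stay at 1/2;
# and in every case the sentence and Tr of its name get the same value.
```

Of course in the real construction the language is quantificational and closed under its own syntax, so the iteration runs into the transfinite; the toy is only meant to show the jump and the three resulting statuses.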

However, how one interprets these values, how they relate to valid reasoning and how they relate to assertability is left open. There are classical interpretations in which assertability goes by truth in the classical model which assigns Tr the positive extension of the fixed point, and consequence is classical (Feferman’s theory KF.) There are paraconsistent interpretations in which the middle value is thought of as “true and false”, and assertability and validity go by truth and preservation of truth. There’s also the paracomplete theory where the middle value is understood as neither true nor false and assertability and validity defined as in the paraconsistent case. Finally, you can mix these views as Tim Maudlin does – for Maudlin assertability is classical but validity is the same as the paracomplete interpretation.

In this post I want to think a bit more about the paracomplete interpretations of the third truth value. A popular view, which originated with Kripke himself, is that the third truth value is not really a truth value at all. For a sentence to have that value is simply for the sentence to be ‘undefined’ (I’ll use ‘truth status’ instead of ‘truth value’ from now on.) Undefined sentences don’t even express a proposition – something bad happens before we can even get to the stage of assigning a truth value. It simply doesn’t make sense to ask what the world would have to be like for a sentence to ‘halfly’ hold.

This view seems to have a number of problems. The most damning, I think, is the theory’s inability to state this explanation of the third truth status. On the face of it we can say, in the language containing the truth predicate, what it is to fail to express a proposition: a sentence has truth value 1 if it’s true, has truth value 0 if its negation is true, and it has truth status 1/2, i.e. doesn’t express a proposition, if neither it nor its negation is true.

In particular, we have the resources to say that the liar sentence \phi does not express a proposition: \neg Tr(\ulcorner\phi\urcorner)\wedge\neg Tr(\ulcorner\neg\phi\urcorner). However, since neither conjunct of this sentence expresses a proposition, the whole sentence, the sentence ‘the liar does not express a proposition’, does not itself express a proposition either! Furthermore, the sentence immediately before this one doesn’t express a proposition either (and neither does this one.) It is never possible to say that a sentence doesn’t express a proposition unless you’ve either failed to express a proposition, or you’ve expressed a false proposition. What’s more, we can’t state the fixed point property: we can’t say that the liar sentence has the same truth status as the sentence that says the liar is true, since that won’t express a proposition either: the instance of the T-schema for the liar sentence fails to express a proposition.

The ‘no proposition’ interpretation of the third truth value is inexpressible: if you try to describe the view you fail to express anything.

Another interpretation rejects the third value altogether. This interpretation is described in Field’s book, but I think it originates with Parsons. The model for assertion and denial is this: assert just the things that get value 1 in the fixed point construction and reject the rest. Thus the sentences “some sentences are neither true nor false” and “some sentences do not express a proposition” should be rejected, as they come out with value 1/2 in the minimal fixed point. As Field points out, though, this view is also expressively limited – you don’t have the resources to say what’s wrong with the liar sentence. Unlike in the previous case, where you did have those resources but always failed to express anything with them, in this case being neither true nor false is not what’s wrong with the liar, since we reject that the liar is neither true nor false. (Although Field points out that while you can classify problematic sentences in terms of rejection, you can’t classify contingent liars, where you’d need to say things like ‘if such and such were the case, then s would be problematic’, since this requires an embeddable operator of some sort.)

I want to suggest a third interpretation. The basic idea is that, unlike the second interpretation, there is a sense in which we can communicate that there is a third truth status, and unlike the first, 1/2 is a truth value, in the sense that sentences with that status express propositions and those propositions “1/2-obtain” – if the world is in this state I’ll say the proposition obtails.

In particular, there are three ways the world can be with respect to a proposition: things can be such that the proposition obtains, such that it fails, and such that it obtails.

What happens if you find out a sentence has truth status 1/2 (i.e. you find out it expresses a proposition that obtails)? Should you refrain from adopting any doxastic attitude, say, by remaining agnostic? I claim not – agnosticism comes about when you’re unsure about the truth value of a sentence, but in this case you know the truth value. However it is clear that you shouldn’t accept or reject it either – those attitudes are reserved for propositions that obtain and fail respectively. It seems most natural on this view to introduce a third doxastic attitude: I’ll call it receptance. When you find out a sentence has truth value 1 you accept it, when you find out it has value 0 you reject it, and when you find out it has value 1/2 you recept it. If you haven’t found out the truth value yet you should withhold all three doxastic attitudes and remain agnostic.

How do you communicate to someone that the liar has value 1/2? Given that the sentence which says the liar has value 1/2 also has value 1/2, you should not assert that the liar has value 1/2. You assert things in the hope that your audience will accept them, and this is clearly not what you want if the thing you want to communicate has value 1/2. Similarly you deny things in the hope that your audience will reject them. Thus this view calls for a completely new kind of speech act, which I’ll call “absertion”, that is distinct from the speech acts of assertion and denial. In a bivalent setting the goal of communication is to make your audience accept true things and reject false things, and once you’ve achieved that your job is done. However, in the trivalent setting there is more to the picture: you also want your audience to recept things that have value 1/2, which can’t be achieved by asserting them or denying them. The purpose of communication is to induce *correct* doxastic states in your audience, where a doxastic state of acceptance, rejection or receptance in s is correct iff s has value 1, 0 or 1/2 respectively. If you instead absert sentences like the liar, and your audience believes you’re being cooperative, they will adopt the correct doxastic attitude of receptance.

This, I claim, all follows quite naturally from our reading of 1/2 as a third truth value. The important question is: how does this help us with the expressive problems encountered earlier? The idea is that in this setting we can *correctly* communicate our theory of truth using the speech acts of assertion, denial and absertion, and we can have correct beliefs about the world by also recepting some sentences as well as accepting and rejecting others. The problem with the earlier interpretations was that we could not correctly communicate the idea that the liar has value 1/2, because it was taken for granted that to correctly communicate this to someone involved making them accept it. On this interpretation, however, to correctly express the view requires only that you absert the sentences which have value 1/2. Of course any sentence that says of another sentence that it has value 1/2 has value 1/2 itself, so you must absert, not assert, those too. But this is all to be expected when the objective of expressing your theory is to communicate it correctly, and communicating correctly involves more than just asserting truthfully.

Assertion in this theory behaves much like it does in the paracomplete theory that Field describes, however some of the things Field suggests we should reject we should absert instead (such as the liar.) To get the idea, let me absert some rules concerning absertion:

  • You can absert the liar, and you can absert that the liar has value 1/2.
  • You can absert that every sentence has value 1, 0 or 1/2.
  • You ought to absert any instance of a classical law.
  • Permissible absertion is not closed under modus ponens.
  • If you can permissibly absert p, you can permissibly absert that you can permissibly absert p.
  • If you can absert p, then you can’t assert or deny p.
  • None of these rules are assertable or deniable.

(One other contrast between this view and the no-proposition view is that it sits naturally with a more truth functionally expressive logic. The no-proposition view is often motivated by appeal to the Kleene truth functions: a three valued function is a Kleene truth function when it behaves like a particular two valued truth function on two valued inputs, and has value 1/2 when the corresponding two valued function could have had either 1 or 0 depending on how one replaced the 1/2’s in the three valued input with 1 or 0. \neg, \vee are expressively adequate with respect to the Kleene truth functions so defined. However, Kripke’s construction works with any monotonic truth function (monotonic in the ordering that puts 1/2 at the bottom and 1 and 0 above it but incomparable to each other) and \neg, \vee are not expressively complete w.r.t. the monotonic truth functions. There are monotonic truth functions that aren’t Kleene truth functions, such as “squadge”, which puts 1/2 everywhere that Kleene conjunction and disjunction disagree, and puts the value they agree on elsewhere. Squadge, negation and disjunction are expressively complete w.r.t. the monotonic truth functions.)
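As a quick sanity check on that last claim, here is a throwaway script (the encoding of the values and the function names are just mine) verifying that squadge, defined as above, is monotonic in the relevant ordering but gives 1/2 on a pair of classical inputs, so it isn’t a Kleene truth function:

```python
# Throwaway check: squadge is monotone in the information ordering but not Kleene.
from itertools import product

VALS = (0, 0.5, 1)  # 0.5 plays the role of the value 1/2

def below(x, y):
    """x is below y in the ordering with 1/2 at the bottom and 0, 1 incomparable above it."""
    return x == y or x == 0.5

def kleene_and(x, y):
    return min(x, y)

def kleene_or(x, y):
    return max(x, y)

def squadge(x, y):
    """1/2 wherever Kleene conjunction and disjunction disagree; their common value elsewhere."""
    return kleene_and(x, y) if kleene_and(x, y) == kleene_or(x, y) else 0.5

def monotonic(f):
    return all(below(f(x, y), f(x2, y2))
               for x, y, x2, y2 in product(VALS, repeat=4)
               if below(x, x2) and below(y, y2))

print(monotonic(kleene_and), monotonic(kleene_or), monotonic(squadge))  # True True True
print(squadge(1, 0))  # 0.5, so on classical inputs it doesn't mimic any two valued function
```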

h1

Is ZFC Arithmetically Sound?

February 12, 2010

I recently stumbled across this fascinating discussion on FOM. The question at stake: why should we believe that ZFC doesn’t prove any false statements about numbers? That is, while of course we should believe ZFC is consistent and \omega-consistent, that is no reason to expect it not to prove false things: perhaps even false things about numbers that we could, in some sense, verify.

Of course – the “in some sense” is important, as Harvey Friedman stressed in one of the later posts. After all ZFC can prove everything PA can, so whatever the false consequences of ZFC are, we couldn’t prove them false from PA. There were a number of interesting suggestions. For example it might prove the negation of something we have lots of evidence for (e.g. something like Goldbach’s conjecture where we have verified lots of its instances – except unlike GC it can’t be \Pi^0_1.) Or perhaps it would prove that some Turing machine halts, when in fact the machine would never halt if we actually built it and ran it. There’s a clear sense that it’s false that the TM halts, even though we can’t verify it conclusively.

Anyway, while reading all this I became quite a lot less sure about some things I used to be pretty certain about. In particular, a view I had never thought worth serious consideration: the view that there isn’t a determinate notion of being ‘arithmetically sound’. Or more transparently, the view that there’s no such thing as *the* standard model of arithmetic, i.e. there are lots of equally good candidate structures for the natural numbers, and that there’s no determinate notion of true and false for arithmetical statements. Now that I have given it fair consideration I’m actually beginning to be swayed by it. (Note: this is not to say I don’t think there’s a matter of fact about statements concerning certain physical things like the ordering of yearly events in time, or whether a physical Turing machine will eventually halt. It’s just I think this could turn out to be contingent. It’ll depend, I’m guessing, on the structure of time and the structure of space in which the machine tape is embedded. Thus, on this view, arithmetic is like geometry – there is no determinate notion of true-for-geometry, but there is a determinate notion of true of the geometry of our spacetime, which actually turns out to be a weird geometry.)

Something that would greatly increase my credence in this view would be if we could find a pair of “mysterious axioms”, (MA1) and (MA2), which had the following properties. (a) they are like the continuum hypothesis, (CH), in that they are independent of our currently accepted set theory, say ZFC plus large cardinals, and, like (CH), it is unclear how things would have to be for them to be true or false. (b) unlike (CH) and its negation, (MA1) and (MA2) disagree about some arithmetical statement.

Let me first say a bit more about (a). On some days of the week I doubt there are any sets, or that there are as many things as there would need to be for there to be sets. However I believe in plural quantification, and believe that if there *were* enough things, then we could generate models for ZFC just by considering pluralities of ordered pairs. But even given all that I don’t think I know what things would have to be like for (CH) to be true. If there is a plurality of ordered pairs that satisfies ZF(C), then there is one that satisfies ZFC+CH, namely Gödel’s constructible universe, and also one that doesn’t satisfy CH. So even given that we have all these objects, it is not clear which relation should represent membership between them. I can only think of two reasons to think there is a preferred relation: (1) if there were a perfectly natural relation, membership, between these objects which somehow set theorists are able to latch onto and intuit things about from their armchair or (2) there is only one such relation (up to isomorphism anyway) compatible with the linguistic practices of set theorists. Neither of these seem particularly plausible to me.

Now let me say a bit about (b). Note firstly that Con(ZFC) is an arithmetical statement independent of ZFC. However it is not like (CH), in that we have good reason to believe its negation is false. And more to the point, its negation is inconsistent with there being any inaccessibles. (MA1) and (MA2) are going to have to be subtler than that.

It is also instructive to consider the following argument that ZFC *is* arithmetically sound. Suppose it’s determinate that there’s an inaccessible (a reasonable assumption, if we grant there are enough things, and that the truth of these claims is partially fixed by the practices of set theorists.) Let \kappa be the first one. Then V_\kappa is a model for ZFC which models every true arithmetical statement (because the natural numbers are an initial segment of \kappa [edit: and arithmetical statements are absolute].) So ZFC cannot prove any false arithmetical statement. That is, determinately, ZFC is arithmetically sound. And all we’ve assumed is that it’s determinate that there’s an inaccessible.

Now I find this argument convincing. But clearly this doesn’t prove that every arithmetic statement is determinate. All it shows is that arithmetic is determinate if ZFC is. But (CH) has already brought the antecedent into doubt! So although V_\kappa determinately decides every arithmetical statement correctly, it is still indeterminate what V_\kappa makes true. That is, both (MA1) and (MA2) disagree not only over some arithmetical statement, but also over whether V_\kappa makes that statement true.

Now maybe there isn’t anything like (MA1/2). Maybe we will always be able to find a clear reason to accept or reject any set theoretic statement that has consequences for arithmetic. But I see absolutely no good reason to think that there won’t be anything like (MA1/2). To make it more vivid, there are these really really weird results from Harvey Friedman showing that simple combinatorial principles about numbers imply all kinds of immensely strong things about large cardinals. While these simple principles about numbers look determinate they imply highly sophisticated principles that are independent of ZFC. I see no reason why someone might not find a simple number theoretic principle that implies another continuum hypothesis type statement. And in the absence of face value platonism – *a lot* of objects, and a uniquely preferred membership (perhaps natural) relation between them – it is hard to think how these statements could be determinate.

h1

Truth as an operator and as a predicate

November 5, 2009

Suppose we add to the propositional calculus a new unary operator, T, whose truth table is just the trivial one that leaves the truth value of its operand untouched. By adding

  • (Tp \leftrightarrow p)

to a standard axiomatization of the propositional calculus we completely fix the meaning of T. Moreover this is a consistent classical account of truth that gives us a kind of unrestricted “T-schema” for the truth operator.

On the face of it, then, it seems that if we treat truth as an operator operating on sentences rather than a predicate applying to names of sentences we somehow avoid the semantic paradoxes. But this seems almost like magic: both ways of talking about truth are supposed to be expressing the same property – how could a grammatical difference in their formulation be the true source of the paradox?

My gut feeling is that there isn’t anything particularly deep about the consistency of the operator theory of truth: it just boils down to an accidental grammatical fact about the kinds of languages we usually speak. The grammatical fact is this. One can have syntactically simple expressions of type e but not of type t. Without the type theory jargon this just means we can have names that can be the argument of a predicate but not “names” that can be the argument of an operator. Call these latter kind of expressions “name*s”. If p is a name* then \neg p is grammatically well formed and is evaluated the same as \neg \phi, where \phi is whatever sentence p refers* to. If we pick p so that it refers* to “\neg p” then we are in just the same predicament we were in when we were considering names and treating truth like a predicate: there one could simply pick a constant c and stipulate that it refers to the sentence “~Tr(c)”.

We could make this a little more precise. By restricting our attention to languages without name*s we’re remaining silent about propositions that we could have expressed if we removed the restriction. Indeed, there is a natural translation between operator talk (in the propositional language with truth described at the beginning) and predicate talk. So, by the looks of it, it seems we could make exactly the same move in the predicate case: accept only those sentences that are translations of operator sentences we accept. The natural translation I’m referring to is this:

  • p^* \mapsto p
  • (\phi \wedge \psi)^* \mapsto (\phi^*\wedge\psi^*)
  • (\neg \phi)^* \mapsto \neg \phi^*
  • (T\phi)^* \mapsto Tr(\ulcorner\phi^*\urcorner)

Here’s a neat little fact which is quite easy to prove. Let M be a model of the propositional calculus (a truth value assignment.)

Theorem. \phi is the translation of a formula true in M if and only if \phi comes out true in Kripke’s minimal fixed point construction using the weak Kleene valuation with ground model M.

Note that, because we don’t have quantifiers, the construction tapers out at \omega so we can prove the right-left direction by induction over the finite initial stages of the construction. Left-right is an induction over formula complexity.

If the rule is to simply reject all sentences which aren’t translations of an operator sentence then it appears that the neat classical operator view is really just the well known non-classical view based on the weak Kleene valuation scheme. It is well known that the latter only appears to be classical when we restrict attention to grounded formulae; it seems the appearance is just as shallow for the former view.

Incidentally, note that there’s no natural way to extend this result to languages with quantifiers. This is because there’s no “natural” translation between the propositional calculus with propositional quantifiers and a quantified language with the truth predicate capable of talking about its own syntax.

h1

Rigid Designation

October 23, 2009

Imagine the following set up. There are two tribes, A and B, who up until now have never met. It turns out that tribe A speaks English as we speak it now. However, tribe B speaks English* – a language much like English except it doesn’t contain the names “Aristotle” or “Plato”, and contains two new names, “Fred” and “Ned”.

Suppose now that these two tribes eventually meet and learn each other’s language. In particular tribes A and B come to agree that the following holds in the new expanded language: (1) necessarily, if Socrates was a philosopher, Fred was Aristotle and Ned was Plato, and (2) necessarily, if Socrates was never a philosopher, Fred was Plato and Ned was Aristotle.

Now we introduce to both tribes some philosophical vocabulary: we tell them what a possible world is, and what it means for a name to designate something at a possible world. Both tribes think they understand the new vocabulary. We tell them a rigid designator is a term that designates the same object at every possible world.

Before meeting tribe B, tribe A will presumably agree with Kripke in saying that “Aristotle” and “Plato” are rigid designators, and after learning tribe B’s language will say that “Fred” and “Ned” are non-rigid (accidental) designators.

However tribe B will, presumably, say exactly the opposite. They’ll say that “Aristotle” is a weird and gruesome name that designates Fred in some worlds and Ned in others. Indeed whether “Aristotle” denotes Fred or Ned depends on whether Socrates is a philosopher or not, and, hence, tribe A are speaking a strange and unnatural language.

Who is speaking the most natural language is not the important question. My question is rather: how do we make sense of the notion of ‘rigid designation’ without having to assume English is privileged in some way over English*? And I’m beginning to think we can’t.

The reason, I think, is that the notion of rigid designation (and, incidentally, lots of other things philosophers of modality talk about) cannot be made sense of in the simple modal language of necessity and possibility – the language we start off with before we introduce possible worlds talk. Moreover, the answer to whether or not a name is a rigid designator makes no difference to our original language. For any set of true sentences in the simple modal language involving the name “Aristotle” I can produce two possible worlds models that make those sentences true: one that makes “Aristotle” denote the same individual in every world and another which doesn’t.* If this is the case, how is the question of whether a name is a rigid designator ever substantive? Why do we need this distinction? (Note: Kripke’s arguments against descriptivism do not require the distinction. They can be formulated in pure necessity/possibility talk.)

To put it another way, by extending our language to possible world/Kripke model talk we are able to pose nonsense questions: questions that didn’t exist in our original language but do in the extended language with the new technical vocabulary. An extreme example of such a question: is the denotation function a set of Kuratowski or Hausdorff ordered pairs? These are two different, but formally irrelevant, ways of constructing functions from sets. The question has a definite answer, depending on how we construct the model, but it is clearly an artifact of our model and corresponds to nothing in reality.

Another question which is well formed and has a definite answer in Kripke model talk: does the name ‘a’ denote the same object in w as in w’? There seems to be no way to ask this question in the original modal language. We can talk about ‘Fred’ necessarily denoting Fred, but we can’t make the interworld identity comparison. And as we’ve seen, it doesn’t make any difference to the basic modal language how we answer this question in the extended language.

[* These models will interpret names from a set of functions, S, from worlds to individuals at that world and quantification will also be cashed out in terms of the members of S. We may place the following constraint on S to get something equivalent to a Kripke model: for f, g \in S, if f(w) = g(w) for some w then f=g.

One might want to remove this constraint to model the language A and B speak once they’ve learned each others language. They will say things like: Fred is Aristotle, but they might have been different. (And if they accept existential generalization they’ll also say there are things which are identical but might not have been!)]

h1

Links

October 19, 2009

A real post should be on the way soon. A few links in the meanwhile

  • If you haven’t seen it already there is a petition about allocating research funds on the basis of “impact” rather than academic merit. Please sign.
  • JC Beall points me to his new webpage. Lots of interesting looking papers.
  • I know it’s done the rounds already but this xkcd comic really made me chuckle!

h1

Precisifications

August 19, 2009

I’ve been wondering just how much content there is to the claim that vagueness is truth on some but not all acceptable ways of making the language precise. It is well known that both epistemicists and supervaluationists accept this, so the claim is clearly not substantive enough to distinguish between *these* views. But does it even commit us to classical logic? Does it rule out *any* theory of vagueness?

If one allows quantification over non-classical interpretations it seems clear that this doesn’t impose much of a constraint. For example, if we include among our admissible interpretations Heyting algebras, or Lukasiewicz valuations, or what have you, it seems clear that we needn’t (determinately) have a classical logic. Similar points apply if one allowed non-classically described interpretations; interpretations that perhaps use bivalent classical semantics, but are constructed from sets for which membership may disobey excluded middle (e.g., the set of red things.)

In both cases we needn’t get classical logic. But this observation seems trite; and besides they’re not really ‘ways of making the language precise’. A precise non-bivalent interpretation is presumably one in which every atomic sentence letter receives value 1 or 0, thus making it coincide with a classical bivalent interpretation – and presumably no vaguely described precisification is a way of making the language completely precise either.

So a way of sharpening the claim I’m interested in goes as follows: vagueness is truth on some but not all admissible ways of making the language precise, where ‘a way of making the language precise’ (for a first-order language) is a Tarskian model constructed from crisp sets. A set X is crisp iff \forall x(\Delta(x\in X) \vee \Delta(x \not\in X)). This presumably entails \forall x(x\in X \vee x\not\in X), which is what crispness amounts to for a non-classical logician. An admissible precisification is defined as follows

  • v is correct iff the schema v \models \ulcorner \phi \urcorner \leftrightarrow \phi holds.
  • v is admissible iff it’s not determinately incorrect.

Intuitively, being correct means getting everything right – v is correct when truth-according-to-v obeys the T-schema. Being admissible means not getting anything determinately wrong – i.e., not being determinately incorrect. Clearly this is a constraint on a theory of vagueness, not an account. If it were an account of vagueness it would be patently circular, as both ‘crisp’ and ‘admissible’ are defined in terms of determinacy.

Now that I’ve sharpened the claim, my question is: just how much of a constraint is this? As we noted, this seems to be something that every classicist can (and probably should) hold, whether they read \nabla as a kind of ignorance, semantic indecision, ontic indeterminacy, truth value gap, context sensitivity or as playing a particular normative role with respect to your credences, to name a few. Traditional accounts of supervaluationism don’t really say much about how we should read \nabla, so the claim that vagueness is truth on some but not all admissible precisifications doesn’t say very much at all.

But what is even worse is that even non-classical logicians have to endorse this claim. I’ll show this is so for the Lukasiewicz semantics but I’m pretty sure it will generalise to any sensible logic you’d care to devise. [Actually, for a technical reason, you have to show it’s true for Lukasiewicz logic with rational constants. This is no big loss, since it’s quite plausible that for every rational in [0,1] some sentence of English has that truth value: e.g. the sentences “x is red” for x ranging over shades in the spectrum between orange and red would do.]

Supposing that \Delta \phi \leftrightarrow \forall v(admissible(v) \rightarrow v \models \ulcorner \phi \urcorner) has semantic value 1, you can show, with a bit of calculation, that this requires that \delta \|\phi\| = inf_{v\not\models \phi}(\delta(1-inf_{v \models \psi}\|\psi\|)), where \delta is the function interpreting \Delta. Assuming that \delta is continuous this simplifies to: \delta\|\phi\| = \delta(1-sup_{v\not\models \phi}inf_{v\models \psi}\|\psi\|). Now since no matter what v is, so long as v \not\models \phi, we’re going to get that inf_{v \models \psi}\|\psi\| \leq \|\neg\phi\|, since v is classical (i.e. v \models \neg\phi.) But since we added all those rational constants the supremum of all these infs is going to be \|\neg\phi\| itself. So \|\phi\| = 1-sup_{v\not\models \phi}inf_{v\models \psi}\|\psi\| no matter what.
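For what it’s worth, here is roughly where that displayed condition comes from (using the facts that each v is crisp, so v \models \ulcorner\phi\urcorner always gets value 1 or 0, and that in the Lukasiewicz semantics \|A \rightarrow B\| = min(1, 1 - \|A\| + \|B\|)):

  • \|correct(v)\| = inf_{v \models \psi}\|\psi\|: for a \psi with v \models \psi the relevant instance of the schema gets value \|\psi\|, and for a \psi with v \not\models \psi it gets value \|\neg\psi\|, which is already counted among the former since v \models \neg\psi.
  • \|admissible(v)\| = 1-\delta(1-\|correct(v)\|), so \|admissible(v) \rightarrow v \models \ulcorner\phi\urcorner\| is 1 when v \models \phi and \delta(1-\|correct(v)\|) when v \not\models \phi.
  • Taking the infimum over all v and equating the result with \|\Delta\phi\| = \delta\|\phi\| gives the condition \delta\|\phi\| = inf_{v\not\models \phi}\delta(1-inf_{v \models \psi}\|\psi\|) above.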

So if one assumes that \delta is continuous it follows that determinacy is truth in every admissible precisification (and that vagueness is truth in some but not all admissible precisifications.) The claim that \delta should be continuous amounts to the claim that a conjunction of determinate truths is determinate, which, as I’ve argued before, cannot be denied unless one denies either that infinitary conjunction is precise or that vagueness is hereditary.

h1

Vagueness and uncertainty

June 17, 2009

My BPhil thesis is finally finished so I thought I’d post it here for anyone who’s interested.

h1

Unrestricted Composition: the argument from the semantic theory of vagueness?

May 14, 2009

I’ve seen the following claim made quite a lot in and out of print, so I’m wondering if I’m missing something. The claim is that Lewis’s argument for unrestricted composition relies on a semantic conception of vagueness. In particular, people seem to think epistemicists can avoid the argument.

Maybe I’m reading Lewis’s argument incorrectly, but I can’t see how this is possible. The argument seems to have three premisses

  1. If a complex expression is vague, then one of its constituents is vague.
  2. Neither the logical constants, nor the parthood relation are vague.
  3. Any answer to the special composition question that accords with intuitions must admit vague instances of composition.

By 3. there could be a vague case of fusion: suppose it’s vague whether the xx fuse to make y. Thus it must be vague whether or not \forall x(x \circ y \leftrightarrow \exists z(z \prec xx \wedge z \circ x)). By 1. this means either parthood or one of the logical constants is vague, which contradicts 2.

I can’t see any part of the argument that requires me to read `vague’ as `semantically indeterminate’. These seem to be all plausible principles about vagueness, and if, say, epistemicism doesn’t account for one of these principles, so much the worse for epistemicism.

That said, I think epistemicists should be committed to these principles. Since it would be a pretty far off world where we used English non-compositionally, the metalinguistic safety analysis of vagueness ensures that 1. holds. Epistemicists, like anyone else, think that the logical constants are precise. Parthood always was the weak link in the argument, but one might think you could vary usage quite a bit without changing the meaning of parthood since it refers to a natural relation, and is a reference magnet. Obviously the conclusion that the conditions for composition to occur are sharp isn’t puzzling for an epistemicist. But epistemicists think that vagueness is a much stronger property than sharpness (the latter being commonplace), and the conclusion that circumstances under which fusion occurs do not admit vague instances should be just as bad for an epistemicist as for anyone else who takes a medium position on the special composition question.

The most I can get from arguments that epistemicism offers a way out is roughly: “Epistemicists are used to biting bullets. Lewis’s argument requires you to bite bullets. Therefore we should be epistemicists.” Is this unfair?

h1

Truth Functionality

May 4, 2009

I’ve been thinking a lot about giving intended models to non-classical logics recently, and this has got me very muddled about truth functionality.

Truth functionality seems like such a simple notion. An n-ary connective, \oplus, is truth functional just in case the truth value of \oplus(p_1, \ldots, p_n) depends only on the truth values of p_1, \ldots, p_n.

But cashing out what “depends” means here is harder than it sounds. Consider, for example, the following (familiar) connectives.

  • |\Box p| = T iff, necessarily, |p| = T.
  • |p \vee q| = T iff |p| = T or |q| = T.

Why does the truth value of p \vee q depend only on the truth values of p and q, while the truth value of \Box p does not depend only on the truth value of p? They’ve both been given in terms of the truth value of p (and q). It would be correct, but circular, to say that the truth value of \Box p doesn’t depend on the truth value of p because its truth value isn’t definable from the truth value of p using only truth functional vocabulary in the metalanguage. But clearly this isn’t helpful – for we want to know what counts as truth functional vocabulary, whether in the metalanguage or anywhere else. For example, what distinguishes the first from the second example? To say that \vee is truth functional and \Box isn’t because “or” is truth functional and “necessarily” isn’t, is totally unhelpful.

Usually the circularity is better hidden than this. For example, you can talk about “assignments” of truth values to sentence letters, and say that if two assignments agree on the truth values of p_1, \ldots, p_n then they’ll agree on \oplus(p_1, \ldots, p_n). But what are “assignments” and what is “agreement”? One could simply stipulate that assignments are functions in extension (sets of ordered pairs) and that f and g agree on some sentences if f(p)=g(p) for each such sentence p.

But there must be more restrictions than this: presumably the assignment that assigns p and q the value F and p \vee q the value T is not an acceptable assignment. Without some restriction there would be assignments which agree on the truth values of p and q but disagree on p \vee q, making disjunction non truth functional. Thus we must restrict ourselves to acceptable assignments; assignments which preserve the truth functionality of the truth functional connectives.

Secondly, there need to be enough assignments. The talk of assignments is only OK if there is an assignment corresponding to the intended assignment of truth values to English sentences. I believe that it’s vague whether p just in case it’s vague whether “p” is true (this follows from the assertion that the T-schema is determinate.) Thus if there’s vagueness in our language, we had better admit assignments such that it can be vague whether f(p)=T. Thus the restriction to precise assignments is not in general OK. Similarly, if you think the T-schema is necessary, the restriction of assignments to functions in extension is not innocent either – e.g., if p is true but not necessary, we need an assignment such that f(p)=T and that possibly f(p)=F.

Let me take an example where I think it really matters. A non-classical logician, for concreteness take a proponent of Lukasiewicz logic, will typically think there are more truth functional connectives (of a given arity) than the classical logician. For example, our Lukasiewicz logician thinks that the conditional is not definable from negation and disjunction. (NOTE: I do not mean truth functional on the continuum of truth values [0, 1] – I mean on {T, F} in a metalanguage where it can be vague that f(p)=T.) “How can this be?” you ask; surely we can just count the truth tables: there are 2^{2^n} truth functional n-ary connectives.

To see why it’s not so simple consider a simple example. We want to calculate the truth table of p \rightarrow q.

  • p \rightarrow q: reads T just in case the second column reads T if the first column does.
  • p \vee q: reads T just in case the first or the second column reads T.
  • \neg p: reads T just in case the first column doesn’t read T.

The classical logician claims that the truth table for p \rightarrow q should be the same as the truth table for \neg p \vee q. This is because she accepts the equivalence, in the metalanguage, between “the second column is T if the first is” and “the second column is T or the first isn’t”. However the non-classical logician denies this – the truth values will differ in cases where it is vague what truth value the first and second columns read. For example, if it is vague whether both columns read T, but the second reads T if the first does (suppose the first column reads T iff 88 is small, and the second column reads T iff 87 is small), then the column for \rightarrow will determinately read T. But the statement that \neg p \vee q reads T will be equivalent to an instance of excluded middle in the metalanguage which fails. So it will be vague in that case whether it reads T.

The case that \rightarrow is truth functional for this non-classical logician seems to me pretty compelling. But why, then, can we not make exactly the same case for the truth functionality of \Box p? I see almost no disanalogy in the reasoning. Suppose I deny that negation and the truth operator are the only unary truth functional connectives, and claim that \Box is a further one. However, the only cases where necessity comes apart from negation and the truth operator are cases where it is contingent what the first column of the truth table reads.

I expect there is some way of unentangling all of this, but I think, at least, that the standard explanations of truth functionality fail to do this.

h1

Field on Restall’s Paradox

April 23, 2009

I’ve been casually reading Field’s “Saving Truth from Paradox” for some time now. I think it’s a fantastic book, and I highly recommend it to anyone interested in the philosophy of logic, truth or vagueness.

I’ve just read Ch. 21 where he discusses a paradox presented in Restall 2006. The discussion was very enlightening for me, since I had often thought this paradox to be fatal to non-classical solutions to the liar. But although Field’s discussion convinced me Restall’s argument wasn’t as watertight as I thought it was, I was still left a bit uneasy. (I think there is something wrong with Restall’s argument that Field doesn’t consider, but I’ll come to that.)

Before I continue, I should state the paradox. The problem is that if one has a strong negation in the language, \neg, one can generate a paradoxical liar sentence which says of itself that it’s strongly not true. Strong negation has the following properties, which ensure that such a sentence is inconsistent:

  1. p, \neg p \models \bot
  2. If \Gamma , p \models \bot then \Gamma \models \neg p

Roughly, the strong negation of p is the weakest proposition inconsistent with p – the first condition guarantees that it’s inconsistent with p, the second that it’s the weakest such proposition. It’s not too hard to see why having such a connective will cause havoc.

Restall’s insight (which was originally made to motivate a “strong” conditional, but it amounts to the same thing) was that one can get such a proposition by brute force: the weakest proposition inconsistent with p is equivalent to the disjunction of all propositions inconsistent with p. Thus, introducing infinitary disjunction into the language, we may just “define” \neg p to be \bigvee \{q \mid p \wedge q \models \bot \}. Each disjunct is inconsistent with p so the whole disjunction must be inconsistent with p, giving us the first condition. If q is inconsistent with p, then q is one of the disjuncts in \neg p so q entails \neg p, giving us (more or less) the second condition.

An initial problem Field points out is that this definition is horribly impredicative – \neg p is inconsistent with p, so \neg p must be one of its own disjuncts. Field complains that such non-well founded sentences give rise to paradoxes even without the truth predicate, for example, the sentence that is its own negation. (I personally don’t find these kinds of languages too bad, but maybe that’s best left for another post.) This problem is overcome since you can run a variant of the argument by only disjoining atomic formulae, so long as you have a truth predicate.

The second point, Field’s supposed rebuttal of the argument, is that to specify a disjunction by a condition, F say, on the disjuncts, you must first show F isn’t vague or indeterminate, or else you’ll end up with sentences such that it is vague/indeterminate what their components are. Allowing such sentences means they can enter into vague/indeterminate relations of validity – for example, it is vague whether a sentence such that it is vague whether it has “snow is white” as a conjunct entails “snow is white”. But the property F, in this case, is the property of entailing a contradiction if conjoined with p. Thus to assess whether F is vague/indeterminate or not, we must ask if entailment can ever be vague. But to do this we must determine whether there are sentences in the language such that it is indeterminate what their components are. Since the language contains the disjunction of the F’s, this requires us to determine whether F is vague – so we have gone in a circle.

Clearly something weird is going on. That said, I don’t quite see how this observation refutes the argument. It’s perfectly consistent with what’s been said above that entailment for the expanded language with infinitary disjunction is precise, that there is a precise disjunction of the things inconsistent with p, and that Restall’s argument goes through unproblematically. It’s also consistent that there *are* vague cases of entailment – but that the two conditions for strong negation above determinately obtain (there are some subtle issues that must be decided here, e.g., is “p and q” determinately distinct from the sentence that has p as its first conjunct, but only has q as its second conjunct indeterminately.)

Even so, I think there are a couple of problems with Restall’s argument. The first is a minor problem. To define the relevant disjunction, we must talk about the property of “entailing a contradiction if conjoined with p”. But to do this we are treating “entails” like it was a connective in the language. However, one of Field’s crucial insights is that “A entails B” is not an assertion of some kind of implication holding between A and B, but rather the conditional assertion of B on A. “Entails” cannot be thought of like a connective. For one thing, connectives are embeddable, whereas it doesn’t make much sense to talk of embedded conditional assertions. Secondly, a point which I don’t think Field makes explicit is that it is crucial that “entails” doesn’t work like an embeddable connective, otherwise one could run a form of Curry’s paradox using entailment instead of the conditional.

This is not supposed to be a knockdown problem. After all, so what if you can’t *define* strong negation; there is, nonetheless, this disjunction whose disjuncts are just those propositions inconsistent with p. We may not be able to define it or refer to it, but God knows which one it is all the same.

The real problem, I think, is the following. How are we construing \neg p? Is it a new connective in the language, stipulated to mean the same as “the disjunction of those things inconsistent with p”? If it is, how do we know it is a logical connective? (If \neg weren’t logical neither (1) nor (2) would hold, since there would be no logical principles governing it.) Field objects to a similar argument from Wright, because “inconsistent with p” is not logical. Inconsistency is not logical: for a start it can only be had by sentences, so it is not topic neutral.

The way of construing \neg p that makes it different from Wright’s argument, and allegedly problematic, is to construe \neg p as schematic for a large disjunction. The symbol \neg does not actually belong to the language at all – writing \neg p is just a metalinguistic shorthand for a very long disjunction, a disjunction that will change, in each case, depending on p. Treating it as such guarantees that (1) and (2) hold, since, when they are expanded out, they are just truths about the logic of disjunction and don’t contain \neg at all.

But treating \neg p as schematic for a disjunction means it doesn’t behave like an ordinary connective. For one thing, you can’t quantify into its scope. What sentence would \exists x\neg Fx be schematic for? What we want it to mean is that there is some object, a, such that the disjunction of things inconsistent with Fa holds. But there’s no single sentence involved here.

Another crucial shortcoming is that it’s not clear that we can “put a dot” under \neg. That is, define a function which takes the Gödel number of p to the Gödel number of the disjunction of things inconsistent with p. Firstly there might not be enough Gödel numbers to do this (since we have an uncountable language now!) But secondly, how do we know we can code “inconsistent with p” in arithmetic? Field’s logic isn’t recursively axiomatizable (Welch, forthcoming) so it seems like we’re not going to be able to code “inconsistent with p” or the strong negation of p – and thus it seems we’re not going to be able to run the Gödel diagonalisation argument. (I was always asleep in Gödel class so maybe someone can check I’m not missing something here.)

So you can’t get a strongly negated liar sentence through Gödel diagonalisation, but what about indexical self reference? “This sentence is strongly not true” is schematic for a sentence not including “strongly not”, but with a large disjunction instead. However, which disjunction is it? We’re in the same pickle we were in when we tried to quantify into the scope of \neg. In both cases, the disjunction needed to vary depending on the value of the variable “x” or in this case, the indexical “this”.

I can’t say I’ve gotten to the bottom of this, but it’s no longer clear to me how problematic Restall’s argument is for the non classical logician.

h1

Size and Modality

March 25, 2009

There’s this thing that’s been puzzling me for a while now. It’s kind of related to the literature on indefinite extensibility, but the thing that puzzles me has nothing to do with sets, quantification or Russell’s paradox (or at least, not obviously.) I think it is basically a puzzle about infinities, or sizes.

First I should get clear on what I mean by size. Size, as I am thinking about it, is closely related to what set theorists call cardinality. But there are some important differences.

(i) Cardinality is heavily bound up with set theory, whereas I take it that size talk does not commit us to sets. For example, I believe I can truly say there are more regions than open regions of spacetime, even if I’m a staunch nominalist. Think of size talk as analogous to plural quantification: I am not introducing new objects into the domain (sizes/pluralities), I am just quantifying over the existing individuals in a new way.

(ii) Only sets have cardinalities. I believe you can talk about the sizes of proper class sized pluralities.

(iii) Points (i) and (ii) are compatible with a Fregean theory of size. But Fregean sizes, as well as cardinalities, are thought to be had by pluralities (concepts, sets) of individuals in the domain. In particular: every size is the size of some plurality/set. I reject this. I think there are sizes which no plurality has – I think there could have been more things than there in fact are, and thus, that there are sizes which no plurality in fact has. So sizes are inherently bound up with modality on this view – sizes are had by possible pluralities.

(iv) Frege and the set theorists both believe sizes are individuals. I’m not yet decided on this one, but Frege’s version of Hume’s principle forces the domain to be infinite, which contradicts (i) – that size talk isn’t ontologically committing. Interestingly, the plural logic version of HP is satisfiable on domains of any size – thus sizes can always be construed as objects, if needs be. But I’m inclined to think that size talk is fundamentally grounded in certain kinds of quantified statements (e.g., “there are countably many F’s”.)

I’m going to mostly ignore (iv) from hereon and talk about sizes as if they were objects, because, as noted, you can consistently do this if needs be (given global choice.) That said, I can’t adopt HP because of point (iii). It’s built into the notation of HP that every size is the size of some plurality. Furthermore, Hume’s principle entails there is a largest size. (Cardinality theory says there is no largest cardinality, but this is because of an expressive failure on its part – proper classes don’t have cardinalities.) However, if we accept the following principle:

  • Necessarily, there could have been more things.

it follows from (iii) that there is no largest size.

I think this is right. It just seems weird and arbitrary to think that there could be this largest size, \kappa. Why \kappa and not 2^\kappa? Clearly, it seems, there are worlds that have this many things (think of, e.g., Forrest-Armstrong type constructions.) If not, what metaphysical fact could possibly ground this cutoff point?

What I don’t object to is there being a largest size of an actual plurality. I’m fine with arbitrariness, so long as it’s contingent. But to think that there is some size that limits the size of all possible worlds seems really strange. Just to state the existence of a limit seems to commit us to larger sizes – it’s like saying there are sizes which no possible world matches.

Here is a second principle about sizes I really like. Any collection of sizes has an upper bound. This is something that Fregean, and in a certain sense, cardinality theories of size share with me, so I’m not going to spend as long defending it. But intuitively, if you can have possible worlds with domains of sizes \kappa for each \kappa \in S, then there should be a world containing the union of all these domains – a world with at least Sup(S) things.

So this is what I mean by size. Here is the puzzle: this conception of size seems to be inconsistent. To see this we need to formalise a bit further. Take as our primitive a binary relation over sizes, < (informally “smaller than”.) For simplicity, assume we are only quantifying over sizes. Here are some principles. You can ignore 3. and 4. if you want, 1. and 2. are obvious, and 5. and 6. we have just argued for.

  1. \forall x \neg x < x
  2. \forall xyz(x<y<z \rightarrow x<z)
  3. \forall xy(x<y \vee x=y \vee x>y)
  4. \forall xx\exists x(x \prec xx \wedge \forall y(y \prec xx \rightarrow x \leq y))
  5. \forall x \exists y x<y
  6. \forall xx\exists x\forall y(y \prec xx \rightarrow y \leq x)

The first three principles say that < is a total order, which is pretty much self evident. The fourth says it’s a well order. (The inconsistency to follow doesn’t require (3) or (4).) The fifth encodes the principle that there is no largest size, and the sixth says that every collection of sizes has an upper bound.

These principles are jointly inconsistent: let xx be the plurality of self-identical things. By (6) xx has an upper bound, k. By (5) there is a size larger than k, k<k+. Since k+ is in xx, and k is an upper bound for xx, k+ \leq k. Thus k<k by (2) and logic, which is impossible by (1).

There are roughly three ways out of this usually considered. Fregean theories reject (5), cardinality theory (with unrestricted plural quantifiers) denies (6), and indefinite extensibilists do something funky with the quantifiers (I’ve never really worked out how that helps, but it’s there for completeness.) Also note, the version of (6) restricted to “small” (roughly, “set-sized”) pluralities is consistent.

My own diagnosis is that the above formulation of size theory simply fails to take account of the modal nature of sizes. If we are pretending that sizes are objects at all (which, I think, is also not an innocent assumption), we should remember that just because there could be such a size, doesn’t mean in fact there is such a size. This is the same kind of fallacious reasoning encoded in the Barcan formula and its converse  (this is partly why it is very unhelpful to think of sizes as objects; we are naturally inclined to think of them as abstract, necessarily existing objects.)

Anyway – a natural way to formulate (1)-(6) in modal terms would be in a second order modal logic, perhaps with a primitive second level size comparison relation. For example (1) would be ‘necessarily, if the xx are everything, then there aren’t more xx than xx’, (2) would be ‘necessarily for all xx, necessarily for all yy, necessarily for all zz, if there are more zz’s than yy’s and more yy’s than xx’s, there are more zz’s than xx’s’ and (5) would be ‘necessarily, there could have been more things’. The only problem is, how would we state (6)?

I’ve been toying around with propositional quantification. Let me change the primitives slightly: instead of using \Box p, \Diamond p to talk about possibility and necessity, I’ll interpret them as saying that p is true in some/every accessible world with a larger domain than the current world. Also, since I don’t care about anything about a world except the size of its domain, let us think of the worlds not as representing maximally specific ways for things to be, but as sizes themselves. Thus the intended models of the theory will be Kripke frames of the following form: \langle W, R \rangle where (i) the transitive closure of R is a well order on W, and (ii) for each w in W, R is a well order on R(w). (We’re going to have to give up S4, so we mustn’t assume R is transitive on W, although it’s locally transitive on R(w) for each w in W.) Propositions are sets of worlds, so the range of the propositional quantifiers differs from world to world, since R is non-trivial.
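
Here is a minimal sketch of what (i) and (ii) amount to on a small finite frame (Python; the toy frame at the bottom is made up purely for illustration – the intended models are not finite like this). On a finite domain a strict total order is automatically a well order, which keeps the checks simple. Note that the example frame is not transitive on W, though it is locally transitive on each R(w), as required.

    from itertools import product

    def transitive_closure(R, W):
        # naive closure of the relation R over the finite world set W
        closure = set(R)
        changed = True
        while changed:
            changed = False
            for x, y, z in product(W, repeat=3):
                if (x, y) in closure and (y, z) in closure and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
        return closure

    def is_strict_total_order(R, D):
        # irreflexive, transitive and total; on a finite domain this is also a well order
        irreflexive = all((x, x) not in R for x in D)
        transitive = all((x, z) in R for x, y, z in product(D, repeat=3)
                         if (x, y) in R and (y, z) in R)
        total = all((x, y) in R or (y, x) in R
                    for x, y in product(D, repeat=2) if x != y)
        return irreflexive and transitive and total

    def is_local_well_order(R, W):
        # (i) the transitive closure of R well orders W
        if not is_strict_total_order(transitive_closure(R, W), W):
            return False
        # (ii) for each w, R itself well orders R(w)
        for w in W:
            Rw = {v for v in W if (w, v) in R}
            R_restricted = {(x, y) for (x, y) in R if x in Rw and y in Rw}
            if not is_strict_total_order(R_restricted, Rw):
                return False
        return True

    # toy frame: four "sizes", each of which can only see the next two
    W = {0, 1, 2, 3}
    R = {(w, v) for w in W for v in W if w < v <= w + 2}
    print(is_local_well_order(R, W))  # True, even though R is not transitive on W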

Call R a local well order on W iff it satisfies (i) and (ii). I’m going to assert without defence (for the time being) that the formulae valid over the class of local well orders will be the modal equivalent of (1)-(4) holding. (I expect it would be fairly easy to come up with an axiomatisation of this class directly, and that this axiomatisation would correspond to (1)-(4). For example, the complicated one, (4), would correspond to \forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r))).)

The important thing is that it is possible to state (5) and (6) directly, and, it seems, consistently (although we’ll have to give up on unrestricted S4.) [Note: I may well have made some mistakes here, so apologies in advance.]

  1. \Box p \rightarrow p
  2. \forall pqr(\Diamond(p \wedge \Diamond(q \wedge \Diamond r)) \rightarrow \Diamond(p \wedge \Diamond r))
  3. \forall p(\Diamond p \rightarrow \exists q\forall r(\Box(r \rightarrow p) \rightarrow \Box(q \rightarrow \Diamond r)))
  4. \Box\exists p(p \wedge \Diamond \neg p)
  5. \forall p \Diamond\exists q(q \wedge \neg p)

(I decided halfway through writing this post it was simpler to axiomatise a reflexive well order, so the modal (1)-(4) above don’t correspond as naturally to the original (1)-(4) – I’ll try and neaten this up at some point).

What is slightly striking is the failure of S4. Informally, if I were to have S4 I would be able to quantify over the universal proposition of all worlds, take its supremum by (6), and find a world not in the proposition by (5). This would just be a version of the inconsistency given for the extensional size theory above.

Instead, we have a picture on which worlds can only see a limited number of world sizes – to see the larger sizes you have to move to larger worlds. At no point can you “quantify” over all collections of worlds – so, at least in this sense, the view is quite close to the indefinite extensibility literature. But of course, the non-modal talk is misleading: worlds are really maximally specific propositions, and the only propositions that exist are those in the range of our propositional quantifiers at the actual world – the worlds inaccessible to the actual world in the model should just be thought of as a useful picture for characterising which sentences in the box and diamond language are true at the actual world.

h1

Greatest Philosopher of the 20th-Century?

March 2, 2009

You can find out here.

But seriously: Lewis came second to Wittgenstein? (I could understand how LW might rank top in a poll involving the general public, but the first ranking was supposedly based mostly on the Leiter readership!)

Update: some interesting thoughts on Russell’s ranking here and here.

h1

Fitch’s paradox and self locating belief

February 21, 2009

It’s been a while since I last posted here – which is bad seeing as I’ve had much less going on recently. I hope to return to regular blogging soon!

For now just a little note on something I’ve been thinking about to do with a version of the knowability principle for rational belief. Back in this post I considered a version of Fitch’s paradox for rational belief, which shows that the following believability principle cannot hold in full generality (C stands for rational certainty):

  • (p \rightarrow \Diamond Cp)

Here’s another route to that conclusion if you accept something like Adam Elga’s indifference principle. Suppose p is the proposition that you are in a Dr. Evil-like scenario: that (a) you are Dr. Evil and (b) you have just received a message from entirely reliable people on Earth saying they have created an exact duplicate of Dr. Evil, whose situation is epistemically indistinguishable from Dr. Evil’s (including having him receive a duplicate message like this one), and who will be tortured unless Dr. Evil deactivates his super laser. Notice that p includes self locating information.

If you accept Elga’s version of the indifference principle, once you’ve become certain of (b) you’re rationally required to lower your credence that you’re Dr. Evil to 1/2 and give credence 1/2 to the hypothesis that you’re the clone. So suppose for reductio that you could be certain that p. Since p is the conjunction of (a) and (b), you must be certain of both (a) and (b). But this is impossible, since indifference requires anyone who is certain of (b) to give credence 1/2 (or less) to (a).
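
Writing Cr for the agent’s credence function (a notational liberty – in the post C is the certainty operator), the reductio is just this:

  • Certainty that p requires Cr(a \wedge b) = 1, and hence Cr(a) = 1 and Cr(b) = 1.
  • Elga’s indifference principle says that anyone with Cr(b) = 1 must have Cr(a) \leq 1/2.
  • So Cr(a) = 1 and Cr(a) \leq 1/2, which is impossible.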

It is impossible to be certain of p (and p is probably unknowable too.) And since p is clearly possibly true, the principle given above is at best contingently true.

h1

Links

January 5, 2009
  • JC Beall has started what he describes as a “logic-leaning philosophy blog” which looks like it should be of interest to readers here when it gets going.
  • Not as recently, Jeff Russell started a new blog which is looking very interesting so far.
  • Lastly, Wolfgang Schwarz has an interesting post on decision theory and probability in EQM over at Wo’s Weblog.
h1

Cardinality and the intuitive notion of size

January 1, 2009

According to mathematicians two sets have the same size iff they can be put in one-one correspondence with one another. Call this Cantor’s principle:

  • CP: X and Y have the same size iff there is a bijection \sigma:X\rightarrow Y

Replace ‘size’ by ‘cardinality’ in the above and it looks like we have a definition: an analytic truth. As it stands, however, CP seems to be a conceptual analysis – or at the very least an extensionally equivalent characterisation. In what follows I shall call the pretheoretic notion ‘size’ and the technical notion ‘cardinality’. CP thus states that two sets have the same size iff they have the same cardinality.

Taken as a conceptual analysis of the sizes of sets, as we ordinarily understand them, CP is often objected to. For example, according to this definition the natural numbers are the same size as the even numbers, the same size as the square numbers, and the same size as many sets even sparser than these. This is an objection to the right to left direction of CP.
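
For concreteness, here is a toy illustration of the bijections that drive those verdicts (Python, finite initial segments only; the maps n \mapsto 2n and n \mapsto n^2 are of course bijections between the full infinite sets).

    # pair an initial segment of the naturals with the corresponding evens and squares;
    # every natural gets exactly one partner in each column and nothing is left over
    naturals = list(range(1, 11))
    evens = [2 * n for n in naturals]
    squares = [n ** 2 for n in naturals]
    for n, e, s in zip(naturals, evens, squares):
        print(n, e, s)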

I’m not inclined to give these intuitions too much weight. In fact, I think the intuitive principles behind these judgements are inconsistent. Here are two principles that seem to be at work: (i) if X is a proper subset of Y then X is smaller than Y, (ii) if by uniformly shifting X you get Y, then X and Y have the same size. For example, (i) is appealed to when it’s argued that the set of evens is smaller than the set of naturals, and (ii) is appealed to when people argue that the evens and the odds have the same size. Furthermore, both principles are solid when we are dealing with finite sets. However, (i) and (ii) are jointly inconsistent. If the evens and the odds have the same size, so do the odds and the evens\{2}. This is just an application of (ii); intuitively, the evens\{2} stand in exactly the same relation to the odds as the odds do to the evens. By transitivity, the evens and the evens\{2} are the same size – but this contradicts (i), since one is a proper subset of the other.
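
Making the shifts explicit (the particular shift amounts are just one way of filling in the details), with the evens taken to be 2, 4, 6, … and the odds 1, 3, 5, …:

  • Shifting the odds up by 1 gives the evens, so by (ii) the odds and the evens have the same size.
  • Shifting the odds up by 3 gives 4, 6, 8, …, i.e. the evens\{2}, so by (ii) the odds and the evens\{2} have the same size.
  • By transitivity the evens and the evens\{2} have the same size; but the evens\{2} are a proper subset of the evens, contradicting (i).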

In fact Gödel gave a very convincing argument for the right to left direction: (a) changing the properties of the elements of a set does not change its size, (b) two sets which are completely indistinguishable have the same size, and (c) given \sigma:X \rightarrow Y, each x \in X can morph its properties so that x and \sigma(x) are indistinguishable. Thus, if \sigma is a bijection, X can be transformed in such a way that it is indiscernible from Y, and so must have the same size. (Kenny has a good discussion of this at Antimeta.)

The direction of CP I think there is a genuine challenge to is the left to right. And without it, we cannot prove there is more than one infinite size! (That is, if we said every infinite set had the same size, that would be consistent with the right to left direction of CP alone.)

What I want to do here is justify the left to right direction of CP. The basic idea has to do with logical indiscernibility. If two sets have the same size, I claim, they should be logically indiscernible in the following sense: any logical property had by one is had by the other. Characterising the logical properties as the permutation invariant ones, we can see that if two sets have the same cardinality, then they are logically indiscernible. Since we accept the inference from having the same cardinality to having the same size, this partially confirms our claim.

But what about the full claim? If two sets have the same size, how can they be distinguished logically? There would have to be some logically relevant feature of one set distinguishing it from the other, yet having nothing to do with size. But what could that possibly be? Surely size tells us everything we can know about a set without looking at the particular characteristics of its elements (i.e. its non-logical properties.) If there is any natural notion of size at all, it must surely involve logical indiscernibility.

The interesting thing is that if we have the principle that sameness in size entails logical indiscernibility we get CP in full. The logical properties over the first layer of sets of the urelemente are just those sets invariant under all permutations of the urelemente. Logical properties of these sets are just unions of collections of sets of the same size. Thus logically indiscernible sets are just sets with the same cardinality!
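
Here is a small sketch of that key claim (Python, over a made-up four-element domain of urelemente, purely for illustration): the orbit of any subset under permutations of the domain is exactly the collection of subsets of the same size, so the permutation invariant collections of sets are just the unions of cardinality classes.

    from itertools import permutations, combinations

    base = [0, 1, 2, 3]  # a toy domain of urelemente

    def orbit(subset):
        # all images of the subset under permutations of the base: its "logical type"
        images = set()
        for perm in permutations(base):
            p = dict(zip(base, perm))  # a permutation of the urelemente
            images.add(frozenset(p[x] for x in subset))
        return images

    # the orbit of every subset coincides with the class of subsets of the same cardinality
    for k in range(len(base) + 1):
        size_class = {frozenset(c) for c in combinations(base, k)}
        assert all(orbit(set(c)) == size_class for c in combinations(base, k))
    print("orbits under permutations = cardinality classes")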

Ignore sets for a moment. The usual setting for permutation invariance tests is on the quantifiers. A variant of the above argument can be given. This time we assume that size quantifiers are maximally specific logical quantifiers. There are two ways of spelling this out, both of which will do:

  • For every logical quantifier, Q, Sx\phi \models Qx\phi or Sx\phi \models \neg Qx\phi
  • For every logical quantifier, Q, if Qx\phi \models Sx\phi then Qx\phi \equiv Sx\phi

The justification is exactly the same as before: the size of the \phi‘s tells us everything we can possibly know about the \phi‘s without looking at the particular characteristics of the individual \phi‘s – without looking at their non-logical properties. Since the cardinality quantifiers have this property too, we can show that every size quantifier is logically equivalent to some cardinality quantifier and vice versa.

I take this to be a strong reason to think that cardinality is the only natural notion of size on sets. That said, there’s still the possibility that the ordinary notion of size is simply underdetermined when it comes to infinite sets. Perhaps our linguistic practices do not determine a unique extension for expressions like ‘X is the same size as Y’ for certain X and Y. One thing to note is that the indeterminacy view seems to be motivated by our wavering intuitions about sizes. But as we saw earlier, a lot of these intuitions turn out to be inconsistent, so there won’t even exist precisifications of ‘size’ corresponding to these intuitions. On the other hand, if we are to think of the size of a set as the most specific thing we can say about that set, without appealing to the particular properties of its members, then there is a reason to think this uniquely picks out the cardinality precisification.

h1

The Sorites paradox and non-standard models of arithmetic

December 16, 2008

A standard Sorites paradox might run as follows:

  • 1 is small.
  • For every n, if n is small then n+1 is small.
  • There are non-small numbers.

On the face of it, these three principles are inconsistent, since the first two premisses entail that every number is small by the principle of induction. As far as I know, there is no theory of vagueness on which all three of these sentences come out true (and none of them false.) Nonetheless, it would be desirable if these sentences could be satisfied.

The principle of induction seems to do fine when we are dealing with precise concepts. Thus the induction schema for PA is fine, since it only covers properties definable in arithmetical vocabulary – all of which is precise. However, if we read the induction schema as open ended – that is, as holding even if we were to extend the language with new vocabulary – it is false, for it fails when we introduce vague predicates into the language.

The induction schema is usually proved by appealing to the fact that the naturals are well-ordered: every non-empty subset of the naturals has a least element. If the induction schema is going to fail once we allow vague sets, so should the well-ordering principle. And that seems right: the set of large numbers doesn’t appear to have a least element – there is no first large number. So we have:

  • The set of large numbers has no smallest member.

Again, no theory I know of delivers this verdict. The best we get is with non-classical logics, where it is at best vague whether there exists a least element of the set of large numbers.

Finally, I think we should also hold the following:

  • For any particular number, n, you cannot assert that n is large.

That is, to assert of a given number, n, that it is large is to invite the Sorites paradox. You may assert that there exist large numbers; it’s just that you can’t say exactly which they are. To assert that n is large is to commit yourself to an inconsistency by standard Sorites reasoning, from n-1 true conditionals and the fact that 0 is not large.
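
Just to unwind that Sorites reasoning (this is only the standard derivation, made explicit):

  • Suppose we assert that n is large.
  • The tolerance conditionals license each step from ‘k is large’ to ‘k-1 is large’, so repeated applications take us from ‘n is large’ down to ‘0 is large’.
  • But 0 is not large, so we have asserted our way into a contradiction.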

The proposal I want to consider verifies all three of the bulleted points above. As it turns out, given a background of PA, the initial trio isn’t inconsistent after all. It’s merely \omega-inconsistent (given we’re not assuming open-ended induction.) But this doesn’t strike me as a bad thing in the context of vagueness, since, after all, you can go through each of the natural numbers and convince me it’s not large by Sorites reasoning, yet that shouldn’t shake my belief that there are large numbers.

\omega-inconsistent theories of this kind are formally consistent with the PA axioms, and thus have models by Gödel’s completeness theorem. These are called non-standard models of arithmetic. They basically have all the sets of naturals the ordinary natural numbers have, except that they admit more subsets of the naturals – they admit vague sets of natural numbers as well as the old precise sets. Intuitively this is right – when we only had precise sets we got into all sorts of trouble. We couldn’t even talk about the set of large numbers, because it didn’t exist; it was a vague set.

What is interesting is that some of these new sets of natural numbers don’t have smallest members. In fact, the set of all non-standard elements is one of these sets, but there are many others. So my suggestion here is that the set of large numbers is one of these non-standard sets of naturals.

Finally, we don’t want to be able to assert that n is large, for any given n, since that would lead us to an outright contradiction (via a long series of conditionals.) The idea is that we may assert that there are large numbers out there, we just cannot say which ones. On first glance this might seem incoherent; however, it is just another case of \omega-inconsistency. \{\neg Ln \mid n a numeral \} \cup \{\exists x Lx\} is formally consistent. For example, it is satisfied in any non-standard model of PA with L interpreted as the set of non-standard elements.
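
For what it’s worth, here is a compactness sketch of the same consistency claim (an alternative to appealing directly to a non-standard model): any finite subset of \{\neg Ln \mid n a numeral \} \cup \{\exists x Lx\} \cup PA mentions only finitely many numerals, and is satisfied in the standard model by interpreting L as applying to every number greater than the largest numeral mentioned. So every finite subset is satisfiable, and by compactness the whole set is consistent.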

How to make sense of all this? Well, the first thing to bear in mind is that the non-standard models of arithmetic are not to be taken too seriously. They show that the view in question is consistent, and they are also a good guide to seeing which sentences are in fact true. For example, in a non-standard model the second order universally quantified induction axiom is false, since the second order quantifiers range over vague sets; however, the induction schema is true, provided it only allows instances of properties definable in the language of arithmetic (this is how the schema is usually stated), since those instances define only precise sets. We should not think of the non-standard models as accurate guides to reality, however, since they are constructed from purely precise sets, of the kind ZFC deals with. For example, the set of non-standard elements is a precise set being used to model a vague set. Furthermore, the non-standard models are described as having an initial segment consisting of the “real” natural numbers, with a block of non-standard naturals coming after them. The intended model of our theory shouldn’t have these extra elements: it should have the same numbers, just with more sets of numbers, vague and precise ones.

Another question is: which non-standard model makes the right (second order) sentences true? Since there are only countably many naturals, we can add a second order sentence stating this to our theory (we’d have to check it still means the same thing once the quantifiers range over vague sets as well.) This would force the model to be countable. Call T the first order sentences true in the standard model, plus the second order sentence saying the universe is countable, plus the statements: (i) 0 is small, (ii) for every n, if n is small, n+1 is small, and (iii) there are non-small numbers. T is still consistent (by the Löwenheim-Skolem theorem), and I think this will uniquely pick out the order type of our model as \mathbb{N} + \mathbb{Z}\cdot\mathbb{Q} by a result from Skolem (I can’t quite remember the result right now, but maybe someone can correct me if it’s wrong.) This only gives us the interpretation for the second order quantifiers and the arithmetic vocabulary; obviously it won’t tell us how to interpret the vague vocabulary.

h1

Composition as identity, part II

December 11, 2008

Aside from Leibniz’s law, there are various other constraints identity must obey. For example, every object is identical with at most one thing, so every object presumably is identical* to at most one plurality. But here we have a disanalogy with the relation “x is the fusion of the yy’s”, for x is the fusion of many pluralities. If there may be more than one plurality *identical to x, then our notation for pluralizing x, x*, isn’t justified: * isn’t a function.

A way of sharpening this problem, pointed out by Jeff in the comments, is that you’d want (*)identity(*) to be transitive. For example, my upper and lower body are *identical to me, and I’m identical* to my arms, legs, head and torso. Does that mean my lower and upper body are *identical* to my arms, legs, head and torso? How do you state that they’re different pluralities?

So there appear to be two ways you can go. One is to say that every object is really identical* to only one plurality, and the other is to say that every plurality is *identical to exactly one object. I’ll call the two approaches (a) unique decomposition, and (b) unique composition.

The first seems to be the most natural. By way of analogy, note that the pluralities, over a domain of objects, have many of the formal properties of mereology. (i) there’s no null fusion/no empty plurality, (ii) pluralities are closed under ‘unioning’ and ‘non-empty intersecting’ (fusion and products) (iii) they’re closed under complements (supplementation.)

In fact, they form a complete Boolean algebra (minus its bottom element) under ‘subplurality’, and thus model the standard mereological axioms. However, there are some drawbacks. Firstly, in standard mereological theories you can form pluralities of mereological objects (that’s how unrestricted composition is usually stated.) However, on this picture you can’t, for it would amount to forming pluralities of pluralities – which is nonsense.
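
A quick sketch of the analogy (Python, over a made-up three-atom domain; the two axioms checked – weak supplementation and unrestricted fusion – are just my choice of illustration):

    from itertools import combinations

    atoms = [1, 2, 3]
    # model pluralities as the non-empty subsets of a small domain of atoms
    pluralities = [frozenset(c) for r in range(1, len(atoms) + 1)
                   for c in combinations(atoms, r)]

    def part(x, y):       # 'subplurality', playing the role of parthood
        return x <= y

    def overlap(x, y):
        return bool(x & y)

    # weak supplementation: whenever x is a proper part of y, y has a part disjoint from x
    weak_supp = all(any(part(z, y) and not overlap(z, x) for z in pluralities)
                    for y in pluralities for x in pluralities
                    if part(x, y) and x != y)

    # unrestricted fusion: every non-empty collection of pluralities has a fusion
    fusion = all(frozenset().union(*coll) in pluralities
                 for r in range(1, len(pluralities) + 1)
                 for coll in combinations(pluralities, r))

    print(weak_supp, fusion)  # True True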

You might think this is not too much of a cost; after all, you can always talk about superpluralities when the standard mereologist talks of pluralities.

So this seems to answer our original problem, which was to ensure that many-one identity really associated each object with one plurality. What we have is unique decomposition: there is a unique plurality associated with each object, and that plurality fuses to that object (is *identical to it.) The way we have achieved unique decomposition in this case is by identifying xx with x’s atoms.

There may be other ways to achieve unique decomposition, but it seems they’ll all fall to the following problem. There are some situations where unique decomposition can’t be achieved, at least according to the standard mereologist. One of these is the possibility of a gunky world: a world where everything has a proper part. Formally, we have an atomless Boolean algebra. But if the points in our algebra are pluralities, what are they pluralities of? There cannot be any singleton pluralities, and if there can’t be singleton pluralities, there can’t be objects for there to be pluralities of.

[Side note: by Stone’s representation theorem, any gunky world a standard mereologist can conjure up may be represented by a ‘Henkin’ model of plural logic. Thus, you may feel like you’re in a gunky world – but only because your plural quantifiers are restricted. You’re failing to quantify over all pluralities (in particular, the singletons.)]

The other approach was what I labelled unique composition. Every plurality is *identical to exactly one object. In particular, the table’s legs and surface are *identical to exactly one object, x, and the table’s atoms are *identical to exactly one object, y. Since the two pluralities aren’t *identical*, x and y aren’t identical either. But now we should be worried: this seems to mean we must be able to uniquely assign one object to every plurality in the domain. Since we already have a condition for plurality identity, namely xx ^*\!\!=^* yy \leftrightarrow \forall z(z\prec xx \leftrightarrow z\prec yy), we get the following:

  • \forall x\forall y(x=y \leftrightarrow  \forall z(z\prec xx \leftrightarrow z\prec yy))

This is essentially Frege’s infamous Basic Law V, which entails that there is exactly one object. (Actually, in Frege’s logic it entailed a contradiction, but he allowed there to be empty pluralities.)

In Frege’s system you could derive this, via Russell’s paradox, and I (probably) haven’t written out enough axioms for you to be able to derive the paradox formally. But the problem is still there in the form of Cantor’s theorem, which says you cannot assign a distinct object to each plurality.
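
To make the Cantorian worry vivid with a finite count (a back-of-the-envelope version of the problem, not anything from Frege):

  • With n objects there are 2^n - 1 pluralities of them.
  • Unique composition requires every plurality to be *identical to exactly one object, and the displayed principle requires distinct pluralities to go with distinct objects, so we would need 2^n - 1 \leq n.
  • That holds only when n = 1, and in the infinite case Cantor’s theorem rules out any such one-one assignment of objects to pluralities.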

(Note: I never said this, but \forall x can bind xx and vice versa.)

h1

Composition as identity, part I

December 11, 2008

I’ve been thinking a bit about the (somewhat radical) thesis that an object is literally identical with its parts. So, for example, these things, my parts, are identical to me. One nice thing about this is that you seem to get unrestricted composition for free: you get it from the plural comprehension schema.

However, its main drawback is that it requires you to be able to make sense of many-one identity. Lewis notes one problem with this, namely: my parts are many, whereas I am not. There are a couple of responses out there: Baxter takes this to be a failure of Leibniz’s law, and Sider has a language where plural and singular terms are intersubstitutable. Predicates are polymorphic, and you can say truly that I’m both one and many.

Both these views have crazy consequences (see Sider’s paper “Parthood” to see why.) So I’ve been trying to come up with a more natural way for the composition as identity theorist to go.

Note firstly that Alice, Bob and Fred are human iff Alice is human, Bob is human, and Fred is human. ‘Human’ is a distributive property. Consequently, the atoms that compose me are human iff each atom individually is human. They’re not, so the atoms that compose me aren’t human. However, there is a non-distributive property my atoms have, being human*, which some things have, roughly, if they compose a human. Thus I am human iff my atoms are human*.

So that’s the first step: every monadic predicate of the language, F, has a pluralised homonym, F*. For example, ‘one*’ is short for ‘many’: I am one, the atoms that compose me are one* (they’re many.) The second step: for every singular variable (or name), x, there is a pluralised version, x*. I shall follow the tradition in plural logic and use xx for x*. So, for example, ‘Andrew*’ is short for ‘Andrew’s parts’. Finally, identity. We have one-one identity, =, many-one identity, *=, one-many identity, =*, and many-many identity, *=*. For n-place relations, well, you can work out your own notation, but it’s the same idea as identity.

We are now in a position to state Leibniz’s law. There are actually lots of versions; I’ll just state a couple:

  • \phi^*(xx), xx ^*\!\!=y \vdash \phi(y)
  • \phi(x), x=^*yy \vdash \phi^*(yy)

(you must also add suitable identity axioms such as x =^* xx, xx ^*\!\!=x, etc…). So, for example, Fred is one, Fred is Fred’s parts (that is, Fred =* Fred*), therefore Fred’s parts are one* (Fred* are one*.) So, Fred’s parts are many. I’m human, I’m my parts, so my parts are human*. That’s the idea.

So much for identity. How do we get mereology out of this? Define: x is a part of y iff the xx’s are among the yy’s. Supposing \sqsubseteq is parthood, we have the following definition

  • xx ^*\!\!\sqsubseteq^* yy \leftrightarrow \forall z(z \prec xx \rightarrow z \prec yy)

where \prec is the ‘is one of’ relation from plural logic. Thus ^*\!\!\sqsubseteq^* is definable in purely logical vocabulary, so if \sqsubseteq is truly its homonym, parthood is logical. What’s more, unrestricted composition falls out from plural comprehension as desired.

But the other good thing about this formulation is that it avoids some of the crazy consequences Sider claims they get. For example, allegedly the principle: x is one of y_1, \ldots, y_n iff x=y_1 or … or x=y_n, fails. But his argument required moving between (in my language) ‘x is part of y’ and ‘x is part* of y’, rather than ‘x is a part* of yy’. Similarly he had to move between ‘x is-one-of xx’ and ‘x *is-one-of yy’ rather than ‘xx *is-one-of yy’ (his argument is just ungrammatical in this framework.)

Similarly, because he doesn’t pay attention to the difference between parthood, parthood*, *parthood and *parthood*, he gets all kinds of weird things coming out, e.g. ‘Tom, Dick and Harry carried the basket’ iff ‘Dom, Hick and Tarry carried the basket’, where Dom is the fusion of Dick’s head and Tom’s body, Hick the fusion of Harry’s head and Dick’s body, and Tarry the fusion of Tom’s head and Harry’s body. Following in the spirit of my rules, you can get from the LHS to ‘Tom*, Dick* and Harry* (carried the basket)*’, where ‘(carried the basket)*’ is a superplural predicate. But you can’t then swap bits from the plural terms ‘Tom*’, ‘Dick*’ and ‘Harry*’ and expect them to still satisfy (carried the basket)*.

Lastly, a predicate, P, is distributive iff P(x_1, \ldots x_n) \Leftrightarrow P(x_1) \wedge \ldots \wedge P(x_n). Sider claims there are no distributive predicates if you’re a composition as identity theorist. But again, the argument seems to rely on being able to freely move between plural and singular terms, without moving between the corresponding plural and singular predicates.
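
A toy sketch of the distributive/non-distributive contrast (Python; the two predicates are made up purely for illustration):

    def is_even(*xs):
        # distributive: true of some things iff true of each of them individually
        return all(x % 2 == 0 for x in xs)

    def sum_to_ten(*xs):
        # non-distributive: depends on the plurality as a whole
        return sum(xs) == 10

    print(is_even(2, 4, 6), is_even(2) and is_even(4) and is_even(6))  # True True
    print(sum_to_ten(4, 6), sum_to_ten(4) and sum_to_ten(6))           # True False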

OK, so this seems to be a natural way to formulate the position. That said, I think the position is ultimately incoherent, so I’ll talk a bit about that in the next post…

h1

Supertask decision making

December 2, 2008

I have a little paper writing up the supertask puzzle I posted recently. I’ve added a second puzzle that demonstrates the same problem, but doesn’t use the axiom of choice (it’s basically just a version of Yablo’s paradox), and I’ve framed the puzzles in terms of failures of the deontic Barcan formulae.

Anyway – if anyone has any comments, I’d be very grateful to hear them!