
Precisifications

August 19, 2009

I’ve been wondering just how much content there is to the claim that vagueness is truth on some but not all acceptable ways of making the language precise. It is well known that both epistemicists and supervaluationists accept this, so the claim is clearly not substantive enough to distinguish between *these* views. But does it even commit us to classical logic? Does it rule out *any* theory of vagueness?

If one allows quantification over non-classical interpretations, it seems clear that this doesn’t impose much of a constraint. For example, if we include among our admissible interpretations Heyting algebras, or Lukasiewicz valuations, or what have you, it seems clear that we needn’t (determinately) have a classical logic. Similar points apply if one allows non-classically described interpretations: interpretations that perhaps use bivalent classical semantics, but are constructed from sets for which membership may disobey excluded middle (e.g., the set of red things).

In both cases we needn’t get classical logic. But this observation seems trite; and besides, they’re not really ‘ways of making the language precise’. A precise non-bivalent interpretation is presumably one in which every atomic sentence letter receives value 1 or 0, thus making it coincide with a classical bivalent interpretation – and presumably no vaguely described precisification is a way of making the language completely precise either.

So a way of sharpening the claim I’m interested in goes as follows: vagueness is truth on some but not all admissible ways of making the language precise, where ‘a way of making the language precise’ (for a first-order language) is a Tarskian model constructed from crisp sets. A set X is crisp iff \forall x(\Delta(x\in X) \vee \Delta(x \not\in X)). This presumably entails \forall x(x\in X \vee x\not\in X), which is what crispness amounts to for a non-classical logician. An admissible precisification is defined as follows:

  • v is correct iff every instance of the schema v \models \ulcorner \phi \urcorner \leftrightarrow \phi holds.
  • v is admissible iff it’s not determinately incorrect.
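These definitions can be made concrete in a small numerical sketch. Everything below is my own illustrative assumption, not from the post: a toy two-atom language, arbitrarily chosen Lukasiewicz values for the atoms, and one crisp valuation playing the role of a precisification.

```python
# Hypothetical sketch: Lukasiewicz connectives, plus the "correctness" score
# of a crisp valuation v, i.e. the inf over sentences s of ||v |= s <-> s||.
# The toy language and its semantic values are assumptions for illustration.

def limp(a, b):   # Lukasiewicz implication: min(1, 1 - a + b)
    return min(1, 1 - a + b)

def liff(a, b):   # Lukasiewicz biconditional, equivalently 1 - |a - b|
    return min(limp(a, b), limp(b, a))

# semantic values of the atomic sentences (chosen arbitrarily)
value = {"p": 0.75, "q": 0.5}

# a classical (crisp) valuation: every sentence gets 1 or 0
v = {"p": 1, "q": 0}

# when v |= s the T-schema instance has value ||s||;
# when v |/= s it has value ||~s|| = 1 - ||s||
correct = min(liff(v[s], value[s]) for s in value)
print(correct)  # 0.5, i.e. min(||p||, ||~q||)
```

On these toy values v gets a correctness score of 0.5, so v is admissible just in case a score of 0.5 falls short of being determinately incorrect.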

Intuitively, being correct means getting everything right – v is correct when truth-according-to-v obeys the T-schema. Being admissible means not getting anything determinately wrong – i.e., not being determinately incorrect. Clearly this is a constraint on a theory of vagueness, not an account. If it were an account of vagueness it would be patently circular, as both ‘crisp’ and ‘admissible’ are defined in terms of ‘determinately’ (and hence ‘vague’).

Now that I’ve sharpened the claim, my question is: just how much of a constraint is this? As we noted, this seems to be something that every classicist can (and probably should) hold, whether they read \nabla as a kind of ignorance, semantic indecision, ontic indeterminacy, a truth-value gap, context sensitivity, or as playing a particular normative role with respect to one’s credences, to name a few. Traditional accounts of supervaluationism don’t really say much about how we should read \nabla, so the claim that vagueness is truth on some but not all admissible precisifications doesn’t say very much at all.

But what is even worse is that even non-classical logicians have to endorse this claim. I’ll show this is so for the Lukasiewicz semantics, but I’m pretty sure it will generalise to any sensible logic you’d care to devise. [Actually, for a technical reason, you have to show it’s true for Lukasiewicz logic with rational constants. This is no big loss, since it’s quite plausible that for every rational in [0,1] some sentence of English has that truth value: e.g. the sentences “x is red”, for x ranging over shades in the spectrum between orange and red, would do.]

Supposing that \Delta \phi \leftrightarrow \forall v(admissible(v) \rightarrow v \models \ulcorner \phi \urcorner) has semantic value 1, you can show, with a bit of calculation, that this requires that \delta \|\phi\| = \inf_{v\not\models \phi}\delta(1-\inf_{v \models \psi}\|\psi\|), where \delta is the function interpreting \Delta. Assuming that \delta is continuous, this simplifies to: \delta\|\phi\| = \delta(1-\sup_{v\not\models \phi}\inf_{v\models \psi}\|\psi\|). Now, no matter what v is, so long as v \not\models \phi we’re going to get that \inf_{v \models \psi}\|\psi\| \leq \|\neg\phi\|, since v is classical (i.e. v \models \neg\phi, so \neg\phi is itself among the \psi that v makes true). But since we added all those rational constants, the supremum of all these infima is going to be \|\neg\phi\| itself. So \|\phi\| = 1-\sup_{v\not\models \phi}\inf_{v\models \psi}\|\psi\| no matter what.
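The identity can be checked numerically in a toy model. All of the choices below are illustrative assumptions of mine, not part of the argument: two atoms with fixed values, sentences closed only under negation, \delta(x) = x^2 as a continuous monotone interpretation of \Delta, and the four classical valuations as the precisifications.

```python
# Toy numerical check of the calculation above. All choices here (the atoms,
# their values, delta(x) = x**2, closing sentences under negation only) are
# illustrative assumptions, not part of the original argument.
from itertools import product

value = {"p": 0.75, "q": 0.5}
delta = lambda x: x ** 2          # an assumed continuous, monotone delta

def inf_true(v):
    # inf of ||psi|| over sentences psi with v |= psi,
    # where the sentences are the atoms and their negations
    pairs = [(value[s], v[s]) for s in value] + \
            [(1 - value[s], 1 - v[s]) for s in value]
    return min(val for val, crisp in pairs if crisp == 1)

phi = "p"
vals = [dict(zip(value, bits)) for bits in product([0, 1], repeat=len(value))]
counter = [v for v in vals if v[phi] == 0]    # valuations with v |/= phi

lhs = min(delta(1 - inf_true(v)) for v in counter)
rhs = delta(1 - max(inf_true(v) for v in counter))   # the continuity step
print(lhs, rhs, delta(value[phi]))  # all three agree: 0.5625
```

Here the sup of the infima is \|\neg p\| = 0.25, so both sides come out as \delta(0.75) = 0.5625, matching \delta\|p\|; in this tiny model the negations of the atoms already play the role the rational constants play in the general argument.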

So if one assumes that \delta is continuous, it follows that determinacy is truth in every admissible precisification (and that vagueness is truth in some but not all admissible precisifications). The claim that \delta should be continuous amounts to the claim that a conjunction of determinate truths is determinate, which, as I’ve argued before, cannot be denied unless one denies either that infinitary conjunction is precise or that vagueness is hereditary.
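The link between continuity and determinate conjunctions can be illustrated with two toy deltas of my own devising (modelling conjunction by inf, as in the Lukasiewicz semantics):

```python
# Sketch (my own toy example): why continuity of delta tracks the principle
# that an infinitary conjunction of determinate truths is determinate.
# Both deltas below are illustrative assumptions.

cont_delta = lambda x: max(0.0, 2 * x - 1)        # continuous, monotone
jump_delta = lambda x: 1.0 if x > 0.5 else 0.0    # discontinuous at 0.5

# a decreasing sequence of truths approaching 0.5 from above;
# the infinitary conjunction's value is exactly 0.5
truths = [0.5 + 2 ** -i for i in range(1, 50)]
conj = min(truths)   # finite stand-in for the inf of the sequence

# with the discontinuous delta, every conjunct is fully determinate...
print(all(jump_delta(a) == 1.0 for a in truths))   # True
# ...yet the conjunction's limiting value 0.5 is not determinate at all
print(jump_delta(0.5))                              # 0.0
# the continuous delta cannot pull this trick: inf of deltas = delta of inf
print(min(cont_delta(a) for a in truths), cont_delta(conj))  # equal
```

With the discontinuous delta each conjunct counts as determinate while the conjunction does not; a continuous monotone delta commutes with the inf, so the failure can’t arise.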
