Truth Functionality

May 4, 2009

I’ve been thinking a lot about giving intended models to non-classical logics recently, and this has got me very muddled about truth functionality.

Truth functionality seems like such a simple notion. An n-ary connective, \oplus, is truth functional just in case the truth value of \oplus(p_1, \ldots, p_n) depends only on the truth values of p_1, \ldots, p_n.

But cashing out what “depends” means here is harder than it sounds. Consider, for example, the following (familiar) connectives.

  • |\Box p| = T iff, necessarily, |p| = T.
  • |p \vee q| = T iff |p| = T or |q| = T.

Why does the truth value of p \vee q, in the second example, depend only on the truth values of its components, while the truth value of \Box p, in the first, does not? Both have been given in terms of the truth value of p. It would be correct, but circular, to say that the truth value of \Box p doesn’t depend on the truth value of p because its truth value isn’t definable from the truth value of p using only truth functional vocabulary in the metalanguage. But clearly this isn’t helpful – for we want to know what counts as truth functional vocabulary, whether in the metalanguage or anywhere else. For example, what distinguishes the first example from the second? To say that \vee is truth functional and \Box isn’t because “or” is truth functional and “necessarily” isn’t is totally unhelpful.

Usually the circularity is better hidden than this. For example, you can talk about “assignments” of truth values to sentence letters, and say that if two assignments agree on the truth values of p_1, \ldots, p_n then they’ll agree on \oplus(p_1, \ldots, p_n). But what are “assignments” and what is “agreement”? One could simply stipulate that assignments are functions in extension (sets of ordered pairs) and that f and g agree on some sentences if f(p)=g(p) for each such sentence p.
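
Spelled out, the proposal is roughly this (with f and g ranging over assignments in the above sense):

\oplus is truth functional iff, for all assignments f and g: if f(p_i) = g(p_i) for each i \leq n, then f(\oplus(p_1, \ldots, p_n)) = g(\oplus(p_1, \ldots, p_n)).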

But there must be more restrictions than this: presumably the assignment that assigns F to p and to q but T to p \vee q is not an acceptable assignment. If such assignments were allowed, there would be assignments giving the same truth values to p and q but different truth values to p \vee q, making disjunction non-truth-functional. Thus we must restrict ourselves to acceptable assignments: assignments which preserve the truth functionality of the truth functional connectives.

Secondly, there need to be enough assignments. The talk of assignments is only OK if there is an assignment corresponding to the intended assignment of truth values to English sentences. I believe that it’s vague whether p just in case it’s vague whether “p” is true (this follows from the assertion that the T-schema is determinate). Thus if there’s vagueness in our language, we had better admit assignments such that it can be vague whether f(p)=T, and so the restriction to precise assignments is not in general OK. Similarly, if you think the T-schema is necessary, the restriction of assignments to functions in extension is not innocent either – e.g., if p is true but not necessarily true, we need an assignment such that f(p)=T and possibly f(p)=F.
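
In symbols – writing \nabla for “it is vague whether”, \Delta for “determinately” and Tr for the truth predicate, and granting a modest background logic for \Delta – the thought is roughly:

\Delta(Tr(“p”) \leftrightarrow p) entails \nabla p \leftrightarrow \nabla Tr(“p”).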

Let me take an example where I think it really matters. A non-classical logician – for concreteness, take a proponent of Lukasiewicz logic – will typically think there are more truth functional connectives (of a given arity) than the classical logician does. For example, our Lukasiewicz logician thinks that the conditional is not definable from negation and disjunction. (NOTE: I do not mean truth functional on the continuum of truth values [0, 1] – I mean on {T, F} in a metalanguage where it can be vague whether f(p)=T.) “How can this be?” you ask; surely we can just count the truth tables: there are 2^{2^n} truth functional n-ary connectives.
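
Just to make the classical count explicit: a classical n-ary truth function is a map from the 2^n combinations of truth values for p_1, \ldots, p_n to {T, F}, so there are 2^{2^n} of them.

  • n = 1: 2^{2^1} = 4 – the constant-T and constant-F connectives, the “truth operator”, and negation.
  • n = 2: 2^{2^2} = 16.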

To see why it’s not so simple, consider an example. We want to calculate the truth table of p \rightarrow q.

  • p \rightarrow q: reads T just in case the second column reads T if the first column does.
  • p \vee q: reads T just in case the first or the second column reads T.
  • \neg p: reads T if the first column doesn’t read T.

The classical logician claims that the truth table for p \rightarrow q should be the same as the truth table for \neg p \vee q. This is because she accepts, in the metalanguage, the equivalence between “the second column reads T if the first does” and “the second column reads T or the first doesn’t”. However the non-classical logician denies this – the truth values will differ in cases where it is vague what truth values the first and second columns read. For example, if it is vague whether each column reads T, but the second reads T if the first does (suppose the first column reads T iff 88 is small, and the second column reads T iff 87 is small), then the column for \rightarrow will determinately read T. But the statement that the column for \neg p \vee q reads T will be equivalent to an instance of excluded middle in the metalanguage which fails, so it will be vague in that case whether it reads T.
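
For comparison only (this is the [0, 1] degree-theoretic setting set aside in the note above, but it gives a familiar picture of how the two tables come apart), the standard Lukasiewicz clauses are:

  • |\neg p| = 1 - |p|
  • |p \vee q| = \max(|p|, |q|)
  • |p \rightarrow q| = \min(1, 1 - |p| + |q|)

So if |p| = |q| = 0.5 then |p \rightarrow q| = 1 while |\neg p \vee q| = 0.5: the conditional reads fully true while the classically equivalent disjunction reads middling.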

The case that \rightarrow is truth functional for this non-classical logician seems to me pretty compelling. But why, then, can we not make exactly the same case for the truth functionality of \Box? I see almost no disanalogy in the reasoning. Suppose I deny that negation and the truth operator are the only unary truth functional connectives, and claim that \Box is a further one. However, the only cases where negation and the truth operator come apart from necessity are those where it is contingent what the first column of the truth table reads.

I expect there is some way of untangling all of this, but I think, at least, that the standard explanations of truth functionality fail to do so.

6 comments

  1. Hi there,

    I need to think some more about the vague truth tables. But I do think that it’s not obvious that we need vague truth-tables, even given the disquotational view you describe.

    You say: “I believe that it’s vague whether p just in case it’s vague whether “p” is true (this follows from the assertion that the T-schema is determinate). Thus if there’s vagueness in our language, we had better admit assignments such that it can be vague whether f(p)=T.”

    I don’t see that we need to admit assignments such that it is vague what truth-values they assign. An alternative is to say that all truth-value assignments are precise, and when it’s vague whether “p” is true, it’s vague which truth-value assignment is the designated one.

    In effect, we can resist the quantifier shift from: “it’s vague whether, on the intended interpretation f, f(p)=T” to “on the intended interpretation, it’s vague whether f(p)=T”. The wide-scope reading of the definite description “the intended interpretation” would force interpretations with vague truth values. The narrow scope reading allows vagueness in the description “the intended interpretation” to take the strain.

    This is the sort of thing I’d expect “non-standard supervaluationists” to say.

    So I’d be tempted to characterize truth-functionality in terms of whether every admissible assignment that agrees on the truth values assigned to the components also agrees on the truth value assigned to the compound. “Admissibility” is (non-circularly) cashed out in terms of preserving the semantic value of logical constants (of course, identifying the logical constants is a major task). And “agreement” can be understood in the obvious way, given that (arguably) we only need to work with fully precise truth-value assignments, and just say it’s vague which is the designated/intended one.

    I’m not sure what a non-classical logician who works with a vague metalanguage in describing truth-values or truth-tables should do. There’s some discussion of the difficulties she faces in Williamson’s Vagueness book, IIRC.


  2. Hi Robbie,

    That’s very interesting – I hadn’t thought about letting it be vague what the designated value is in non-supervaluationist contexts. I’ll reply here again when I’ve had time to think about it some more. (BTW, non-standard supervaluationism is the kind that keeps the T-schema, right?)

    Just quickly – I’m still a bit worried about your proposal for defining truth functionality. “It’s expressible in English that p” is truth functional in English, but presumably it’s not a logical constant. (It seemed like your definition was relying on the coincidence that most of the truth functional connectives happened to be logical too.)


  3. I was mostly thinking about supervaluation-style approaches with the “vague what the intended interpretation is” line—though the need to justify the quantifier-shift seems general.

    In the context of a degree theory, some people want to introduce a disquotational truth predicate – with T(‘p’) always taking the same value as p (e.g. Nick JJ Smith does this at some point in his recent book, I think). Then whenever p has a middling degree of truth, true(‘p’) has a middling degree of truth – meeting your requirement. Again, this doesn’t require vague assignments of degrees of truth.

    (The non-standard supervaluationist—yes, I was thinking about people who are keen on “penumbral truth” as the right analysis of truth, and so keep the T-schema. McGee/McLaughlin are one such. The view Williamson attacks at the end of the supervaluational chapter is in this vein.)

    I should think some more about the issue you raise above. One thing: in what setting do you want to formulate the puzzle? You’re using “necessarily” in the metalanguage to characterize the box in the statement above. But that’s not terribly standard. And it’s not obviously right either – e.g. “water is H2O” might satisfy the LHS but not the RHS (I’m thinking of |p| = T as “mapped by the intended interpretation to the true”, where “the intended interpretation” is non-rigid). A standard approach would be to relativize to worlds, or something similar. “Box(p)” is true iff necessarily, p – that might be OK, but as it stands it looks like a bit of a strange hybrid between T-theoretic and model-theoretic approaches.

    What I’d like to say is that even if you treat “Necessarily” as a logical constant, the approach I sketch will rightly say that “and” is truth-functional, and “necessarily” isn’t. But to understand what it means to take “necessarily” to be a logical constant, we’d need some particular semantic approach in mind, I reckon. John MacFarlane has some interesting stuff in his dissertation on these issues (within a possible-worlds framework).

    I guess one project here is just to understand the truth-functional/non-truth-functional distinction as it applies to logical connectives (construed broadly to include e.g. modal operators). But it sounds like you want something more than that: something that classifies all English expressions in the right way. I need to think more about that. Have you looked at all at the literature on opaque vs. transparent contexts to see what sort of criteria Quine, Kaplan et al. came up with? It seems a similar kind of issue.


  4. I’m not actually familiar with Smith’s stuff. What is vagueness on his theory? Having an intermediate truth value? (I suppose I shouldn’t be calling them truth values either, because semantic value 1 isn’t the same as truth. I’m not quite sure what work the degrees of “truth” would be doing for such a theorist.)

    Besides the determinacy of the T-schema, I think there are other reasons to think there should be vague assignments of degrees of truth. For example, consider the metalinguistic Sorites:

    |1 is small| = 1

    If |n is small| = 1 then |n+1 is small| = 1

    |10^100 is small| = 1

    I find this Sorites as compelling as the standard one for “small” – and once you’ve admitted vagueness in assignments of degrees of truth, it seems we haven’t really succeeded in characterizing vagueness as having “intermediate truth value” after all. If you’re going to allow vagueness in truth value assignments, why not just go for the bivalent semantics which allows vagueness in truth value? (I’m working on a paper on intended model theories for non-classical logics where these arguments are laid out more carefully, if you’re interested.)

    A possibly confusing thing about that example was the fact that it might not be truth functional in another language – I was thinking that “p \rightarrow Hesperus \neq Phosphorus” is also truth functional, but not, like negation, a logical constant.

    “A standard approach would be to relativize to worlds, or something similar. “Box(p)” is true iff necessarily, p – that might be OK, but as it stands it looks like a bit of a strange hybrid between T-theoretic and model-theoretic approaches.”

    I was thinking that you don’t need to be a T-theorist to use a (partially) homophonic metalanguage (another difference, I assume, is that the T-theorist only gives an account of truth, not validity). One of the problems with using a non-modal metalanguage is that you won’t have an intended model of quantified systems unless you quantify over non-actual objects. I was thinking of the T-schema as necessary (and of truth as semantic value 1 – to rule out Smith), which means I shouldn’t get those counterexamples. Of course, there are ways of reading “‘S’ is true in English” that aren’t rigid (I was thinking along the lines of Kaplan, in “Words”).

    “What I’d like to say is that even if you treat “Necessarily” as a logical constant, the approach I sketch will rightly say that “and” is truth-functional, and “necessarily” isn’t.”

    This rules out assignments that give “or” wacky interpretations. But what about the argument that you should rule *in* assignments that are vague or contingent in their assignment of truth values? I wasn’t so much concerned with defending the truth functionality of necessity. But the argument that there are more truth functional connectives in the case of non-classical logics seems compelling to me (if you can entertain this kind of non-classical logician). At what point do you reject the parallel argument that necessity is truth functional? (So the non-classical logician will say agreement is: f(p)=T iff g(p)=T, where “iff” is the non-classical iff, not definable from disjunction and negation. It’s obvious that we should reject the analogous move where “iff” is interpreted as “necessarily, p iff q”. But I’m worried the explanation might go along the lines of: “iff” thus interpreted isn’t truth functional.)
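
    (In symbols, the two agreement clauses being contrasted are, roughly: f and g agree on p iff f(p)=T \leftrightarrow g(p)=T, where \leftrightarrow is the non-classical biconditional of the metalanguage; versus f and g agree on p iff \Box(f(p)=T \leftrightarrow g(p)=T), writing \Box for the metalinguistic “necessarily” and reading \leftrightarrow classically.)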


  5. I’m trying to remember Williamson’s objections to doing model theory in a vague metalanguage. If I remember correctly, he was directing them at the non-classical logicians who use many-valued model theory.

    In the aforementioned paper, I’m giving (the standard) bivalent model theory for propositional Lukasiewicz logic, in a second-order Lukasiewicz metalanguage (models are possibly-vague collections of sentence letters). The sentences that are true in every such model (determinately) are precisely those provable in Lukasiewicz logic. (Interestingly, it is sometimes vague whether a sentence is true in every model!)


  6. Lots of stuff there! The paper sounds interesting.

    Just to pick up on the first point: the “degrees of truth” have various roles for Smith, other than defining truth (I’m not sure he commits himself to the disquotational notion being the *only* thing that has a claim to the title truth—it’s just that it’s there if we need to use it). One primary role is as an expert function for beliefs (or something similar)—basically your credences should match expected degree of truth. And they’re also used in defining the logic.


