B entails that a conjunction of determinate truths is determinate

October 26, 2010

I know it’s been quiet for a while around here. I have finally finished a paper on higher order vagueness which has been a long time coming, and since I expect it to be in review for quite a while longer I decided to put it online. (Note: I’ll come back to the title of this post in a bit, after I’ve filled in some of the background.)

The paper is concerned with a number of arguments that purport to show that it is always a precise matter whether something is determinate at every finite order. This would entail, for example, that it was always a precise matter whether someone was determinately a child at every order, and thus, presumably, that this is also a knowable matter. But it seems just as bad to be able to know things like “I stopped being a determinate child at every order after 123098309851248 nanoseconds from my birth” as to know the corresponding kinds of things about being a child.

What could the premisses be that give such a paradoxical conclusion? One of the principles, distributivity, says that a (possibly infinite) conjunction of determinate truths is determinate; the other, B, says p \rightarrow \Delta\neg\Delta\neg p. If \Delta^* p is the conjunction of p, \Delta p, \Delta\Delta p, and so on, distributivity easily gives us (1) \Delta^*p \rightarrow\Delta\Delta^* p. Given that determinacy obeys the modal logic K we quickly get \Delta\neg\Delta\Delta^*p \rightarrow\Delta\neg\Delta^* p, which combined with \neg\Delta^* p\rightarrow \Delta\neg\Delta\Delta^* p (an instance of B) gives (2) \neg\Delta^* p\rightarrow\Delta\neg\Delta^* p. Excluded middle together with (1) and (2) gives us \Delta\Delta^* p \vee \Delta\neg\Delta^* p, which is the bad conclusion.
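Since B corresponds to symmetry of the accessibility relation, the argument can be spot-checked on finite models. Here is a quick sanity check in Python (the encoding is mine): reading \Delta as a box over the accessibility relation and \Delta^* as a box over its reflexive-transitive closure, the bad conclusion \Delta\Delta^* p \vee \Delta\neg\Delta^* p holds at every world of every randomly generated symmetric frame.

```python
import random

def box(R, worlds, S):
    # Delta S: worlds all of whose R-successors lie in S
    return {w for w in worlds if all(v in S for v in R[w])}

def star_box(R, worlds, S):
    # Delta* S: worlds w such that w and everything R-reachable from w lies in S
    reach = {w: {w} for w in worlds}
    changed = True
    while changed:
        changed = False
        for w in worlds:
            new = set().union(*[reach[v] for v in R[w]]) if R[w] else set()
            if not new <= reach[w]:
                reach[w] |= new
                changed = True
    return {w for w in worlds if reach[w] <= S}

random.seed(0)
for trial in range(200):
    worlds = list(range(random.randint(1, 5)))
    R = {w: set() for w in worlds}
    for u in worlds:
        for v in worlds:
            if random.random() < 0.4:
                R[u].add(v)
                R[v].add(u)        # symmetry: these frames validate B
    P = {w for w in worlds if random.random() < 0.5}
    star = star_box(R, worlds, P)                 # Delta* p
    lhs = box(R, worlds, star)                    # Delta Delta* p
    rhs = box(R, worlds, set(worlds) - star)      # Delta not-Delta* p
    assert lhs | rhs == set(worlds)               # the "bad conclusion"
```

Dropping the symmetry clause allows counterexamples (a world that sees both a world where \Delta^* p holds and one where it fails lands in neither disjunct), which is some informal evidence that B is doing the work.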

In the paper I argue that B is the culprit.* The main moving part in Field’s solution to this problem, by contrast, is the rejection of distributivity. I think I finally have a conclusive argument that it is B that is responsible, namely that B actually *entails* distributivity! In other words, no matter how you block the paradox you’ve got to deny B.

I think this is quite surprising and the argument is quite cute, so I’ve written it up in a note. I’ve put it in a pdf rather than post it up here, but it’s only two pages and the argument is actually only a few lines. Comments would be very welcome.

* Actually a whole chain of principles weaker than B can cause problems, the weakest I consider being \Delta(p\rightarrow\Delta p)\rightarrow(\neg p \rightarrow \Delta\neg p), which corresponds to the frame condition: if x can see y, there is a finite chain of steps from y back to x, each step of which x can see.


Interpreting the third truth value in Kripke’s theory of truth

March 28, 2010

Notoriously, there are many different theories of untyped truth which use Kripke’s fixed point construction in one way or another as their mathematical basis. The core result is that one can assign every sentence of a semantically closed language one of three truth values in such a way that \phi and Tr(\ulcorner\phi\urcorner) receive the same value.
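To make the construction concrete, here is a toy implementation in Python (the sentence encoding and example names are mine, and I use the strong Kleene scheme): starting with Tr undefined everywhere, we repeatedly apply the Kripke jump, promoting each sentence that currently gets a classical value into the interpretation of Tr, until nothing changes.

```python
# Sentences: ('atom', a), ('neg', s), ('or', s, t), ('tr', label), where a label
# names a sentence in `sents` -- which is how self-reference gets in.

def val(s, ground, tr):
    # strong Kleene valuation; tr maps labels to 0/1, anything else gets 0.5
    kind = s[0]
    if kind == 'atom':
        return ground[s[1]]
    if kind == 'neg':
        return 1 - val(s[1], ground, tr)
    if kind == 'or':
        return max(val(s[1], ground, tr), val(s[2], ground, tr))
    if kind == 'tr':
        return tr.get(s[1], 0.5)

def minimal_fixed_point(sents, ground):
    tr = {}                      # stage 0: Tr undefined everywhere
    while True:
        # the "jump": collect every sentence that now gets a classical value
        new = {lab: v for lab in sents
               if (v := val(sents[lab], ground, tr)) in (0, 1)}
        if new == tr:
            return {lab: new.get(lab, 0.5) for lab in sents}
        tr = new

sents = {
    'L': ('neg', ('tr', 'L')),   # the liar: "L is not true"
    'K': ('tr', 'K'),            # the truth-teller
    'G': ('tr', 'S'),            # grounded: "'snow is white' is true"
    'S': ('atom', 'snow'),
}
fp = minimal_fixed_point(sents, {'snow': 1})
# the liar and the truth-teller stay at 0.5; the grounded sentences go classical
print(fp)
```

At the fixed point \phi and Tr(\ulcorner\phi\urcorner) do receive the same value, which is all the construction is asked to deliver.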

However, how one interprets these values, how they relate to valid reasoning and how they relate to assertability is left open. There are classical interpretations in which assertability goes by truth in the classical model which assigns Tr the positive extension of the fixed point, and consequence is classical (Feferman’s theory KF.) There are paraconsistent interpretations in which the middle value is thought of as “true and false”, and assertability and validity go by truth and preservation of truth. There’s also the paracomplete theory, where the middle value is understood as neither true nor false and assertability and validity are defined as in the paraconsistent case. Finally, you can mix these views as Tim Maudlin does – for Maudlin assertability is classical but validity is the same as on the paracomplete interpretation.

In this post I want to think a bit more about the paracomplete interpretations of the third truth value. A popular view, which originated with Kripke himself, is that the third truth value is not really a truth value at all. For a sentence to have that value is simply for the sentence to be ‘undefined’ (I’ll use ‘truth status’ instead of ‘truth value’ from now on.) Undefined sentences don’t even express a proposition – something bad happens before we can even get to the stage of assigning a truth value. It simply doesn’t make sense to ask what the world would have to be like for a sentence to ‘halfly’ hold.

This view seems to have a number of problems. The most damning, I think, is the theory’s inability to state this explanation of the third truth status. For example, we can state what it is to fail to express a proposition in the language containing the truth predicate: a sentence has truth value 1 if it’s true, truth value 0 if its negation is true, and truth status 1/2, i.e. doesn’t express a proposition, if neither it nor its negation is true.

In particular, we have the resources to say that the liar sentence does not express a proposition: \neg Tr(\ulcorner\phi\urcorner)\wedge\neg Tr(\ulcorner\neg\phi\urcorner). However, since neither conjunct of this sentence expresses a proposition, the whole sentence, ‘the liar does not express a proposition’, does not itself express a proposition either! Furthermore, the sentence immediately before this one doesn’t express a proposition either (and neither does this one.) It is never possible to say that a sentence doesn’t express a proposition unless you’ve either failed to express a proposition, or you’ve expressed a false proposition. What’s more, we can’t state the fixed point property: we can’t say that the liar sentence has the same truth status as the sentence that says the liar is true, since that won’t express a proposition either: the instance of the T-schema for the liar sentence fails to express a proposition.

The ‘no proposition’ interpretation of the third truth value is inexpressible: if you try to describe the view you fail to express anything.

Another interpretation rejects the third value altogether. This interpretation is described in Field’s book, but I think it originates with Parsons. The model for assertion and denial is this: assert just the things that get value 1 in the fixed point construction and reject the rest. Thus the sentences “some sentences are neither true nor false” and “some sentences do not express a proposition” should be rejected, as they come out with value 1/2 in the minimal fixed point. As Field points out, though, this view is also expressively limited – you don’t have the resources to say what’s wrong with the liar sentence. Unlike in the previous case, where you did have those resources but always failed to express anything with them, in this case being neither true nor false is not what’s wrong with the liar, since we reject that the liar is neither true nor false. (Although Field points out that while you can classify problematic sentences in terms of rejection, you can’t classify contingent liars, where you’d need to say things like ‘if such and such were the case, then s would be problematic’, since this requires an embeddable operator of some sort.)

I want to suggest a third interpretation. The basic idea is that, unlike the second interpretation, there is a sense in which we can communicate that there is a third truth status, and unlike the first, 1/2 is a truth value, in the sense that sentences with that status express propositions and those propositions “1/2-obtain” – if the world is in this state I’ll say the proposition obtails.

In particular, there are three ways the world can be with respect to a proposition: things can be such that the proposition obtains, such that it fails, and such that it obtails.

What happens if you find out a sentence has truth status 1/2 (i.e. you find out it expresses a proposition that obtails)? Should you refrain from adopting any doxastic attitude, say, by remaining agnostic? I claim not – agnosticism comes about when you’re unsure about the truth value of a sentence, but in this case you know the truth value. However it is clear you should neither accept nor reject it either – these are reserved for propositions that obtain and fail respectively. It seems most natural on this view to introduce a third doxastic attitude: I’ll call it receptance. When you find out a sentence has truth value 1 you accept, when you find out it has value 0 you reject, and when you find out it has value 1/2 you recept. If you haven’t found out the truth value yet you should withhold all three doxastic attitudes and remain agnostic.

How do you communicate to someone that the liar has value 1/2? Given that the sentence which says the liar has value 1/2 also has value 1/2, you should not assert that the liar has value 1/2. You assert things in the hope that your audience will accept them, and this is clearly not what you want if the thing you want to communicate has value 1/2. Similarly you deny things in the hope that your audience will reject them. Thus this view calls for a completely new kind of speech act, which I’ll call “absertion”, that is distinct from the speech acts of assertion and denial. In a bivalent setting the goal of communication is to make your audience accept true things and reject false things, and once you’ve achieved that your job is done. However, in the trivalent setting there is more to the picture: you also want your audience to recept things that have value 1/2, which can’t be achieved by asserting them or denying them. The purpose of communication is to induce *correct* doxastic states in your audience, where a doxastic state of acceptance, rejection or receptance in s is correct iff s has value 1, 0 or 1/2 respectively. If you instead absert sentences like the liar, and your audience believes you’re being cooperative, they will adopt the correct doxastic attitude of receptance.

This, I claim, all follows quite naturally from our reading of 1/2 as a third truth value. The important question is: how does this help us with the expressive problems encountered earlier? The idea is that in this setting we can *correctly* communicate our theory of truth using the speech acts of assertion, denial and absertion, and we can have correct beliefs about the world by also recepting some sentences as well as accepting and rejecting others. The problem with the earlier interpretations was that we could not correctly communicate the idea that the liar has value 1/2 because it was taken for granted that to correctly communicate this to someone involved making them accept it. On this interpretation, however, to correctly express the view requires only that you absert the sentences which have value 1/2. Of course any sentence that says of another sentence that it has value 1/2 has value 1/2 itself, so you must absert, not assert, those too. But this is all to be expected when the objective of expressing your theory is to communicate it correctly, and communicating correctly involves more than just asserting truthfully.

Assertion in this theory behaves much like it does in the paracomplete theory that Field describes, however some of the things Field suggests we should reject we should absert instead (such as the liar.) To get the idea, let me absert some rules concerning absertion:

  • You can absert the liar, and you can absert that the liar has value 1/2.
  • You can absert that every sentence has value 1, 0 or 1/2.
  • You ought to absert any instance of a classical law.
  • Permissible absertion is not closed under modus ponens.
  • If you can permissibly absert p, you can permissibly absert that you can permissibly absert p.
  • If you can absert p, then you can’t assert or deny p.
  • None of these rules are assertable or deniable.

(One other contrast between this view and the no-proposition view is that it sits naturally with a more truth functionally expressive logic. The no-proposition view is often motivated by the motivation for the Kleene truth functions: a three valued function that behaves like a particular two valued truth function on two valued inputs, and has value 1/2 when the corresponding two valued function could have had either 1 or 0 depending on how one replaced 1/2 in the three valued input with 1 or 0. \neg, \vee are expressively adequate with respect to the Kleene truth functions so defined. However, Kripke’s construction works with any monotonic truth function (monotonic in the ordering that puts 1/2 at the bottom and 1 and 0 above it but incomparable to each other) and \neg, \vee are not expressively complete w.r.t. the monotonic truth functions. There are monotonic truth functions that aren’t Kleene truth functions, such as “squadge”, which puts 1/2 everywhere that Kleene conjunction and disjunction disagree, and puts the value they agree on elsewhere. Squadge, negation and disjunction are expressively complete w.r.t. monotonic truth functions.)
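The claims about squadge are easy to machine-check. A small Python sketch (encoding mine, with 1/2 written as 0.5): squadge is monotonic in the information order, but it cannot be a Kleene truth function, since compositions of the Kleene connectives are classical on classical inputs while squadge takes value 1/2 at (1, 0).

```python
import itertools

U = 0.5                                    # the third value, 1/2
def leq(a, b):                             # information order: 1/2 below 0 and 1
    return a == U or a == b

def conj(a, b): return min(a, b)           # strong Kleene conjunction (0 < 1/2 < 1)
def disj(a, b): return max(a, b)           # strong Kleene disjunction

def squadge(a, b):
    # the agreed value where Kleene conjunction and disjunction agree, else 1/2
    return conj(a, b) if conj(a, b) == disj(a, b) else U

vals = (0, U, 1)
# monotonic: refining the inputs (moving up from 1/2) never degrades the output
for a, b, a2, b2 in itertools.product(vals, repeat=4):
    if leq(a, a2) and leq(b, b2):
        assert leq(squadge(a, b), squadge(a2, b2))

# not a Kleene truth function: anything built from neg/disj is classical on
# classical inputs, whereas squadge gives 1/2 on the classical input (1, 0)
assert squadge(1, 0) == U
```

The second assertion is the whole point: (1, 0) has no refinement other than itself, so a Kleene function must take a classical value there, and squadge doesn’t.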


Is ZFC Arithmetically Sound?

February 12, 2010

I recently stumbled across this fascinating discussion on FOM. The question at stake: why should we believe that ZFC doesn’t prove any false statements about numbers? That is, while of course we should believe ZFC is consistent and \omega-consistent, that is no reason to expect it not to prove false things: perhaps even false things about numbers that we could, in some sense, verify.

Of course – the “in some sense” is important, as Harvey Friedman stressed in one of the later posts. After all ZFC can prove everything PA can, so whatever the false consequences of ZFC are, we couldn’t prove them false in PA. There were a number of interesting suggestions. For example it might prove the negation of something we have lots of evidence for (e.g. something like Goldbach’s conjecture where we have verified lots of its instances – except unlike GC it can’t be \Pi^0_1, since an \omega-consistent theory cannot prove a false \Sigma^0_1 sentence.) Or perhaps it would prove that some Turing machine halts, when the machine would in fact never halt if we were to build and run it. There’s a clear sense that it’s false that the TM halts, even though we can’t verify it conclusively.

Anyway, while reading all this I became quite a lot less sure about some things I used to be pretty certain about. In particular, a view I had never thought worth serious consideration: the view that there isn’t a determinate notion of being ‘arithmetically sound’. Or more transparently, the view that there’s no such thing as *the* standard model of arithmetic, i.e. there are lots of equally good candidate structures for the natural numbers, and that there’s no determinate notion of true and false for arithmetical statements. Now that I have given it fair consideration I’m actually beginning to be swayed by it. (Note: this is not to say I don’t think there’s a matter of fact about statements concerning certain physical things like the ordering of yearly events in time, or whether a physical Turing machine will eventually halt. It’s just that I think this could turn out to be contingent. It’ll depend, I’m guessing, on the structure of time and the structure of space in which the machine tape is embedded. Thus, on this view, arithmetic is like geometry – there is no determinate notion of true-for-geometry, but there is a determinate notion of true of the geometry of our spacetime, which actually turns out to be a weird geometry.)

Something that would greatly increase my credence in this view would be if we could find a pair of “mysterious axioms”, (MA1) and (MA2), with the following properties. (a) They are like the continuum hypothesis, (CH), in that they are independent of our currently accepted set theory, say ZFC plus large cardinals, and, like (CH), it is unclear how things would have to be for them to be true or false. (b) Unlike (CH) and its negation, (MA1) and (MA2) disagree about some arithmetical statement.

Let me first say a bit more about (a). On some days of the week I doubt there are any sets, or that there are as many things as there would need to be for there to be sets. However I believe in plural quantification, and believe that if there *were* enough things, then we could generate models for ZFC just by considering pluralities of ordered pairs. But even given all that I don’t think I know what things would have to be like for (CH) to be true. If there is a plurality of ordered pairs that satisfies ZF(C), then there is one that satisfies ZFC+CH, namely Gödel’s constructible universe, and also one that doesn’t satisfy CH. So even given we have all these objects, it is not clear which relation should represent membership between them. I can only think of two reasons to think there is a preferred relation: (1) if there were a perfectly natural relation, membership, between these objects which somehow set theorists are able to latch onto and intuit things about from their armchair or (2) there is only one such relation (up to isomorphism anyway) compatible with the linguistic practices of set theorists. Neither of these seem particularly plausible to me.

Now let me say a bit about (b). Note firstly that Con(ZFC) is an arithmetical statement independent of ZFC. However it is not like (CH), in that we have good reason to believe its negation is false. And more to the point, its negation is inconsistent with there being any inaccessibles. (MA1) and (MA2) are going to have to be subtler than that.

It is also instructive to consider the following argument that ZFC *is* arithmetically sound. Suppose it’s determinate that there’s an inaccessible (a reasonable assumption, if we grant there are enough things, and that the truth of these claims are partially fixed by the practices of set theorists.) Let \kappa be the first one. Then V_\kappa is a model for ZFC which models every true arithmetical statement (because the natural numbers are an initial segment of \kappa [edit: and arithmetical statements are absolute].) So ZFC cannot prove any false arithmetical statement. That is, determinately, ZFC is arithmetically sound. And all we’ve assumed is that it’s determinate that there’s an inaccessible.
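Schematically (my formalization of the argument just given, using the absoluteness fact noted in the edit):

```latex
\begin{align*}
\text{(i)}\quad & \kappa \text{ inaccessible} \;\Rightarrow\; V_\kappa \models \mathrm{ZFC}.\\
\text{(ii)}\quad & \text{the natural numbers of } V_\kappa \text{ are the true natural numbers, and arithmetical}\\
& \text{statements are absolute, so for arithmetical } \phi:\; V_\kappa \models \phi \iff \phi \text{ is true.}\\
\text{(iii)}\quad & \mathrm{ZFC} \vdash \phi \;\Rightarrow\; V_\kappa \models \phi \;\Rightarrow\; \phi \text{ is true; so ZFC proves no false arithmetical statement.}
\end{align*}
```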

Now I find this argument convincing. But clearly this doesn’t prove that every arithmetic statement is determinate. All it shows is that arithmetic is determinate if ZFC is. But (CH) has already brought the antecedent into doubt! So although V_\kappa determinately decides every arithmetical statement correctly, it is still indeterminate what V_\kappa makes true. That is, (MA1) and (MA2) disagree not only over some arithmetical statement, but also over whether V_\kappa makes that statement true.

Now maybe there isn’t anything like (MA1/2). Maybe we will always be able to find a clear reason to accept or reject any set theoretic statement that has consequences for arithmetic. But I see absolutely no good reason to think that there won’t be anything like (MA1/2). To make it more vivid, there are these really really weird results from Harvey Friedman showing that simple combinatorial principles about numbers imply all kinds of immensely strong things about large cardinals. While these simple principles about numbers look determinate they imply highly sophisticated principles that are independent of ZFC. I see no reason why someone might not find a simple number theoretic principle that implies another continuum hypothesis type statement. And in the absence of face value platonism – *a lot* of objects, and a uniquely preferred membership (perhaps natural) relation between them – it is hard to think how these statements could be determinate.


Truth as an operator and as a predicate

November 5, 2009

Suppose we add to the propositional calculus a new unary operator, T, whose truth table is just the trivial one that leaves the truth value of its operand untouched. By adding

  • (Tp \leftrightarrow p)

to a standard axiomatization of the propositional calculus we completely fix the meaning of T. Moreover this is a consistent classical account of truth that gives us a kind of unrestricted “T-schema” for the truth operator.

On the face of it, then, it seems that if we treat truth as an operator operating on sentences rather than a predicate applying to names of sentences we somehow avoid the semantic paradoxes. But this seems almost like magic: both ways of talking about truth are supposed to be expressing the same property – how could a grammatical difference in their formulation be the true source of the paradox?

My gut feeling is that there isn’t anything particularly deep about the consistency of the operator theory of truth: it just boils down to an accidental grammatical fact about the kinds of languages we usually speak. The grammatical fact is this. One can have syntactically simple expressions of type e but not of type t. Without the type theory jargon this just means we can have names that can be the argument of a predicate but not “names” that can be the argument of an operator. Call these latter kinds of expressions “name*s”. If p is a name* then \neg p is grammatically well formed and is evaluated the same as \neg \phi, where \phi is whatever sentence p refers* to. If we pick p so that it refers* to “\neg p” then we are in just the same predicament we were in the case where we were considering names and treating truth like a predicate: there one could simply pick a constant c and stipulate that it refers to the sentence “~Tr(c)”.

We could make this a little more precise. By restricting our attention to languages without name*s we’re remaining silent about propositions that we could have expressed if we removed the restriction. Indeed, there is a natural translation between operator talk (in the propositional language with truth described at the beginning) and predicate talk. So, on the face of it, it seems we could make exactly the same move in the predicate case: accept only sentences that are translations of sentences we accept. The natural translation I’m referring to is this:

  • p^* \mapsto p
  • (\phi \wedge \psi)^* \mapsto (\phi^*\wedge\psi^*)
  • (\neg \phi)^* \mapsto \neg \phi^*
  • (T\phi)^* \mapsto Tr(\ulcorner\phi^*\urcorner)
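The translation is just a recursion on formula shape. A Python sketch (encodings mine; I write ('quote', f) for the quotation \ulcorner f\urcorner):

```python
# Formulas: ('p', i) propositional letter, ('neg', f), ('and', f, g), ('T', f).
# The map * sends the operator language into the predicate language, turning
# T f into the predicate Tr applied to a quotation of f*.

def translate(f):
    kind = f[0]
    if kind == 'p':
        return f
    if kind == 'neg':
        return ('neg', translate(f[1]))
    if kind == 'and':
        return ('and', translate(f[1]), translate(f[2]))
    if kind == 'T':
        return ('Tr', ('quote', translate(f[1])))   # Tr of the quotation of f*

print(translate(('T', ('neg', ('T', ('p', 1))))))
# ('Tr', ('quote', ('neg', ('Tr', ('quote', ('p', 1))))))
```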

Here’s a neat little fact which is quite easy to prove. Let M be a model of the propositional calculus (a truth value assignment.)

Theorem. \phi is the translation of a formula true in M if and only if \phi appears in Kripke’s minimal fixed point construction using the weak Kleene valuation with ground model M.

Note that, because we don’t have quantifiers, the construction closes off by stage \omega, so we can prove the right-left direction by induction over the finite initial stages of the construction. Left-right is an induction over formula complexity.

If the rule is to simply reject all sentences which aren’t translations of an operator sentence then it appears that the neat classical operator view is really just the well known non-classical view based on the weak Kleene valuation scheme. It is well known that the latter only appears to be classical when we restrict attention to grounded formulae; it seems the appearance is just as shallow for the former view.

Incidentally, note that there’s no natural way to extend this result to languages with quantifiers. This is because there’s no “natural” translation between the propositional calculus with propositional quantifiers and a quantified language with the truth predicate capable of talking about its own syntax.


Rigid Designation

October 23, 2009

Imagine the following set up. There are two tribes, A and B, who up until now have never met. It turns out that tribe A speaks English as we speak it now. However, tribe B speaks English* – a language much like English except it doesn’t contain the names “Aristotle” or “Plato”, and contains two new names, “Fred” and “Ned”.

Suppose now that these two tribes eventually meet and learn each other’s language. In particular tribe A and B come to agree that the following holds in the new expanded language: (1) necessarily, if Socrates was a philosopher, Fred was Aristotle and Ned was Plato, and (2) necessarily, if Socrates was never a philosopher, Fred was Plato and Ned was Aristotle.

Now we introduce to both tribes some philosophical vocabulary: we tell them what a possible world is, and what it means for a name to designate something at a possible world. Both tribes think they understand the new vocabulary. We tell them a rigid designator is a term that designates the same object at every possible world.

Before meeting tribe B, tribe A will presumably agree with Kripke in saying that “Aristotle” and “Plato” are rigid designators, and after learning tribe B’s language will say that “Fred” and “Ned” are non-rigid (accidental) designators.

However tribe B will, presumably, say exactly the opposite. They’ll say that “Aristotle” is a weird and gruesome name that designates Fred in some worlds and Ned in others. Indeed whether “Aristotle” denotes Fred or Ned depends on whether Socrates is a philosopher or not, and, hence, tribe A are speaking a strange and unnatural language.

Who is speaking the most natural language is not the important question. My question is rather: how do we make sense of the notion of ‘rigid designation’ without having to assume English is privileged in some way over English*? And I’m beginning to think we can’t.

The reason, I think, is that the notion of rigid designation (and, incidentally, lots of other things philosophers of modality talk about) cannot be made sense of in the simple modal language of necessity and possibility – the language we start off with before we introduce possible worlds talk. However the answer to whether or not a name is a rigid designator makes no difference to our original language. For any set of true sentences in the simple modal language involving the name “Aristotle” I can produce you two possible worlds models that make those sentences true: one that makes “Aristotle” denote the same individual in every world and one that doesn’t.* If this is the case, how is the question of whether a name is a rigid designator ever substantive? Why do we need this distinction? (Note: Kripke’s arguments against descriptivism do not require the distinction. They can be formulated in pure necessity/possibility talk.)

To put it another way, by extending our language to possible world/Kripke model talk we are able to postulate nonsense questions: questions that didn’t exist in our original language but do in the extended language with the new technical vocabulary. An extreme example of such a question: is the denotation function a set of Kuratowski or Hausdorff ordered pairs? These are two different, but formally irrelevant, ways of constructing functions from sets. The question has a definite answer, depending on how we construct the model, but it is clearly an artifact of our model and corresponds to nothing in reality.

Another question which is well formed and has a definite answer in Kripke model talk: does the name ‘a’ denote the same object in w as in w’. There seems to be no way to ask this question in the original modal language. We can talk about ‘Fred’ necessarily denoting Fred, but we can’t make the interworld identity comparison. And as we’ve seen, it doesn’t make any difference to the basic modal language how we answer this question in the extended language.

[* These models will interpret names from a set of functions, S, from worlds to individuals at that world and quantification will also be cashed out in terms of the members of S. We may place the following constraint on S to get something equivalent to a Kripke model: for f, g \in S, if f(w) = g(w) for some w then f=g.

One might want to remove this constraint to model the language A and B speak once they've learned each other's language. They will say things like: Fred is Aristotle, but they might have been different. (And if they accept existential generalization they'll also say there are things which are identical but might not have been!)]
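The claim that rigidity makes no difference to the basic modal language can be illustrated with a small model check in Python (the toy models and encoding are mine): two models of a language with one name and one predicate, one interpreting the name rigidly and one not, verify exactly the same formulas at every world, because the second model is just the first with individuals renamed world by world.

```python
import random

# Formulas: ('P',) for "P(a)", ('neg', f), ('and', f, g), ('box', f).
# den[w] is the name's denotation at world w; P[w] is P's extension at w.
def evaluate(f, w, den, P, R):
    kind = f[0]
    if kind == 'P':
        return den[w] in P[w]
    if kind == 'neg':
        return not evaluate(f[1], w, den, P, R)
    if kind == 'and':
        return evaluate(f[1], w, den, P, R) and evaluate(f[2], w, den, P, R)
    if kind == 'box':
        return all(evaluate(f[1], v, den, P, R) for v in R[w])

worlds = [0, 1]
R = {0: [0, 1], 1: [0, 1]}
# Model A: the name denotes individual 1 rigidly.  Model B: a "gruesome" name
# denoting 1 at world 0 and 2 at world 1, with P renamed world by world to match.
denA, PA = {0: 1, 1: 1}, {0: {1}, 1: set()}
denB, PB = {0: 1, 1: 2}, {0: {1}, 1: set()}

def random_formula(depth):
    if depth == 0:
        return ('P',)
    kind = random.choice(['P', 'neg', 'and', 'box'])
    if kind == 'neg':
        return ('neg', random_formula(depth - 1))
    if kind == 'and':
        return ('and', random_formula(depth - 1), random_formula(depth - 1))
    if kind == 'box':
        return ('box', random_formula(depth - 1))
    return ('P',)

random.seed(1)
for _ in range(500):
    f = random_formula(4)
    for w in worlds:
        assert evaluate(f, w, denA, PA, R) == evaluate(f, w, denB, PB, R)
```

The basic modal language cannot see interworld identities, so any world-by-world renaming of individuals preserves truth, and the two models are indistinguishable from inside the language.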



October 19, 2009

A real post should be on the way soon. A few links in the meantime:

  • If you haven’t seen it already there is a petition about allocating research funds on the basis of “impact” rather than academic merit. Please sign.
  • JC Beall points me to his new webpage. Lots of interesting looking papers.
  • I know it’s done the rounds already but this xkcd comic really made me chuckle!


August 19, 2009

I’ve been wondering just how much content there is to the claim that vagueness is truth on some but not all acceptable ways of making the language precise. It is well known that both epistemicists and supervaluationists accept this, so the claim is clearly not substantive enough to distinguish between *these* views. But does it even commit us to classical logic? Does it rule out *any* theory of vagueness?

If one allows quantification over non-classical interpretations it seems clear that this doesn’t impose much of a constraint. For example, if we include among our admissible interpretations Heyting algebras, or Lukasiewicz valuations, or what have you, it seems clear that we needn’t (determinately) have a classical logic. Similar points apply if one allowed non-classically described interpretations: interpretations that perhaps use bivalent classical semantics, but are constructed from sets for which membership may disobey excluded middle (e.g., the set of red things.)

In both cases we needn’t get classical logic. But this observation seems trite; and besides, these are not really ‘ways of making the language precise’. A precise non-bivalent interpretation is presumably one in which every atomic sentence letter receives value 1 or 0, thus making it coincide with a classical bivalent interpretation – and presumably no vaguely described precisification is a way of making the language completely precise either.

So a way of sharpening the claim I’m interested in goes as follows: vagueness is truth on some but not all admissible ways of making the language precise, where ‘a way of making the language precise’ (for a first-order language) is a Tarskian model constructed from crisp sets. A set X is crisp iff \forall x(\Delta x\in X \vee \Delta x \not\in X). This presumably entails \forall x(x\in X \vee x\not\in X) which is what crispness amounts to for a non-classical logician. An admissible precisification is defined as follows

  • v is correct iff the schema v \models \ulcorner \phi \urcorner \leftrightarrow \phi holds.
  • v is admissible iff it’s not determinately incorrect.

Intuitively, being correct means getting everything right – v is correct when truth-according-to-v obeys the T-schema. Being admissible means not getting anything determinately wrong – i.e., not being determinately incorrect. Clearly this is a constraint on a theory of vagueness, not an account. If it were an account of vagueness it would be patently circular, as both ‘crisp’ and ‘admissible’ are defined in terms of ‘vague’.

Now that I’ve sharpened the claim, my question is: just how much of a constraint is this? As we noted, this seems to be something that every classicist can (and probably should) hold, whether they read \nabla as a kind of ignorance, semantic indecision, ontic indeterminacy, truth value gap, context sensitivity or as playing a particular normative role with respect to your credences, to name a few. Traditional accounts of supervaluationism don’t really say much about how we should read \nabla, so the claim that vagueness is truth on some but not all admissible precisifications doesn’t say very much at all.

But what is even worse is that even non-classical logicians have to endorse this claim. I’ll show this is so for the Lukasiewicz semantics but I’m pretty sure it will generalise to any sensible logic you’d care to devise. [Actually, for a technical reason, you have to show it's true for Lukasiewicz logic with rational constants. This is no big loss, since it's quite plausible that for every rational in [0,1] some sentence of English has that truth value: e.g. the sentences “x is red” for x ranging over shades in the spectrum between orange and red would do.]

Supposing that \Delta \phi \leftrightarrow \forall v(admissible(v) \rightarrow v \models \ulcorner \phi \urcorner) has semantic value 1, you can show, with a bit of calculation, that this requires that \delta \|\phi\| = inf_{v\not\models \phi}(\delta(1-inf_{v \models \psi}\|\psi\|)), where \delta is the function interpreting \Delta. Assuming that \delta is continuous this simplifies to: \delta\|\phi\| = \delta(1-sup_{v\not\models \phi}inf_{v\models \psi}\|\psi\|). Now since no matter what v is, so long as v \not\models \phi, we’re going to get that inf_{v \models \psi}\|\psi\| \leq \|\neg\phi\|, since v is classical (i.e. v \models \neg\phi.) But since we added all those rational constants the supremum of all these infs is going to be \|\neg\phi\| itself. So \|\phi\| = 1-sup_{v\not\models \phi}inf_{v\models \psi}\|\psi\| no matter what.
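Displayed as a chain (a sketch of the calculation just described, with continuity of \delta used at the second step and the rational constants at the third):

```latex
\begin{align*}
\delta\|\phi\| &= \inf_{v\not\models\phi}\,\delta\Bigl(1-\inf_{v\models\psi}\|\psi\|\Bigr)
  && \text{(unpacking admissibility)}\\
&= \delta\Bigl(1-\sup_{v\not\models\phi}\,\inf_{v\models\psi}\|\psi\|\Bigr)
  && \text{(continuity and monotonicity of } \delta\text{)}\\
&= \delta\bigl(1-\|\neg\phi\|\bigr) \;=\; \delta\|\phi\|
  && \text{(rational constants force } \sup = \|\neg\phi\|\text{)}
\end{align*}
```

Since the chain ends where it began, the supposition holds automatically: nothing further is demanded of \delta beyond continuity.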

So if one assumes that \delta is continuous it follows that determinacy is truth in every admissible precisification (and that vagueness is truth in some but not all admissible precisifications.) The claim that \delta should be continuous amounts to the claim that a conjunction of determinate truths is determinate, which as I’ve argued before, cannot be denied unless one either denies that infinitary conjunction is precise or that vagueness is hereditary.

