## Vagueness and Boundaries

April 2, 2008

Lately I’ve been trying to make sense of the idea that vague concepts are boundaryless. This is very hard to do when the semantic value of a predicate is a set, fuzzy or sharp, because in both cases there is always a sharp cut-off point between being in the set (to some degree or other) and not being in it.

I’ve been thinking that maybe we shouldn’t be thinking about things set-theoretically, but topologically – after all, topology is the mathematical study of boundaries. Anyway, I’m not sure if this will work, but I’m going to throw it out there. I certainly haven’t thought it through!

Let’s start off with a metric space $\langle S, c \rangle$. $S$ is our set of states – for now we will work in a supervaluationist framework and take them to be precisifications of the language. $c(x, y)$ is a closeness metric telling us how similar (measured in real numbers) two states are to one another. Now let $\mathcal{O}$ be the lattice of regular open sets from the standard ball topology over $S$. Elements of $\mathcal{O}$ will serve as the denotata of sentences of our language. They are desirable for two reasons. Firstly, they are necessarily ‘blurry’: they are regions of our precisification space, as opposed to ‘points’. A region represents a range of precisifications, whereas a point would represent a maximally specific way in which the language is completely sharpened (we assume for now that our language contains only vague predicates). Secondly, there are no boundaries between these regions. For example, there is no boundary between the region corresponding to x being red and its complement in $\mathcal{O}$, the region corresponding to x not being red. We want to capture the idea that there is no last red thing and no first non-red thing (across a rainbow, for example). Although we constructed these regions from maximally specific precisifications, the idea is to take the regions as primitive.
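To get a feel for what makes a set regular open, here is a toy sketch of my own (not part of the proposal above): I represent an open subset of the real line as a finite union of disjoint open intervals, and implement $\sigma = Int \circ Cl$ for that special case, where it amounts to merging intervals whose closures touch. The ‘crack’ at 1 in $(0,1) \cup (1,2)$ – a boundary point sitting inside the set’s closure – gets filled in, which is exactly why $(0,1) \cup (1,2)$ is open but not regular open.

```python
# Toy model: an open subset of the line is a finite union of disjoint
# open intervals (a, b), stored as tuples. (My own illustration.)

def normalize(ivs):
    """Sort the intervals and drop any empty ones."""
    return sorted((a, b) for a, b in ivs if a < b)

def sigma(ivs):
    """Int(Cl(U)) for a finite union of open intervals: take the closed
    intervals [a, b], merge any that touch or overlap, then reopen them.
    Isolated boundary points inside the closure disappear."""
    merged = []
    for a, b in normalize(ivs):
        if merged and a <= merged[-1][1]:  # [a, b] meets the previous closure
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

# (0,1) u (1,2) is open but NOT regular open: sigma fills the crack at 1.
print(sigma([(0, 1), (1, 2)]))  # [(0, 2)]
# (0,1) u (3,4) is already regular open: sigma leaves it alone.
print(sigma([(0, 1), (3, 4)]))  # [(0, 1), (3, 4)]
```

So the regular open sets are exactly the open sets with no such internal cracks – which is what lets the ‘no boundary’ idea get going.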

We shall give the semantics as follows. Define $\sigma := Int \circ Cl$, the composition of the interior and closure operations, and let $D$ be the domain of discourse.

• $[\![ P^n_i ]\!] : D^n \rightarrow \mathcal{O}$
• $[\![ \neg \phi ]\!] := \sigma(S \setminus [\![ \phi ]\!])$
• $[\![ \phi \wedge \psi ]\!] := [\![ \phi ]\!] \cap [\![ \psi ]\!]$
• $[\![ \phi \vee \psi ]\!] := \sigma([\![ \phi ]\!] \cup [\![ \psi ]\!])$
• $[\![ \forall x\,\phi ]\!] := \sigma(\bigcap_{x \in D} [\![ \phi(x) ]\!])$
• $[\![ \exists x\,\phi ]\!] := \sigma(\bigcup_{x \in D} [\![ \phi(x) ]\!])$

The logic will be classical, since $\mathcal{O}$ is a complete Boolean algebra under the operations given above. It will, however, become non-classical if we add in precise predicates (whose semantic values can be non-regular open sets). I don’t know how this particular semantics will pan out in the long run, though. For example, with predicates like “red” it’s ok to have no boundary between the red and the not-red, but with vague discrete predicates, like “small number”, it looks like you might end up with there being no small numbers. Anyway, I was hoping that something in this spirit might be able to put the ‘no boundaries’ conception of vague predicates on a firmer footing, even if it doesn’t ultimately work. Any thoughts would be welcome…
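The classicality claim can be checked concretely in the same toy one-dimensional model as before (again my own sketch, with the precisification space $S$ played by the open interval $(0, 10)$ and sentence denotations played by finite unions of open intervals): negation, conjunction and disjunction as defined above satisfy double negation elimination and excluded middle, even though the ‘red’ region and its negation share no boundary point.

```python
# Toy regular-open semantics over S = (0, 10). (My own illustration.)
S = (0.0, 10.0)

def normalize(ivs):
    return sorted((a, b) for a, b in ivs if a < b)

def sigma(ivs):
    """Int(Cl(U)) for a finite union of open intervals: merge touching pieces."""
    merged = []
    for a, b in normalize(ivs):
        if merged and a <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

def neg(ivs):
    """[[not phi]] = sigma(S \\ [[phi]]). The set-complement is closed,
    so sigma just takes its interior: the open gaps between the intervals."""
    out, lo = [], S[0]
    for a, b in sigma(ivs):
        out.append((lo, a))
        lo = b
    out.append((lo, S[1]))
    return normalize(out)

def disj(u, v):
    """[[phi or psi]] = sigma([[phi]] u [[psi]])."""
    return sigma(u + v)

red = [(0.0, 4.0)]           # precisifications on which 'x is red' holds
print(neg(red))              # [(4.0, 10.0)] -- no shared boundary point at 4
print(neg(neg(red)))         # [(0.0, 4.0)]  -- double negation gives phi back
print(disj(red, neg(red)))   # [(0.0, 10.0)] -- excluded middle holds: all of S
```

Note that the point 4 belongs to neither region, yet $\phi \vee \neg\phi$ still denotes the whole space, because $\sigma$ closes the crack – which is just the algebraic face of ‘no last red thing, no first non-red thing’.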

1. This is very interesting. One might wonder how you would philosophically motivate elements of $\mathcal{O}$ as the “semantic values” of sentences in our language. In the supervaluational case where we supervaluate over valuations defined over the reals, it’s natural to interpret the real numbers as roughly modelling “degrees of truth”. But what would be the analogue here?