Maybe we can think of the verificationist as posing a certain challenge:
Whenever we are in doubt whether a sentence S or expression e is cognitively meaningful, we know that S, and any atomic sentence in which e occurs (transparently), is not observational. Non-observational sentences are cognitively meaningful in a language or theory only if they stand in certain relations to other sentences in the language or the theory. (Just as words become meaningful in virtue of occurring in certain sentential contexts, non-observational sentences become meaningful in virtue of occurring in certain theoretical or linguistic contexts.) The relation of interest is probably something like probabilistic non-independence, although it might turn out to be something slightly different. But it is clear that non-observational sentences do not qualify as cognitively meaningful if they stand in that relation to just any other sentences in just any language or theory. For instance, we might posit a theory, or a language, in which there is a class of observational sentences, and a disjoint class of non-observational sentences. Suppose that the theory or language specifies the probabilistic relationships among the members of the latter class, and that every sentence of the latter class is probabilistically independent of every sentence of the former class. If this is our strongest theory containing the sentences of the latter class, or if there are no other facts about the truth- or assertibility-conditions of these sentences in the language, then it is clear that those sentences do not qualify as cognitively meaningful merely by standing in the probabilistic relations that they do. But then what sort of sentences does a sentence need to be probabilistically non-independent of, in a theory or language, in order to be cognitively meaningful in that theory or language?
The verificationist answer: the observation sentences. The verificationist challenge: what else could it be?
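The verificationist test can be made concrete with a toy model (a sketch of my own; the sentences, worlds, and probabilities are all stipulated for illustration): represent a "theory" as a joint probability distribution over truth-value assignments to a few sentences, and check whether each non-observational sentence is probabilistically non-independent of some observational sentence.

```python
from itertools import product

# Toy "theory": a joint probability distribution over truth-value
# assignments (worlds) to three sentences. O is observational; H1 and H2
# are non-observational. All particular numbers are made up.
SENTENCES = ["O", "H1", "H2"]

def marginal(worlds, s, v):
    i = SENTENCES.index(s)
    return sum(p for w, p in worlds.items() if w[i] == v)

def joint(worlds, s1, v1, s2, v2):
    i, j = SENTENCES.index(s1), SENTENCES.index(s2)
    return sum(p for w, p in worlds.items() if w[i] == v1 and w[j] == v2)

def independent(worlds, s1, s2, tol=1e-9):
    """True iff s1 and s2 are probabilistically independent in this theory."""
    return all(
        abs(joint(worlds, s1, v1, s2, v2)
            - marginal(worlds, s1, v1) * marginal(worlds, s2, v2)) < tol
        for v1, v2 in product([True, False], repeat=2)
    )

# H1 and H2 are correlated with each other, but each is independent of O --
# the structure of the example in the text:
theory = {}
for o in (True, False):
    theory[(o, True, True)] = 0.20
    theory[(o, True, False)] = 0.05
    theory[(o, False, True)] = 0.05
    theory[(o, False, False)] = 0.20
```

On the verificationist test, H1 and H2 come out cognitively meaningless despite standing in probabilistic relations to each other, because neither is probabilistically non-independent of the observational sentence O.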
Tuesday, November 18, 2008
Monday, October 20, 2008
Dispositional Terms and Religious Language
In a discussion of Aquinas' account of religious language, Copleston compares describing a dog as intelligent and describing God as intelligent. Following Aquinas, he calls both descriptions "analogical", apparently indicating by this that the amount of intelligence necessary for a thing to be intelligent is somehow relative to what sort of thing it is. (He also seems to think that, since the two descriptions are analogical, neither assigns "intelligent" its ordinary, literal semantic value. That's weird, but needn't detain us.) He notes a further similarity between the dog's intelligence and God's intelligence. Both are somehow helpfully elucidated by pointing to the material effects they have had - in the former case, on the dog's behavior; in the latter case, on the (putative) goodness and orderliness of Creation. Copleston's attitude seems to be that this elucidation is semantic. By enumerating more and more of (certain of?) the effects of the dog's intelligence or God's intelligence, we characterize with greater and greater precision what "the dog is intelligent" or "God is intelligent" means, or perhaps what people mean when they call the dog or God intelligent.
Still, Copleston holds that, at least in the case of God, this sort of elucidation can't yield an "adequate positive explanation" of (the meaning of) any sentences describing God as intelligent. This called to mind Goodman's work on dispositional terms. Take an uncontroversially dispositional term like "flammable". One of Goodman's ideas (a little roughly) is that "flammable" picks out whatever property a thing has in virtue of which, in certain relevant circumstances, it lights on fire. Importantly, we can call a thing flammable without knowing exactly what that property is, or being able to give an (adequate) positive explanation of what it is in other terms. It would be too harsh, in such a case, to say that we don't know the meaning of "x is flammable", or what people mean when they utter sentences of that form. Discovering that a certain substance contains hydrogen would certainly furnish something we might call an "adequate positive explanation" of its flammability, but that doesn't mean that our flammability-talk, prior to that discovery, was in any sort of semantic error.
There is nothing special about "flammable" here. Roughly, for any set of truths T, and any thing x, we can define a dispositional predicate "F", such that "F(x)" means that x has a certain property F, such that each element of T is an effect of x's having that property. Now, this will raise all sorts of problems if it turns out that the elements of T obtain for reasons that have nothing to do with x. But, in many cases, this way of introducing a dispositional predicate is totally harmless.* I think Copleston should say that God-talk is just such a case. If, like Copleston and Aquinas, we think we know that all sorts of facts about the world are demonstrably effects of God's general nature and particular actions (perhaps logically following from that nature), then, for any class of such facts, we can define a healthy dispositional predicate for God. If, furthermore, we can specify all (or certain interesting subsets) of the effects that elucidate "God is intelligent", then we can give a decent dispositional semantics for that sentence in line with the schema just described. No problems here, as far as I can see.
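The schema can be put a bit more mechanically (a toy sketch of my own; the objects, properties, and "effects" are stipulated, not drawn from the text): given a set of truths T, define a predicate F true of a thing x just in case some property of x is responsible for every element of T.

```python
# Toy model of the dispositional-predicate schema. Nothing here is meant
# as a serious ontology; the entries are stipulated for illustration.

# EFFECTS[(thing, prop)] = the set of truths that obtain in virtue of
# the thing's having that property.
EFFECTS = {
    ("hydrogen gas", "contains hydrogen"): {"ignites near flame",
                                            "explodes when sparked"},
    ("brick", "is dense clay"): {"sinks in water"},
}

def make_dispositional_predicate(truths):
    """Given a set of truths T, return a predicate F such that F(x) holds
    just in case x has some property in virtue of which every element of
    T obtains. F quantifies over properties, so we can apply F to a thing
    without knowing which of its properties does the work."""
    def F(thing):
        return any(truths <= effects
                   for (t, _prop), effects in EFFECTS.items() if t == thing)
    return F

flammable = make_dispositional_predicate({"ignites near flame"})
```

The point of the construction is that `flammable` is defined by existential quantification over the underlying property, which mirrors the Goodmanian observation: we can truly call hydrogen gas flammable without being able to give any positive explanation of the property (containing hydrogen) that grounds the disposition.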
I think all this has confirmed a suspicion I already had. If God-talk is non-cognitive, it probably isn't because we can't make sense of the predicates we apply to God. The problem, rather, is probably that God is a difficult thing to refer to, or that the predicate "is a god" in particular (at least in modern Judaeo-Christian discourse) doesn't contribute to the truth-conditions of sentences or utterances in any obvious way. It certainly isn't obvious that the name "God" or the predicate "is a god" are susceptible of the same analysis we have given of (certain) theological predicates.
______________________
* - It may turn out that our introduction of F fails to cut nature at the joints. This will be the case when the property of x in virtue of which all of the elements of T obtain is highly disjunctive - if several more fundamental or primitive properties of x are individually responsible for several subsets of T. But F-ness doesn't have to be fundamental in order for "F(x)" to be true.
Tuesday, October 7, 2008
Two Types of Lexical Ambiguity?
I'm reading Francois Recanati's "Unarticulated Constituents" and his discussion of the verb "eats" has gotten me thinking. "Eats" can occur both transitively and intransitively. When "eats" occurs transitively, we can represent its extension by eats2 - the set of all ordered pairs of eaters and the food they are eating. When "eats" occurs intransitively, however, Recanati suggests that we represent its extension by eats1 - the set of all eaters. The basis for his suggestion is that, when a speaker utters "Tim eats", the context need not supply any particular food that she wishes to state Tim is eating.
I have my worries about the effectiveness of this case in support of Recanati's larger argument, but I'll assume the analysis for now. The situation is that intransitive "eats" refers to eats1 (or the property whose extension is eats1) and transitive "eats" refers to eats2 (or the relation whose extension is eats2). Is "eats" (lexically) ambiguous? In favor of an ambiguity, note that "eats" can refer to two different relations, whose extensions have fundamentally different structures - one is a class of individuals, the other a class of ordered pairs. The sentence "Eat!" seems to be ambiguous between the two readings, but it is hard to say that this ambiguity is structural - the sentence has no (surface) structure to speak of.
On the other hand, I think we can fix the reference of "eats" generally by a simple rule - if "eats" is intransitive, it refers to (the property whose extension is) eats1, and if "eats" is transitive, it refers to (the property whose extension is) eats2. Notably, sentences with "eats" are ambiguous between an assignment of eats1 or eats2 to "eats" whenever they are structurally ambiguous between a transitive and an intransitive reading.*
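The rule can be sketched in a toy semantics (my own stipulations throughout - the little lexicon, the individuals, and the eating facts): a single clause fixes the reference of "eats" from its syntactic context, and evaluation then proceeds as usual.

```python
# Toy model of the reference-fixing rule for "eats". The individuals and
# eating facts are stipulated for illustration.

EATS1 = {"Tim", "Sue"}                          # eats1: the set of eaters
EATS2 = {("Tim", "apple"), ("Sue", "soup")}     # eats2: (eater, food) pairs

def denotation_of_eats(transitive):
    """The one simple rule: transitive occurrences of "eats" refer to
    eats2, intransitive occurrences refer to eats1."""
    return EATS2 if transitive else EATS1

def evaluate(subject, obj=None):
    """Truth value of "<subject> eats" or "<subject> eats <obj>" in the
    toy model, with the reference of "eats" fixed by syntactic context."""
    ext = denotation_of_eats(transitive=obj is not None)
    return subject in ext if obj is None else (subject, obj) in ext
```

Note that `denotation_of_eats` assigns two structurally different extensions, yet there is only one rule: this is the sense in which "eats" might count as referentially ambiguous without being semantically ambiguous.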
I suppose we can say there are two types of lexical ambiguity - referential ambiguity and semantic ambiguity. "Eats" is referentially ambiguous because it can refer to two sets, which are structurally quite different. It might not be semantically ambiguous because there is one simple rule that either fixes the reference of "eats" in a context, or gives the meaning of "eats" - the word has one reference-fixer, or one meaning.
________________________
* - Consider:
(1) If he eats an apple a dollar.
We can imagine a situation in which (1) is ambiguous between "If he eats an apple, it will cost a dollar" and "If he eats, it will cost an apple a dollar". (I have deliberately removed the punctuation, which would give away the intended reading.) Similarly with "Eat!" above. In these cases, any ambiguity in assigning eats1 or eats2 to "eats" can be chalked up to the structural ambiguity in the sentence.
Tuesday, September 16, 2008
Moore on "Good"
In Moore’s defense of the primitiveness of “good”, he draws an analogy between the good and the yellow:
“My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.” (PE, 7)
What is interesting is not so much the truth of what Moore says as the questions it raises. One pertinent observation is that you can certainly explain what “yellow” means to someone who doesn’t already know it. One such potential explanation is:
(1) “Yellow” (in English) is synonymous with “amarillo” in Spanish.
Other explanations are not clearly either explanations of the meaning of “yellow” or explanations of what yellow is. Consider:
(2) Yellow is the color of objects such as ripe bananas and “Yield” signs.
Or, to give a different sort of explanation of what yellow is, consider:
(3) An object is (the color) yellow just in case it reflects light at a wavelength of roughly 577-597 nm.
In some sense, we could even “explain” (or, perhaps less controversially, teach) what yellow is to any reasonably intelligent pre-linguistic creature that perceives something like color. We could condition assorted blue (or, if you like, any non-yellow) stimuli with some aversive feedback and condition assorted yellow stimuli with some appetitive feedback. After a while, if the animal chose the yellow stimulus over the blue (or non-yellow) stimulus in every case, it would be clear that the animal could tell the difference between yellow things and non-yellow things. If, as seems reasonable, we take being able to tell the difference between yellow and non-yellow things as sufficient for knowing what yellow is, then we would have taught the animal what yellow is (if it didn’t already know). I once heard Marc Hauser describe such an experiment with bees (the colors were blue and red). Note also that this would be an explanation of what yellow is without being an explanation of what “yellow” means.
But note that, in every case, the explanation of what “yellow” means or what yellow is will require that some prior conditions obtain. In order for (1) to work, the student will have to know what “amarillo” means in Spanish. In order for (2) to work in the ordinary way, the student will have to know the color of ripe bananas and “Yield” signs. In that case, if the explanation works, the student will be able to distinguish yellow from non-yellow objects, as long as nothing weird happens to the lighting or her perceptual apparatus. In order for (2) to work in a less ordinary way, the student will have to know only, roughly, what ripe bananas and “Yield” signs are and what things are the same color as ripe bananas and “Yield” signs. In order for (3) to work, the student will have to know what it is to reflect light at 577-597 nm. In order for the explanation to the pre-linguistic creature to work, the creature will have to have whatever ability it is that enables it to be conditioned in the manner described. It might be objected against some of these “explanations” that the knowledge they yield of what yellow is isn’t substantive enough, since they don’t require something like perceptual acquaintance with yellowness. If we want to say that to know what yellow is is to know what it is like to see yellow objects (which might be what Moore had in mind when he was referring to “yellow” as a “simple notion”), then we will have to admit that only some of these explanations will work, and only on certain conditions. In particular, (1) will work only if, so to speak, the student already knows what it is like to see “amarillo” objects; (2) will work only if the student knows what it is like to see (the colors of) ripe bananas and “Yield” signs; and (3) won’t typically work at all. I agree with Moore that the concept of a “simple notion” is probably best explicated or understood in terms of how it can be explained.
If yellow is a simple notion, then if a notion can be explained in just the same ways as yellow, then that notion is simple.
Then the interesting question is, To what extent does the analogy between “good” and “yellow” hold? If the analogy holds all the way – if what goodness is can be explained only in ways precisely parallel to the ways in which what yellow is can be explained – then I think we have to say something like the following.
We can explain what good is or what “good” means to a student in a number of different ways. We can certainly give her a synonym (as in (1) above). Alternatively, we can give her something of the form:
(2′) Goodness is the F of objects such as x, y…
Here, we would have to substitute for “F” some relevant second-order property of goodness (just as color is the relevant second-order property of yellow), and substitute for “x”, “y”, and so forth a sequence of descriptions of sufficiently diverse good objects, actions, or states of affairs. What would make it “sufficiently diverse” would depend on how effective it is in getting the student to understand what goodness is, and also on one’s preferred understanding of knowing what goodness is. Again, we could have some explanation of the form:
(3′) x is good just in case p.
Here we would substitute for p something that holds whenever the substitution instance of x refers to something good.
Obviously, given that the “explanation by synonymy” will only hold in somewhat trivial cases, the difficulty in claiming that the analogy holds in full lies in specifying, or at least giving an existence proof of, the relevant values of the variables in (2′) and (3′). For instance, in the explanation of what yellow is in (2), the description of yellow as a color is doing a lot of work. It is not at all clear that there is any second-order property of goodness that could do a similar job in an explanation of the form (2′). And for explanations of the form (3′), it is clear that any substitution-instance for p will be controversial, if at all plausible.
On the other hand, if the analogy fails, we’ll need some good explanation of why it fails. Also, if these explanations of what good is don’t work, then we’ll need some story about how children come to understand what good is, or, failing that, what “good” means, which we do not yet have.
Anselm, Arguments from Analyticity
Assume that Anselm's argument is valid - that it follows from the definition of "God" that God exists. In that case, couldn't we simply refuse to use the word "God" that Anselm has defined for us? If one of the premises of the argument is a definition, and definitions are constitutive of languages, isn't one rebuttal just not to speak an Anselmian language? If someone defined "phlogiston" so that it necessarily existed, she wouldn't thereby have a knock-down argument against the ontology of modern chemistry. She would have a linguistic quirk to be corrected or ignored.
A similar objection comes to mind in certain ethical and epistemological discussions, where it is argued that some substantive theory is just analytic. Someone might argue that it follows from the meanings of the words "will" and "good" that the object of ethical judgment is always the will, or that it follows from the meanings of the words "state" and "know" that we should state only what we know to be true. In response to this, can't we always simply refuse to use the relevant words with the meanings they are taken to have? Sometimes philosophers focus so much on the meanings and entailments associated with particular words because we're interested in better understanding the conceptual scheme we actually employ. But at some point, in these sorts of discussions of folk vocabulary, isn't it available to us to make new concepts? If we're clever enough, can't we cook up a "know" or a "good" that is better suited to our purposes than the "know" and "good" we have received from our ancestors?
I've assumed here that, at the crucial point, someone can't make a transcendental argument that we have to keep our folk vocabulary just as it is. But what form could such an argument take?
Friday, August 29, 2008
Two Worries for MacKinnon
I’ve been reading Catharine MacKinnon’s Feminism Unmodified, and had a couple of ideas. One was in response to a bit from the great talk, “Desire and Power”.
“Similarly, to say that not only women experience something – for example, to suggest that because some men are raped rape is not an act of male dominance [over a victim in a female social role?] – only suggests that the status of women is not biological. Men can be feminized, too, and they know they are when they are raped.” (56)
MacKinnon seems to be claiming that a man, when raped, is always raped as feminine, i.e. that his experience of the rape, or the status of the rape, is as of a woman’s experience or status in society. This just seems to me quite false. Surely, if there are such things as the social role of woman and the social role of man, then there are socially womanly characteristics in a man’s experience of being raped – powerlessness, enforced silence, passiveness, penetrability, perhaps being taken as sexually available regardless of one’s own interests. But there must be distinctly male characteristics of this experience, and I think these are socially, and not (or not just) biologically, male characteristics. Men are probably more likely to be believed than women when they report being raped, but they presumably still fail to report some rapes. I think (and MacKinnon should think) that this would typically be for socially male reasons – a desire not to be seen as powerless, not to be treated as a victim. If a man is ashamed of being raped, I imagine it is not typically shame at being ruined, as a woman might be socially pressured to feel. It would typically be shame, again, for distinctly male reasons – failure to overcome one’s rapist, for instance. This is a small point, though.
Another, bigger problem with MacKinnon’s theory is that it seems incapable of explaining why rape, child molestation, incest, prostitution, and pornography (and even in some cases job- and pay-discrimination) are taboo and frowned on. If the dominance of man over woman is best evidenced in society by male-female sexual violence, and feminism is necessary because, among other things, the dominance of man would go unchallenged except by feminism, then why is it that non-feminist forces suppress male-female sexual violence, even to the extent that they do? The equality principle can’t explain these taboos, on her interpretation of how it is accepted in society, since MacKinnon wants to say that the deleterious function of the equality principle is that women and men are often too unlike, especially with respect to their proneness to sexual violence, to be treated as likes. I imagine MacKinnon might say that there aren’t any (or very many) non-feminist forces suppressing male-female sexual violence, but I would disagree. This suppression is preached in church, enshrined in the law (even if the law isn’t systematically enforced), and acknowledged, at least in cases other than pornography, by society’s male-dominated moral discourse (a discourse which MacKinnon apparently believes also to be male in its characteristic social role). The force of this suppression is non-trivial. It is not as if there is a clear explanation of these phenomena that MacKinnon’s theory just can’t countenance; I think these facts really are difficult to explain. What seems to be the case, then, is that there are social forces other than feminism that combat at least some of the more pernicious effects of male dominance. (We might, alternatively, take the above to suggest that male dominance is not, in fact, expressed in those instances of sexual violence which (male) society seems to disapprove of, but I think this would be too severe.)
Isolating what these forces are might be of considerable use to feminism. What forces guide the church, the law, and male moral discourse to oppose sexual violence? How, if at all, can feminism put these forces to its own use?
Monday, August 25, 2008
Philosophers' Carnival LXXVI
Welcome, readers, to the 76th fortnightly Philosophers’ Carnival!
Enigman asks what philosophical reasons mathematicians have for assuming the axiom of infinity in his post Philosophy of Mathematics. It’s not clear what sorts of reasons he’s looking for; fundamental questions about mathematical truth and the role of axioms seem to be lurking just below the surface here. The comments thread hasn’t grown prohibitively long yet, so hop on over and pitch in your $.02.
Alexander Pruss criticizes several ways of construing the supernaturalness of magic in Magic, science, and the supernatural. I’m not convinced by a lot of what he says, but the discussion is very clear and open-minded. Peruse the other entries while you’re there, if you haven’t visited before – it’s a nice blog.
Avery Archer works on a theory of rational agency in Why Questions and Rational Agents (more about the latter than the former). I like this post, even though I don’t like a lot (of the little) I have read elsewhere on rationality. It’s not clear to me that the appearances of the good (allegedly) involved in desire are reflections of a perspective held by some subsystem of an agent which is involved in producing the agent’s desires, just because I’m not sure that subsystems of agents are the sorts of things that can have perspectives. This might be a quibble. When a person’s reasoned course of action conflicts with her desires, there obviously does seem to be some sub-agential system bearing some interesting relationship to the course of action desired but not taken, or a mental representation of that course of action. It might be useful to spell out what is not quite “perspectival” about that relationship, though. I have more to say about this, but you don’t need to read it.
Over at Possibly Philosophy, Andrew Bacon weighs in on Counterexamples to Modus Ponens. I’m not sure I understand why he thinks that a syntactic characterization of modus ponens won’t work, and I don’t understand accessibility (between possible worlds) well enough to follow the rest of the argument. The McGee counterexample is super-interesting, though, and deserves attention from those of you out there with more logical competence than your humble host.
Thom Brooks of The Brooks Blog lets us in on his Five Secrets to Publishing Success, published on InsideHigherEd.com. Helpful to those looking for, well, publishing success.
Richard Chappell offers a brief but convincing discussion of Fair Shares and Others’ Responsibilities. He argues that, in the interest of fairness, we should pick up the slack for others’ moral failings. I think I agree, although I do not live up to the conclusion in my own life. Also, it’s not clear to me how well this sits with Richard’s views on the “demandingness objection” and the permissibility of living a basically decent life expressed here.
Bryan Norwood presents some objections to epistemological internalism, with an alternative, in Internalist Justification vs. Virtuoso Expertise. There is a lot I don’t understand here – the distinction between subjective and objective blame, the relationships between foundationalism and this distinction, and the relation between internalism and K = JTB. Still, I think there are some good ideas about epistemic blameworthiness brewing here.
Chris Hallq discusses Gettier and the purpose of analyzing “knowledge” in The case against Gettier. Some of the literature on what knowledge is for – the relation between knowledge and assertion, or knowledge and the attribution of other factive mental states – could help here. Still, the basic point, that philosophers interested in a concept need to keep in mind the distinctive intended uses of the concept, is worth reiterating.
Lastly, Gualtiero Piccinini disambiguates “connectionism” for us, and spells out some of the morals of the disambiguation in The Ambiguity of "Connectionism". I was taught that connectionism is the view that the brain does most everything using Parallel Distributed Processing, but these other senses of “connectionism” are useful to distinguish as well.
That wraps up this edition of the Philosophers’ Carnival. If you’re still jonesing for more philosophy after all that, I invite you to check out some of the posts here on Think It Over. And, as always, keep your eye out for the next edition upcoming at Kenny Pearce’s blog.
Sunday, August 24, 2008
What to Expect from a Theory of Meaning
It’s clear that different thinkers want different things from a semantic theory of a language, or a theory of meaning. I want to schematize the different possible desiderata for a theory of meaning. Here is the beginning of a schema, and a comment on one of its problems.
A good semantic theory (where a theory is construed as a class of sentences) of a language L is/expresses:
(a) What we must understand
(b) What we must know
(c) What we must believe
(d) What we must know-true
(e) What we must believe-true
(f) What we must act as if true
in order to
(1) Understand the sentences of L
(2) Know the meanings of the sentences of L.
A conception of the goals of semantic theory comes from picking "is" or "expresses" in the first clause, one item from the lettered list, and one item from the numbered list.
I include (f) for those who think that semantics is a branch of sociology, not psychology – who think that the meanings of sentences are not mental representations or attitudes towards mental representations, or are not best studied through mental representations of meaning or attitudes towards those representations, or are not determined, in any interesting sense, by mental representations or individuals’ attitudes towards them. But I feel like, to accommodate just these sorts of folks, there should be something corresponding to (f) in the numbered list. “Count as a(n expert) speaker of L” or “Count as a member of the linguistic community centered around L” is too strong, because some phonological or pragmatic know-how or behavior goes into these things, but is not properly semantic. I’m at a loss.
There are also more ways of clarifying what we want from a semantic theory by placing different sorts of restrictions on the values of L – idiolects, dialects, the verbal behavior of maximal sets of mutually intelligible speakers, or whatever.
Wednesday, August 6, 2008
Explications and Empirical Revision
In what senses are explications open to empirical revision?
Say you have a term t and an explication t` of that term. For instance, t could be “heat” and t` could be “mean kinetic energy”. Or t could be “volume” and t` could be “amplitude”. Or t could be “more probable than” and t` could be “has a greater limiting relative frequency than”.
One obvious sense in which the explication t` could be empirically revised is that we could have some theory T1 in which t` occurs essentially, and then, in light of new empirical evidence, we replace T1 with some theory T2 in which t` does not occur. For instance, we could have a Newtonian mechanical theory, in which “heat” is explicated with a certain definition of “kinetic energy”, and (loosely speaking) the evidence could suggest that we replace it with a relativistic theory, in which the definition changes. Or the evidence could suggest that some classical theory of statistical mechanics be replaced by a quantum mechanical theory, in which (so I understand) the frequentist explication of "probability" is not appropriate.
Another slightly stronger sense in which t` could be empirically revised would be this. Suppose we have some theory T containing the term t, and we replace every occurrence of t in T with the explication t`, producing the theory T`. Then, since T is presumably more vague or ambiguous than T`, there is some conclusive evidence against T` which is not conclusive evidence against T. If we discover just this evidence, then we should abandon T`, but we should not necessarily abandon T. I imagine that some would want to say, in this situation, that they had learned that they had chosen the wrong explication. This would be especially tempting if there were another explication t`` of t, such that the theory T``, gotten by replacing every occurrence of t in T with t``, were true, or supported by all of the relevant evidence at hand.
I feel like these senses aren’t strong enough to capture what someone might want to express by saying that explications are open to empirical revision. An explication empirically revised in this stronger sense is revised not merely because the theories in which it occurs are found to be false, nor merely because some competing explication better preserves the truth of theories in which the explicandum occurs. But is there such a sense in which explications are open to empirical revision?
Perhaps the idea is that there is some source of empirical evidence that can tell for or against the decision to explicate a term a certain way that is not empirical evidence for or against any particular theory in which the explicatum is used. This seems unlikely to me. What form could this sort of evidence possibly take? If we choose an explication, say, to render the empirical consequences of a theory more transparent, then we might have evidence that the explication fails or succeeds in doing so. We might have evidence, for instance, that “…t`…” is a borderline case but “…t…” isn’t. But would this be empirical evidence? Can’t we tell borderline cases from the armchair, given enough background information? Note that we can supply the necessary background information ourselves, using thought experiments. We might be incapable of telling what the empirical consequences of some explicated or unexplicated theory are if we don’t know what the theory says. So we should be familiar with theories in which explicanda occur in order to make our explications work best. But is this really empirical knowledge, in the relevant sense? What is necessary is not knowledge that any given theory is true, or knowledge of the evidence for or against a theory, but knowledge of what the theory says. (Of course, what makes what one theory says more or less important than what another theory says is the relative likelihood of the truth of the theories, but philosophers qua explicators don’t generally take on the task of empirically assessing the relative likelihoods of theories – generally, I suppose, they assess the importance of a scientific theory on the basis of its prominence in modern scientific discussion.)
Perhaps the idea is that, since no one sentence in which a (non-observational) term occurs is immune to empirical revision, then no sentence we pick out as the explication of a term is immune to empirical revision. This also seems implausible to me. As I understand it, part of what it is to explicate t with t` is to treat “t = t`” (perhaps relative to a given theory) as if it were the definition of t (in that theory). Part of what that amounts to is agreeing to the eliminability of t in favor of t` in all extensional contexts (in a given theory). When we cease to agree to this, we change the subject. In general, as I see it, the explicit adoption of an explication of a term in a theory is an attempt to escape the consequences of various sorts of holism about that theory. We explicate, in part, to make the meanings of terms less diffuse. Of course, if a popular theory containing t is rendered trivially but empirically false by replacing all occurrences of t with t`, then t` is useless as an explication of t in that theory. Nobody would agree to explicate t with t` in such a case, and everyone’s refusal to adopt the explication would be on empirical grounds.
As it stands, this satisfies me that explications are open to empirical revision only in our first two senses. Am I missing something?
Tuesday, July 29, 2008
Rockwell's Anti-Zombie Argument
Over at The Splintered Mind, Teed Rockwell has put up a brief defense of an argument that zombies are impossible. The argument is this:
(1) Zombies are possible if and only if subjective experiences are epiphenomenal.
(2) Subjective experiences are epiphenomenal if and only if we have direct awareness of them.
(3) There is no such thing as direct awareness.
Therefore, etc.
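(The propositional skeleton of the argument, for what it’s worth, is valid; here is a minimal sketch in Lean, with Z, E, and D as my stand-in labels for “zombies are possible”, “subjective experiences are epiphenomenal”, and “we have direct awareness of them” – so any fault must lie in the premises, not the form:)

```lean
-- Z: zombies are possible; E: subjective experiences are epiphenomenal;
-- D: we have direct awareness of subjective experiences.
-- Premises (1)-(3) propositionally entail that zombies are impossible.
example (Z E D : Prop) (h1 : Z ↔ E) (h2 : E ↔ D) (h3 : ¬D) : ¬Z :=
  fun hz => h3 (h2.mp (h1.mp hz))
```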
My comments:
Ad (1). This is not true. The closest truth in this vicinity is:
(1`) If it is possible that subjective experiences are epiphenomenal, then zombies are possible.
I can't see why subjective experiences would have to be epiphenomenal in the actual world in order for zombies to exist in some other world. Why couldn't it be the case that qualia have some functional role in our cognitive architecture in the actual world, but the non-qualitative aspects of cognition subserve that functional role in another world? Perhaps the idea is that properties are individuated by the causal powers they grant when instantiated, so that P = Q only if, necessarily, Px causes p iff Qx causes p. Then a property is actually epiphenomenal iff necessarily epiphenomenal. This is implausible to me, though. Surely some property bestows some causal powers in the actual world that it does not in other worlds. But if that is the case, then qualia could be properties of this sort. They could bestow functional cognitive properties on conscious thinkers in the actual world that they do not in other worlds. Anyway, all this is OK for Rockwell's argument so far, since we only need something of the strength of (1`) for the argument to go through, if (2) and (3) are true.
The real problem is with (2). I suppose what would make (2) plausible is an argument to the effect of:
(4) We are either directly aware of a property's being instantiated, or we infer it.
(5) We infer some fact p on the basis of q only if p causes q.
(6) The instantiations of epiphenomenal properties do not cause anything else.
Therefore, (2).
Then, from (3), we get Rockwell's desired conclusion. As things stand, I lean towards saying that epiphenomenalism is actually false, but might have been true. By (1`), this immediately gets us that zombies are possible. But let's assume that epiphenomenalism is true, and even that (1) is true. In this sort of position, my beef has always been with (5). I always thought that, if epiphenomenalism is true, then we brought qualia into our ontology to explain the justification we have for believing self-ascriptions of certain mental states. The qualitative character of the state of, say, seeing an apple from ten feet away in decent, neutral light is supposed to be what explains how I generally know, or am justified in believing, that I see an apple when I do. This is not to say that the instantiation of a certain quale causes me to believe that I see the apple. Rather, the instantiation of the quale (perhaps given some other conditions about my proper mental functioning) is what provides that belief with its positive epistemic status. But it would be strange to say that the conferral of that epistemic status is a causal matter. I want to say that the instantiation of the quale does not causally affect my belief's epistemic status because the epistemic status of a belief is not the sort of thing that can be caused to be one way or another. I do infer the existence of qualia, in general, but not from their causal roles. I infer them to explain the epistemic status that I confer on mental state self-ascriptions. So qualia are a counterexample to (5) because they are an instance in which inference to the best explanation is not inference from cause to effect.
(1) Zombies are possible if and only if subjective experiences are epiphenomenal.
(2) Subjective experiences are epiphenomenal if and only if we have direct awareness of them.
(3) There is no such thing as direct awareness.
Therefore, etc.
My comments:
Ad (1). This is not true. The closest truth in this vicinity is:
(1`) If it is possible that subjective experiences are epiphenomenal, then zombies are possible.
I can't see why subjective experiences would have to be epiphenomenal in the actual world in order for zombies to exist in some other world. Why couldn't it be the case that qualia have some functional role in our cognitive architecture in the actual world, but the non-qualitative aspects of cognition subserve that functional role in another world? Perhaps the idea is that properties are individuated by the causal powers they grant when instantiated, so that P = Q only if, necessarily, Px causes p iff Qx causes p. Then a property is actually epiphenomenal iff necessarily epiphenomenal. This is implausible to me, though. Surely some property bestows some causal powers in the actual world that it does not in other worlds. But if that is the case, then qualia could be properties of this sort. They could bestow functional cognitive properties on conscious thinkers in the actual world that they do not in other worlds. Anyway, all this is OK for Rockwell's argument so far, since we only need something of the strength of (1`) for the argument to go through, if (2) and (3) are true.
The real problem is with (2). I suppose what would make (2) plausible is an argument to the effect of:
(4) We are either directly aware of a property's being instantiated, or we infer it.
(5) We infer some fact p on the basis of q only if p causes q.
(6) The instantiations of epiphenomenal properties do not cause anything else.
Therefore, (2).
Then, from (3), we get Rockwell's desired conclusion. As things stand, I lean towards saying that epiphenomenalism is actually false, but might have been true. By (1`), this immediately gets us that zombies are possible. But let's assume that epiphenomenalism is true, and even that (1) is true. In this sort of position, my beef has always been with (5). I always thought that, if epiphenomenalism is true, then we brought qualia into our ontology to explain the justification we have for believing self-ascriptions of certain mental states. The qualitative character of the state of, say, seeing an apple from ten feet away in decent, neutral light is supposed to be what explains how I generally know, or am justified in believing, that I see an apple when I do. This is not to say that the instantiation of a certain quale causes me to believe that I see the apple. Rather, the instantiation of the quale (perhaps given some other conditions about my proper mental functioning) is what provides that belief with its positive epistemic status. But it would be strange to say that the conferral of that epistemic status is a causal matter. I want to say that the instantiation of the quale does not causally affect my belief's epistemic status because the epistemic status of a belief is not the sort of thing that can be caused to be one way or another. I do infer the existence of qualia, in general, but not from their causal roles. I infer them to explain the epistemic status that I confer on mental state self-ascriptions. So qualia are a counterexample to (5) because they are an instance in which inference to the best explanation is not inference from cause to effect.
Hebrew School, Philosophy
Jewish teachers of today cannot, by and large, rely on a religious family culture, nor on an authoritative Jewish community….It is commonly said that education is a reflection of its society. Contemporary Jewish education has the task of creating the very society of which it should be the reflection. Not only must it interpret the received texts, it needs to reinterpret the very conditions of its role, assess the new situation and invent unprecedented methods for meeting it. A repetitive application of traditional approaches will not suffice. There is no substitute for philosophy in this context -- a rethinking of the bases of Jewish life and learning in our times.
- Israel Scheffler, Teachers of My Youth: An American Jewish Experience
Wednesday, July 23, 2008
Some Thoughts About Intuitions
Why are intuitions with narrower content so much more evidentially trustworthy or warrant-conferring than intuitions with broader content? Why is it that if many of our narrower intuitions conflict with some of our broader intuitions, we often tend to drop the latter and not the former?
Better to intuit that “water” doesn’t mean the same thing on Earth as on Twin Earth than to intuit semantic externalism. If we intuit the former, this is better evidence against semantic internalism than an intuition that semantic internalism is true would be evidence against it.
Better to intuit that one grain of sand is not a heap, and that if n grains of sand is not a heap, then n + 1 grains of sand is not a heap, than to intuit that there are no heaps. In this case, I think, our intuition that there are heaps trumps our intuitions of the premises. Perhaps the premises, combined, are “broader” or stronger than the intuition that there are heaps, since the second is recursive, and so has an infinite number of instances. Perhaps that is because I already know that there are heaps on the basis, say, of my perceptual evidence and my intuitions that, in this or that case, what I am looking at is a heap.
Better to intuit that I don’t really know that that is the façade of a barn than to intuit that knowledge is not justified true belief. Sometimes I think that my intuition that knowledge is justified true belief trumps my intuition about fake barn country. Perhaps this is because I sometimes think that K = JTB has too many merits to give up so easily. Of course, most of the time, people take the intuition about fake barn country (along with intuitions about other Gettier cases) to trump K = JTB.
I think these examples show that, in general, narrower intuitions are better evidence for their claims than broader intuitions. They don’t show that a narrower intuition always trumps a conflicting broader intuition, since the broader intuition might have more going for it than the conflicting narrower intuition brings against it. They suggest that a narrower intuition trumps a conflicting broader intuition, ceteris paribus.
How do we explain all of this? Part of it is that a narrower claim just, in general, has more warrant or a higher probability, relative to some piece of evidence, than any broader claim (of which the narrower claim is “an instance”), relative to that same evidence. That is, if I see Mike park the car, that is better evidence that Mike knows how to park a car than it is evidence that everyone named “Mike” knows how to park a car, or that everyone Mike’s age knows how to park a car. So, my intuition that p is, in general, better evidence that p than it is evidence that (a stronger claim) p & q, and better evidence that p & q than it is evidence that (an even stronger claim) p & q & r, and so on. If my intuition is that all bachelors are unmarried men, that is better evidence that all bachelors are unmarried men than it is evidence that, say, all men are unmarried men, or that all bachelors are unmarried men and all male college students are unmarried men.
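The probabilistic half of this claim is, I take it, just the monotonicity of probability under conjunction; in standard notation (with e as the evidence):

```latex
% Conjoining claims can only lower, never raise, probability on the same evidence:
P(p \mid e) \;\geq\; P(p \wedge q \mid e) \;\geq\; P(p \wedge q \wedge r \mid e)
```

This underwrites the point that an intuition that p is better evidence for p than for any conjunctive strengthening of p.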
Part of it is that we get more disagreement with intuitions on broader claims. Since much of the time the conflicting intuitions of two individuals will carry just as much evidential weight, these sorts of disagreements don’t affect the evidential standing of the claims of interest one way or another. Intuitions about semantic internalism and semantic externalism won’t get us very far, because people’s intuitions differ and carry equal evidential weight.
But I think the biggest part of the explanation is this. The trustworthiness or warrant-conferring-ness of intuitions is often, if not always, grounded in linguistic competence. If one assents to a sentence in a certain context strictly because one knows how to use a language, then, I think, we generally treat that assent as (prima facie, defeasibly) warranted. The idea might be that if pure knowledge of a language alone prompts assent, then that sentence must command assent in that context purely in virtue of linguistic rules or conventions; and whatever rules or conventions philosophers have to obey, the linguistic rules or conventions of the language of inquiry are surely among them. Whether or not this meta-explanation is right, I think we do, in fact, generally treat such assent as warranted.
But how does one know that the assent is grounded in linguistic competence alone, and not some other feature of the assenter’s cognitive make-up? One test is whether other speakers of the assenter’s language behave similarly. This will yield some false positives, though, since speakers of the assenter’s language have more in common with her, cognitively, than the language alone. This could be what grounds the assent, not linguistic competence, and there is no very compelling reason to think that just any shared cognitive trait will issue warranted pronouncements or steer us towards the true and away from the false.
Another test has fewer false positives. Specify all of the relevant facts that do not formally or pragmatically entail the claim of interest. Then, query the subject on the claim. If she intuitively assents or fails to assent, what could explain it? If we’ve specified the facts the right way, and made sure she understands them, then her assent could not be based on failure to believe the right facts. Perhaps she has failed to reason with the facts the right way? That is one explanation of her assent. But if we have no antecedent reason to doubt her reasoning skills, this is implausible. Perhaps her tacit linguistic knowledge disposes her to respond one way, but some consciously held theoretical commitment trumps this more trustworthy disposition. We should try to rule out this possibility somehow, if we want our intuitions to help us decide the claims they are about. If we do successfully rule it out, I can only see one other alternative explanation. This is that the subject's assent (or failure to assent) is grounded in her linguistic competence.
How does this explain that intuitions with narrower content are more trustworthy than intuitions with broader content? First, because it is easier to specify all of the relevant facts for a narrower claim than for a broader claim. It is easier to say everything about fake barn country, a certain heap of sand, or Twin Earth than it is to say everything about justified true belief, heaps (or, for that matter, vagueness in general), and semantic externalism. That is one thing our second test requires we do. Another thing is to make sure that no conscious theory is over-riding what the subject's linguistic competence disposes her to do. Narrower claims meet this requirement more easily, since they are less likely to be obviously consistent or inconsistent with a high-level theory than broader claims at the theoretical level. Lastly, the test requires that we make sure that our subject has not failed to reason with the premises correctly. Since it is easier to reason with fewer premises, ceteris paribus, and we need to specify fewer relevant facts for narrower claims, narrower claims are more likely to pass our second test than broader claims.
Tuesday, July 22, 2008
The Supposed Role of Ontological Expressions
What is the supposed role of ontological expressions, the role which is supposed to be preserved across alternative languages? Before we can sensibly ask whether different sets of ontological expressions play that role equally well we must be clear on what ontological expressions are supposed to do. “Be used for asking questions about what there is” does not suffice as an answer. (Eklund 2008, 30)
It seems to me that the roles of ontological expressions change considerably across languages. The role of the objectual existential quantifier is (sometimes) to say that a description is true of something without saying what that thing is. The role of the objectual universal quantifier is sometimes to say that a description is true of everything without having to say of each thing individually that the description is true of it. Other times, the role of the objectual universal quantifier is to say that a description is true of everything without having to take a position on what things there are. In the hands of one theorist, the role of the substitutional quantifiers, roughly, is to provide truth-conditions for sentences without saying what (features of) objects make the sentence true. In the hands of another theorist, the role of the substitutional quantifiers is to provide truth-conditions for referentially opaque sentences. The role of the various Meinongian ontological notions surrounding being and objects is, well, to state Meinong’s theory of being and objects.
The uses of ontological expressions differ from language to language and from theorist to theorist. As far as I can tell, the common role for ontological expressions in different languages in the hands of different theorists is just this: to do what they are supposed to do in the language according to the purposes of the theorist. This is not to say that just any expression can be ontological, even though just any expression has this sort of role. The category of ontological expressions is not defined by the role its members play. I suggested a way of categorizing existence-like expressions in my last post.
Incidentally, I think what I’ve had to say has been more-or-less in line with Carnap. I don’t think Carnap would say that what ontologists (or, given his antipathy towards ontology so described, he might say “linguistically savvy post-ontological philosopher”) ought to be doing is finding the role that ontological expressions ought to play, and then creating languages in which the ontological expressions best fit that role. He said that theorists choose whole languages according to the uses to which they would like to put them. I think he would have said, if prompted, that there is no one use for language in general. I also think he would have said, if prompted, that, for that reason, there is no one use for the ontological apparatus of language in general.
Monday, July 21, 2008
Existence-Likeness
I would like to take a stab at defining “existence-like” as characterized in Eklund (2008).
An expression is existence-like if it is translated by “exists” in English.
For starters, this handles the cases of disputes about mereology. If L has some formal mereology, with formal criteria for when things have a fusion, and s is the sentence in L stating that there is no fusion that is a table of a certain number of particles "arranged table-wise", then I think we would translate s in English with something such as "There is no table composed of x, y, z particles." The quantifier in L is translated in English with a "there is" equivalent to "exists".
The definition also handles a number of cases related to deviant logics. Consider a schema with branching quantification such as:
(1)  For all w there is an x,
     for all y there is a z,
     such that F(w, x, y, z).*
This might seem to present a problem, since different theorists will translate (1) different ways. Proponents of branching quantification – and especially proponents of branching quantification as a resource for schematizing the logical form of actual English sentences – will translate sentences of this form more-or-less homophonically (perhaps adding some punctuation or inflection marks). Proponents of classical FOL, such as Quine, will probably translate it with a sentence of the form:
(2) There is an f and a g such that, for all w and y, F(w, f(w), y, g(y)).
So a potentially existence-like expression, such as the branched existential quantifier in (1), will be translated multiple ways in English, depending on the syntactic and ontological commitments of the translator. But this does not seem actually to be a problem for our definition of “existence-like”, since both the classical logician and the proponent of branching quantification translate the “there is” of (1) with a “there is”. Where they differ is on the range of values of the existentially quantified variables.
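For readers unfamiliar with the notation, here is a sketch of the standard rendering of a branching (Henkin) prefix and its second-order Skolemization, with variables matched to (1); this is textbook material, not anything specific to this post:

```latex
% Branching (Henkin) quantifier: each row's existential variable
% depends only on the universal variable in its own row.
\begin{pmatrix} \forall w \, \exists x \\ \forall y \, \exists z \end{pmatrix} F(w, x, y, z)
% Standard second-order Skolemization of the same schema:
\exists f \, \exists g \, \forall w \, \forall y \; F(w, f(w), y, g(y))
```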
Another problem is the theorist who thinks that “there is” and “exists” are not synonymous or equivalent, or that existential quantifications are not existence claims. This view can manifest itself in a number of ways. The most obvious case would be one in which a theorist uses a language in which the “existential” quantifiers are interpreted substitutionally and are intended to help regiment some fragment of English discourse containing “there is”, but no fragment containing “exists”. Given her preferred regimentation, and the existence of the name “Pegasus” in English, she might consider the following true:
(3) There is at least one winged horse.
The classical logician, who prefers to regiment “there is” discourse with objectual quantification alone, has a few options here. She can translate the substitutionally regimented counterpart of (3), and other sentences of similar logical form, just as (3), but with “there is” interpreted objectually. She can treat the translatum as either true or false depending on her preferred theory and interpretation of fictions and empty names in English. She can also employ semantic ascent, translating the substitutionally regimented counterpart of (3) as:
(4) “There is at least one winged horse” is true.
Again, she can evaluate (4) based on her preferred theory of truth. Perhaps she thinks “true” is implicitly relativized to something more precise than English; perhaps she thinks (3) isn’t truth-apt.
Now, in the case of (3), if the translation is interpreted objectually, then I think it is clear that the existential quantifier in the substitutionally regimented counterpart of (3) satisfies our definition of “existence-like”. Even if (3) is viewed as true, but elliptical (as required by certain theories of fiction or empty names), I can’t see how even the fully explicit interpretation could lack the expression “there is”. And if there is a “there is” in the fully explicit version of (3), I don’t see how we could deny that that “there is” is the translation of the existential quantifier in the substitutionally regimented counterpart of (3), as required by our definition of “existence-like”.
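The contrast between the two readings of (3) can be put in a small toy model; everything here (the domain, the name stock, the truth assignment to “Pegasus is a winged horse”) is an illustrative invention, not a claim about any particular semantic theory:

```python
# Toy contrast between objectual and substitutional existential quantification.
# The domain, names, and truth assignment below are illustrative inventions.

domain = {"Secretariat", "Bucephalus"}             # actual objects only
names = {"Secretariat", "Bucephalus", "Pegasus"}   # includes an empty name


def winged_horse_of_object(obj):
    # No actual object satisfies "is a winged horse".
    return False


def winged_horse_sentence_true(name):
    # On this toy assignment, the substitution instance
    # "Pegasus is a winged horse" counts as true.
    return name == "Pegasus"


# Objectual reading: some object in the domain satisfies the predicate.
objectual = any(winged_horse_of_object(o) for o in domain)

# Substitutional reading: some substitution instance, formed from the
# name stock, is a true sentence.
substitutional = any(winged_horse_sentence_true(n) for n in names)

print(objectual)       # False
print(substitutional)  # True
```

The substitutional reading quantifies over substitution instances drawn from the name stock, so an empty name like “Pegasus” can verify (3) even though no object does.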
If we take the route of semantic ascent, things get a little weirder. After all “there is” does occur in (4), but it is mentioned, not used. Still, my intuition is that “there is” is the translation in (4) of the substitutionalist’s existential quantifier. One might want to say that “‘there is’”, and not “there is” is the translation, but “‘there is’” does not occur in (4), strictly speaking. (There is no close-quotation mark after the “there is” in (4).) Alternatively, assume that, if a word in a sentence is not treated by a translation as elliptical, or as an auxiliary term in a construction-yielding particle phrase, then part of the translation of the sentence is a translation of the word. Then, since the substitutionalist’s existential quantifier is not being treated as elliptical or auxiliary in the translation (4), there must be a translation in (4) of that quantifier. But that translation obviously has to be “there is”.
(Note that we encounter a similar sort of situation when the classical logician translates second-order quantification. If she doesn’t want to translate second-order quantification objectually into set theory, then she will most likely treat it metalinguistically. For instance, she might translate “There is a P and an x such that P(x)” as “There is a ‘P’ such that ‘There is an x such that P(x)’ is true”. The second-order quantification over P becomes metalinguistic quantification over “P”.)
Recall that our substitutionalist distinguishes between “there is” and “exists” as between substitutional and objectual quantification. What happens when she wants to translate from another substitutional language into English? She will translate existential quantification with “there is”, but not with “exists”. We have seen that, in these cases, the classical logician, who treats “there is” and “exists” as equivalent, can use either of these translations. In this case, then, not only is it unclear what translatum sentence to use (which is not necessarily a problem for our definition of “existence-like”), it is also unclear whether the translatum should contain “exists” as the translation of a given non-English expression. Since the questions about whether and when to use substitutional or objectual quantification are fundamental ontological questions, and we are defining “existence-like” in order to help answer these questions, it would be silly to require that we settle the questions about whether and when to use substitutional or objectual quantification in order to apply our definition correctly. I think we need to modify the definition. The only modification I can think of that works (and is also the simplest) is this:
An expression is existence-like if it is translated either by “exists” or “there is” in English.
This solves our problem, since both the substitutionalist and the classical logician satisfy this definition. This also solves a similar problem, which we haven’t explored, for Meinongian non-English languages.
This solution might create a problem of its own, however. The substitutionalist and the Meinongian have sought to create an interesting distinction between “exists” and “there is”. To some extent, our modified definition erases that distinction. We just observed that we don’t want our definition to trivially settle ontological questions. Has our modified definition done just that?
I think not. The point of a definition of existence-likeness is to enable us to survey what meanings it is theoretically possible to assign to “exists” and “there is” when doing ontology. From a Carnapian point of view, we could say that the point was to find out what terms in what languages have semantic rules that we can use to explicate “exists” and “there is” in English. Our definition indicates that we can use (separately) the Meinongian and the substitutional rules for “there is”, as well as other rules. But surely if these rules determine existence-like uses for expressions, and the rules do not prohibit the use of other rules for distinct expressions (such as the Meinongian or objectual “exists”), then our definition does not prohibit the use of both these sorts of expressions to state an ontology. Simply because we could use the Meinongian rules for “there is” to explicate “exists” does not mean that we could not use distinct rules to explicate “exists” a different way in the same language. So our definition is not problematic for that sort of reason.
* - I'm having trouble writing branching quantifiers on Blogger. The two quantifier strings on the different lines are supposed to be on two different branches.
An expression is existence-like if it is translated by “exists” in English.
For starters, this handles the cases of disputes about mereology. If L has some formal mereology, with formal criteria for when things have a fusion, and s is the sentence in L stating that there is no fusion that is a table of a certain number of particles "arranged table-wise", then I think we would translate s in English with something such as "There is no table composed of x, y, z particles." The quantifier in L is translated in English with a "there is" equivalent to "exists".
The definition also handles a number of cases related to deviant logics. Consider a schema with branching quantification such as:
For all w there is an x,
(1) such that F(w, x, y, z).*
for all y there is a z,
This might seem to present a problem, since different theorists will translate (1) different ways. Proponents of branching quantification – and especially proponents of branching quantification as a resource for schematizing the logical form of actual English sentences – will translate sentences of this form more-or-less homophonically (perhaps adding some punctuation or inflection marks). Proponents of classical FOL, such as Quine, will probably translate it with a sentence of the form:
(2) For all x there is an f, and for all y there is a g, such that F(x, f(x), y, g(y).
So a potentially existence-like expression, such as the branched existential quantifier in (1), will be translated multiple ways in English, depending on the syntactic and ontological commitments of the translator. But this does not seem actually to be a problem for our definition of “existence-like”, since both the classical logician and the proponent of branching quantification translate the “there is” of (1) with a “there is”. Where they differ is on the range of values of the existentially quantified variables.
Another problem is the theorist who thinks that “there is” and “exists” are not synonymous or equivalent, or that existential quantifications are not existence claims. This view can manifest itself in a number of ways. The most obvious case would be one in which a theorist uses a language in which the “existential” quantifiers are interpreted substitutionally and are intended to help regiment some fragment of English discourse containing “there is”, but no fragment containing “exists”. Given her preferred regimentation, and the existence of the name “Pegasus” in English, she might consider the following true:
(3) There is at least one winged horse.
The classical logician, who prefers to regiment “there is” discourse with objectual quantification alone, has a few options here. She can translate the substitutionally regimented counterpart of (3), and other sentences of similar logical form, just as (3), but with “there is” interpreted objectually. She can treat the translatum as either true or false depending on her preferred theory and interpretation of fictions and empty names in English. She can also employ semantic ascent, translating the substitutionally regimented counterpart of (3) as:
(4) “There is at least one winged horse” is true.
Again, she can evaluate (4) based on her preferred theory of truth. Perhaps she thinks “true” is implicitly relativized to something more precise than English; perhaps she thinks (3) isn’t truth-apt.
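The contrast between the two readings can be sketched in a toy model. Everything in the sketch below — the stock of names, the domain, and which substitution instances count as licensed — is an illustrative assumption, not a claim about English:

```python
# Toy contrast between substitutional and objectual readings of
# (3) "There is at least one winged horse". The names, the domain,
# and the set of licensed instances are illustrative assumptions.

names = ["Pegasus", "Secretariat"]

# Substitutional reading: the quantification is true iff some
# substitution instance is true (here: licensed by the discourse
# the substitutionalist is regimenting).
licensed = {"Pegasus is a winged horse"}

def exists_substitutional():
    return any(f"{n} is a winged horse" in licensed for n in names)

# Objectual reading: the quantification is true iff some object in
# the domain satisfies the open sentence; the domain contains no
# fictional objects.
domain = [{"winged": False, "horse": True}]  # Secretariat, say

def exists_objectual():
    return any(o["winged"] and o["horse"] for o in domain)

assert exists_substitutional()   # true via the name "Pegasus"
assert not exists_objectual()    # false of the actual domain
```

The point is only structural: the substitutional truth-condition looks at names and sentences, the objectual one at objects, which is why the same English "there is" can regiment either.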
Now, in the case of (3), if the translation is interpreted objectually, then I think it is clear that the existential quantifier in the substitutionally regimented counterpart of (3) satisfies our definition of “existence-like”. Even if (3) is viewed as true, but elliptical (as required by certain theories of fiction or empty names), I can’t see how even the fully explicit interpretation could lack the expression “there is”. And if there is a “there is” in the fully explicit version of (3), I don’t see how we could deny that that “there is” is the translation of the existential quantifier in the substitutionally regimented counterpart of (3), as required by our definition of “existence-like”.
If we take the route of semantic ascent, things get a little weirder. After all, “there is” does occur in (4), but it is mentioned, not used. Still, my intuition is that “there is” is the translation in (4) of the substitutionalist’s existential quantifier. One might want to say that “‘there is’”, and not “there is”, is the translation, but “‘there is’” does not occur in (4), strictly speaking. (There is no close-quotation mark after the “there is” in (4).) Alternatively, assume that, if a word in a sentence is not treated by a translation as elliptical, or as an auxiliary term in a construction-yielding particle phrase, then part of the translation of the sentence is a translation of the word. Then, since the substitutionalist’s existential quantifier is not being treated as elliptical or auxiliary in the translation (4), there must be a translation in (4) of that quantifier. But that translation obviously has to be “there is”.
(Note that we encounter a similar sort of situation when the classical logician translates second-order quantification. If she doesn’t want to translate second-order quantification objectually into set theory, then she will most likely treat it metalinguistically. For instance, she might translate “There is a P and an x such that P(x)” as “There is a “P” such that ‘There is an x such that P(x)’ is true”. The second-order quantification over P becomes metalinguistic quantification over “P”.)
Recall that our substitutionalist distinguishes between “there is” and “exists” as between substitutional and objectual quantification. What happens when she wants to translate from another substitutional language into English? She will translate existential quantification with “there is”, but not with “exists”. We have seen that, in these cases, the classical logician, who treats “there is” and “exists” as equivalent, can use either of these translations. In this case, then, not only is it unclear what translatum sentence to use (which is not necessarily a problem for our definition of “existence-like”), it is also unclear whether the translatum should contain “exists” as the translation of a given non-English expression. Since the questions about whether and when to use substitutional or objectual quantification are fundamental ontological questions, and we are defining “existence-like” precisely in order to help answer them, it would be silly to require that we settle those questions before we can apply our definition correctly. I think we need to modify the definition. The only modification I can think of that works (and is also the simplest) is this:
An expression is existence-like if it is translated either by “exists” or “there is” in English.
This solves our problem, since both the substitutionalist and the classical logician satisfy this definition. This also solves a similar problem, which we haven’t explored, for Meinongian non-English languages.
This solution might create a problem of its own, however. The substitutionalist and the Meinongian have sought to create an interesting distinction between “exists” and “there is”. To some extent, our modified definition erases that distinction. We just observed that we don’t want our definition to trivially settle ontological questions. Has our modified definition done just that?
I think not. The point of a definition of existence-likeness is to enable us to survey what meanings it is theoretically possible to assign to “exists” and “there is” when doing ontology. From a Carnapian point of view, we could say that the point was to find out what terms in what languages have semantic rules that we can use to explicate “exists” and “there is” in English. Our definition indicates that we can use (separately) the Meinongian and the substitutional rules for “there is”, as well as other rules. But surely if these rules determine existence-like uses for expressions, and the rules do not prohibit the use of other rules for distinct expressions (such as the Meinongian or objectual “exists”), then our definition does not prohibit the use of both these sorts of expressions to state an ontology. Simply because we could use the Meinongian rules for “there is” to explicate “exists” does not mean that we could not use distinct rules to explicate “exists” a different way in the same language. So our definition is not problematic for that sort of reason.
* - I'm having trouble writing branching quantifiers on Blogger. The two quantifier strings on the different lines are supposed to be on two different branches.
Labels:
Carnap,
existence,
logic,
meta-ontology,
phil lang
Friday, July 18, 2008
Eklund on Carnap
I’m reading Matti Eklund’s really interesting paper “Carnapian Theses in Metaontology and Metaethics”, which raises some of the issues I've addressed recently.
A couple of things. Eklund claims on p. 6 that the internal/external distinction does not require the analytic/synthetic distinction. I disagree because I think a Carnapian should subscribe to something like either of the following arguments.
Suppose that analytic truths are sentences that are true in a language L solely in virtue of L. Every linguistic framework has semantic rules. Linguistic frameworks are languages or language-fragments. If something is true solely in virtue of semantic rules, it is true solely in virtue of language. For some language L, there is at least one sentence that is true in L solely in virtue of L’s semantic rules. Therefore, there is at least one analytic truth in at least one linguistic framework.
Alternatively, suppose that analytic truths are synonyms of logical truths. Every linguistic framework has semantic rules. The semantic rules of a linguistic framework entail all of the synonymy relations between sentences in that framework. For at least one linguistic framework L, at least one logical truth in L has a synonym in L. Therefore, there is at least one analytic truth (which is not just a logical truth) in at least one linguistic framework.
I am presupposing that Carnap is what Eklund calls a “language pluralist”, but I take that to be obvious. I am also presupposing that Carnap thinks true all of the premises of at least one of these arguments, but I think he does. I remember him advocating the view of analyticity in the second argument (Williamson calls it “Frege-analyticity”) somewhere in his correspondence with Quine.
One more thing. Eklund writes: “One less trivial claim would be that ‘there are numbers’ has different meanings and truth-values in different languages while meaning what it actually means. But this is less trivial at the expense of being committing to some form of relativism, and language pluralism was supposed to be an alternative to relativism.” (11) This isn’t true, since one of the languages might be much more useful than all of the others. If someone can say that there is an objective fact to the effect that we ought to use a language that assigns a certain meaning and truth-value to “there are numbers”, then it seems odd to call her a relativist.
For the Disappointed Theorist
One of the ways you know you are doing a science is when the data force you to change your theory.
— Daniel Kahneman, APS 2006
Mutatis mutandis, "science" and "philosophy".
Thursday, July 17, 2008
Is Anything Wrong With My Ontology of Languages?
The notion of language I've been using here makes unusually fine-grained distinctions. By what criteria do I individuate languages? The criteria are still not very clear to me, and they involve some theoretical presuppositions which I’d like to make more explicit. Maybe the presuppositions are inconsistent or involve factual error. A little help figuring out whether this is the case would be appreciated. Anyway - the presuppositions.
A language has a set of wffs (or perhaps a function that assigns degrees of well-formedness to strings), so that if s is well-formed (to degree n) in L1, and not in L2, then L1 is not the same language as L2.
A language can also have a logic. We might conceive of a logic as a class of "transformation rules" in Carnap's sense. If a sequence of strings proves s in L1, but not in L2, then L1 is not the same language as L2.
A language has a semantics. In some sense, perhaps involving no platonistic ontological commitments, a language assigns meanings to sentences, words, and phrases. If x has some semantic property relative to L1, but not relative to L2, then L1 is not the same language as L2.
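The three criteria above can be sketched as a data structure. The representation below (frozensets for well-formedness and proof facts, pairs for semantic facts) is only an illustrative assumption; the point is just that a difference in any component makes for distinct languages:

```python
from dataclasses import dataclass

# Illustrative sketch of the individuation criteria: a language is
# fixed by its well-formedness facts, its proof facts, and its
# semantic facts; differ in any one and you have a new language.

@dataclass(frozen=True)
class Language:
    wffs: frozenset       # strings well-formed in the language
    theorems: frozenset   # strings provable via its transformation rules
    semantics: tuple      # (expression, semantic property) pairs

L1 = Language(frozenset({"s"}), frozenset({"s"}),
              (("s", "means that snow is white"),))
L2 = Language(frozenset({"s"}), frozenset({"s"}), ())

# Same wffs, same theorems, different semantic facts: distinct languages.
assert L1 != L2
```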
The meaning of a word, whatever it is (or however it is to be eliminated in favor of some less committal idiom), is sometimes derived in part from the word’s role in some larger theory. “Electron” derives its meaning, in part, from its occurrence in a central chunk of physical theory. “Phlogiston” derives its meaning, in part, from its occurrence in a central chunk of phlogiston theory. Now, “electron” occurs both in
(E) Every electron has a charge of -1
and in
(E`) Every electron has a charge of -10.
“Phlogiston” occurs both in
(P) When a flammable substance is burned, phlogiston escapes it
and in
(P`) It is not the case that when a flammable substance is burned, phlogiston escapes it.
We might suppose that “electron” derives its meaning, in part, from its occurrence in (E) but not in (E`), and that “phlogiston” derives its meaning, in part, from its occurrence in (P) but not in (P`). So there must be some special property of (any combination of) “electron”, (E), and the languages containing them in virtue of which this is the case; likewise for (P) and “phlogiston”. It can’t just be that (E) is true and (E`) is not, because the same distinction can’t be made between (P) and (P`). I don’t think it can just be that (E) is actually in physical theory, and (P) in phlogiston theory. This is because I think theories are best thought of as classes of sentences. “Electron” occurs in both physical theory and physical` theory, which is the set of sentences gotten by replacing (E) with (E`) in physical theory; likewise for “phlogiston”. If physical theory is well-formed or expressible in a language, then physical` theory is almost certainly well-formed or expressible in that language; likewise for phlogiston theory and phlogiston` theory. There is certainly something special about physical theory and phlogiston theory, and it has to do with the meanings of “electron” and “phlogiston”, but it isn’t yet clear what this special something is.
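The point that bare theory-membership can’t do the work can be made concrete. Treating theories as classes of sentences, as above (the second member of the set below is an illustrative assumption):

```python
# Theories as classes (sets) of sentences. physical` theory is
# obtained from physical theory by swapping (E) for (E`); both
# contain "electron", so set membership alone can't fix which
# theory confers the word's meaning.

E  = "Every electron has a charge of -1"
Ep = "Every electron has a charge of -10"

physical       = {E, "Electrons are leptons"}   # illustrative members
physical_prime = (physical - {E}) | {Ep}        # replace (E) with (E`)

assert any("electron" in s.lower() for s in physical)
assert any("electron" in s.lower() for s in physical_prime)
```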
My guess about how to make it clear is basically to individuate languages more finely. Assume that languages have unique assignments of truth-values to certain of their sentences. These include the sentences from which words derive their meanings. Words can derive their meanings only from sentences which are assigned a truth-value by a language. If L1 assigns a different truth-value to s than L2, and t derives its meaning in part from s, then t has a different meaning in L1 than in L2. The hypothesis is that “electron” actually has the meaning it does in the language of physical theory in part because that language assigns the value “true” to (E) but not to (E`), and “electron” is actually used in the language of physical theory, not the language of physical` theory.* We avoid the problems related to “phlogiston” in the following way. (P`) is not incoherent or analytically false in our language. Rather it is elliptical for a complicated statement about the sub-optimality of the language of phlogiston theory. This is the language which gave and continues to give “phlogiston” its meaning. Fully unpacked, (P`) might be glossed “The language of phlogiston theory is sub-optimal because there is no worthwhile concept to which (P) lends meaning.” “Phlogiston” means what it does, and is to be unpacked this way, because it was introduced in the language of phlogiston theory, and not, for instance, the language of phlogiston` theory.
I guess the idea overall is that theories in which theoretical terms derive their meaning, in part, from their occurrence in other sentences of the theory are, as Ayer might have said, disguised linguistic proposals. Or maybe the idea is that languages are disguised theoretical proposals.
I understand that the apparatus here involves a number of assumptions, but I’m ready to take them on unless they’re controverted by fact or internal inconsistency. Well - are they?
* - I say “language”, but I allow that physical theory – the class of strings assigned certain meanings – could be conducted simultaneously in many different languages. The actual meaning and use of “electron” and other special theoretical terms might not determine a single language for physical theory, in our sense of “language”.
_________________________
Update: At least one thing is wrong with this ontology. I think the epistemology of analyticity for which I have primarily made use of it is no good. Also, I don't like the ellipticality stuff. If semantic representations are mental representations, then this is way implausible.
Wednesday, July 16, 2008
Maybe history and the news make us seem like assholes...
...because all the nice things we do aren't worth telling each other about.
Even More Comments on The Philosophy of Philosophy
I see something wrong with my previous response to Williamson. The main point of Williamson’s (and our own) discussion of the possible varieties of epistemological analyticity is to explain how we can legitimately do philosophy from the armchair. To avoid one of Williamson’s objections to certain forms of epistemological analyticity, I suggested that understanding and truth should be relativized to languages in such a way that the discarded theoretical terms of discarded theories are not in the language of an up-to-date theorist. There might be analytic truths about phlogiston, and we might have to assent to or know their truth in order for us to understand (some) sentences containing “phlogiston”, but the relevant sort of truth and understanding in play here should be relativized to a language not used in up-to-date theory.
But aren’t philosophers sometimes concerned to know from the armchair the propositional contents of non-metalinguistic sentences, rather than their metalinguistic counterparts? That is, aren’t philosophers sometimes concerned to know, say, that every vixen is a female fox, not just that “Every vixen is a female fox” is true in the language of modern zoology? Knowing that a sentence is true in some language certainly isn’t sufficient for knowing its content; otherwise we would know, for instance, that phlogiston has a real functional role in combustion, and that the sun will rise tomorrow tonk the sun is purple.* Perhaps we might say that we know the content of “Every vixen is a female fox” if we know that it is analytic in our own zoological language and that we ought to be using our own zoological language. Or, to be a little more cautious, we might say that we know the content of "Every vixen is a female fox" if we know that it is analytic in every zoological language L such that (a) we know that L is maximally useful for us and (b) we know that the translation of "Every vixen is a female fox" in L is analytic. But certainly we don’t know from the armchair what sort of zoological language we ought to be using. For instance, we don’t know from the armchair whether the terms “vixen” or “fox” mark useful distinctions.
Still, as long as we understand a chunk of modern zoological theory in its customary linguistic guise, then we are in a position to know from the armchair what language that chunk of zoological theory does use, or what (slightly formalized) language or languages it can be taken to be using, or what language or languages it can be rationally reconstructed as using. This knowledge is a priori in the sense that it is guaranteed by our linguistic competence alone. And, for each of those languages, we are in a position to know from the armchair what the analytic truths are, construed epistemologically. At any rate, if I’m right about how to relativize truth and understanding to languages, then Williamson hasn’t shown that we aren’t in a position to know from the armchair what the analytic truths are, construed epistemologically. If philosophers are sometimes concerned to know from the armchair the propositional contents of non-metalinguistic analytic sentences, then they’re in for a disappointment. But maybe they'll do just fine if they set their sights a little bit lower.
This was Michael Friedman’s idea with the “relativized a priori”, right? Or, if Friedman was right, then this was all Carnap’s idea.
_____________________
* - Williamson makes roughly this point himself.
I understand “Every vixen is a female fox” and it has some positive epistemic status for me. How does it get that status? … The lazy theorist may try to dismiss the question, saying that it is simply part of our linguistic practice that “Every vixen is a female fox” has positive epistemic status for whoever understands it. But the examples of defective practices [surrounding “phlogiston”, “tonk”, racial pejoratives, and so on] show that it is not simply up to linguistic practices to distribute positive epistemic status as they please. That the practice is to treat some given sentence as having positive epistemic status for competent speakers of the language does not imply that it really has that epistemic status for them. (The Philosophy of Philosophy, 84)
Monday, July 14, 2008
Papers
I've started posting my papers on the Google Docs website.
The first paper, "Playing Characters: Towards a Theory of Video Game Role-Playing", is my would-be contribution to the upcoming Final Fantasy and Philosophy anthology. I argue, for a non-professional audience, that role-playing a character in a video game requires a sort of psychological connection of which certain forms of empathy are instances. I also speculate about the general aesthetic characteristics of role-playing video games and the relation between role-playing and real life. It was rejected because it was submitted late and other papers in the anthology address similar questions. It is easily the silliest paper I have ever written.
The second paper, "The Concept of Cognitive Meaningfulness", was my undergraduate thesis in Philosophy at Bard College. I discuss the origins of the concept and criticize various ways of explicating it. It's the only paper I know of just about the concept of cognitive meaningfulness since Part II of Israel Scheffler's Anatomy of Inquiry. It's not perfect, but I still like it.
How to Approach a Linguistic Item
When I encounter a word, phrase, or grammatical construction in need of philosophical explanation or clarification, what do I do? Sometimes the item derives its interest from its relation to something I already find interesting, but sometimes the interest is somehow intrinsic to the item itself. I’m more interested in what I do when the latter happens. What are the questions I have to bear in mind on my first encounter, as a philosopher, with a linguistic item of self-luminescent interest?
First, I think of examples. I look for seemingly typical or non-distinctive instances of the item. I try to situate it in a handful of different sentential and pragmatic contexts.
The next question I ask is: What is useful about this item? I think there are two ways of approaching this from the armchair. The first is to ask how things change after a sentence containing the item of interest is uttered in the intuitively typical or non-distinctive pragmatic contexts. What do I imagine would happen after the utterance? How might an interlocutor respond? What is true now of speaker and listener that was not true before?
The second approach is to ask what we would lose if we were not to allow this item (or any heteronymous substitute) into our speech. On this approach, what I imagine is more a whole linguistic and theoretical world than a set of particular speech situations. For instance, would we fail to mark an important distinction? Would we lose some pragmatic tool, some ability (very broadly speaking) to change the social status of people or things? Would we be unable to express an interesting theory?
The first approach to the question of usefulness tells us what actual linguistic facts there are for an analysis or explication of the item to capture. The second approach tells us why it is worth capturing it. Both of these approaches are profitably initiated and conducted from the armchair, but both are also susceptible to experimental and observational test.
Is this it? What am I missing?
The Second Maxim
I feel as if, when we philosophers attempt to rationally reconstruct or formalize or explicate some fragment of discourse, we should construe things so that as few (kinds of) sentences as possible are truth-apt. If the first maxim of scientific philosophizing is always to replace undefined primitives with logical constructs, the second maxim is to construe as few utterances as possible as truth-apt assertions. If we are concerned to reconstruct the most parsimonious theory of the world from some corpus of verbal behavior (and natural knowledge), then as little of the behavior as possible should constitute endorsement of part of a theory. Suppose the expressivist meta-ethical theory provides us with a range of terms and constructions (which do not yield truth-apt, assertive sentences) with which we might replace the rationally defensible bits of our current ethical discourse, and that the non-reductive realist meta-ethical theory provides us with a range of terms and constructions (which do yield truth-apt, assertive sentences) with which we might do the same. If both theories account equally well for the verbal behavior, don’t we have to endorse the expressivist theory? Isn’t the second maxim the reason why?
Sunday, July 13, 2008
More Comments on The Philosophy of Philosophy
In Chapter 4 of The Philosophy of Philosophy, “Epistemological Conceptions of Analyticity”, Williamson argues against epistemologies of analytic truths based on epistemological conceptions of analytic truths, which, in turn, are based on “understanding-assent” links. An understanding-assent link is a sentence like:
(1) Necessarily, whoever understands “All bachelors are unmarried” will assent to it.
I think the idea is that understanding-assent links hold between understanding of object-level sentences (corresponding to metalinguistic semantic facts), on the one hand, and assent to the content of those object-level sentences, on the other. If an understanding-assent link like (1) is supposed to provide the basis on which I know that “All bachelors are unmarried” is true, then this could be on the basis of a corresponding understanding-knowledge link:
(2) Necessarily, whoever understands “All bachelors are unmarried” knows that it is true.
Knowledge is factive. So, (2) entails:
(3) Necessarily, someone understands “All bachelors are unmarried” only if it is true.
But then, Williamson observes, it is unclear how to proceed with understanding-assent links related to several sorts of terms. I’ll just deal with “phlogiston” and other special theoretical terms from discredited theories, although what I have to say applies to the other sorts of terms he talks about.
It seems that fans of understanding-assent links will have to say that there are some even for “phlogiston”. Intuitively, part of the meaning of "phlogiston" is captured by its role in phlogiston theory. This commits us to:
(4) Necessarily, whoever understands “Phlogiston has the role R” will assent to it.
But phlogiston does not have the role R, because nothing plays the role that phlogiston theory assigns to phlogiston. However, it follows from (4) that whoever does not assent to “Phlogiston has the role R” doesn’t understand it. It then follows that people who think that nothing has role R don’t understand “Phlogiston has the role R”. Intuitively, this is not the case. So fans of understanding-assent links will have to accept something that is intuitively not the case.
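The unwelcome conclusion falls out of (4) by simple contraposition, which can be checked mechanically. Here is a sketch in Lean, where `Person`, `Understands`, and `Assents` are placeholder names of my own, not Williamson's:

```lean
-- If understanding a sentence guarantees assent to it (premise (4)),
-- then failing to assent guarantees failing to understand.
-- All names here are invented for illustration.
variable (Person : Type) (Understands Assents : Person → Prop)

example (link : ∀ p, Understands p → Assents p) :
    ∀ p, ¬ Assents p → ¬ Understands p :=
  fun p hna hu => hna (link p hu)
```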
I’m sure there are all sorts of ways around the problem Williamson is trying to set up for epistemologies of analytic truths based on understanding-assent links, but I’d like to propose just one.
First, excluding bizarre cases involving private codes, we understand sentences only relative to languages (or, if you like, idiolects). Second, sentences have their truth-values relative to languages (or idiolects). Assume, contrary to Williamson, that assent is generally metalinguistic – to assent to “All bachelors are unmarried” is, generally, to assent to the metalinguistic fact that “All bachelors are unmarried” is true, not to assent to the corresponding object-level fact that all bachelors are unmarried. So, fully unpacked, (4) means something like:
(4`) Necessarily, whoever understands “Phlogiston has the role R” in L will assent to the metalinguistic fact that “Phlogiston has the role R” is true in L.
In a theoretically important sense, phlogiston theory is stated in a different language from our current chemical theory. (I think this is the sense in which languages classically do not contain their own truth predicate, and probably the sense in which sentences of a language have determinate logical forms.) When doing chemical theory today, we do not speak the L mentioned in (4`). We are not speaking L even when we say that phlogiston theory is false, or that nothing has the role R. If a fully-spelled out description of R invokes other special theoretical terms of phlogiston theory, perhaps a good interpretation of “Nothing has the role R” is “We should not speak a language in which a sentence of the form ‘x has the role R’ is true, for some name substituted for ‘x’.”
The problem with words like “phlogiston” and “tonk” (and, an eliminativist might say, “believes”) is that it is a bad idea to use them at all (except, perhaps, to say something like that phlogiston doesn’t exist). It is a bad idea to use them because it is a bad idea to use languages that countenance them - languages which commit us to parts of phlogiston theory, or in which everything is true or nothing is true. Still, I think we can endorse understanding-assent links for even these sorts of words by relativizing the understanding and the assent in question to languages that we do not use.
File under: Applied Carnap.
________________
Update: This is a pretty bad objection, now that I look at it again. That's because (4`) is false. If people don't know what language L is - and most people don't, on the ontology of languages necessary to make this work - then they won't (or shouldn't) assent to statements mentioning it, including (4`). And the business about the ellipticality of "Nothing has the role R" seems pretty implausible.
Friday, July 11, 2008
Blegs
In ordinary second-order logics, is the first-order fragment of the logic complete? That is, are all propositions that are true in all models and expressible strictly in terms of first-order quantification also provable?
Also, intuitively, when one plays a role-playing video game such as Final Fantasy, Breath of Fire, or Dragon Warrior, does one pretend to be the character(s) one plays?
Wednesday, July 9, 2008
Comments on The Philosophy of Philosophy
In Chapter 2 of The Philosophy of Philosophy, “Taking Philosophical Questions at Face Value”, Timothy Williamson argues that a certain philosophical or “proto-philosophical” question is not, explicitly or implicitly, about language. This, what he calls the original question, is: “Was Mars always either dry or not dry?” He shows how a number of ways of answering the original question, through the consideration of intuitionistic, three-valued, and fuzzy logics, still don’t make it a linguistic question, since the answers are not about (i.e. don’t refer to) linguistic items. The answers are “Mars was always either dry or not dry”, “Mars was not always either dry or not dry”, and “It is indefinite whether Mars was always either dry or not dry.” Since none of these answers is about language, the question is not a question about language.
Let’s focus on yes-no questions for now. Say that A is a straightforward answer to a yes-no question Q, stated in language L, iff A is stated in L and expresses what “yes” would express or expresses what “no” would express. Clearly, not every non-straightforward answer to a question is about language. If Susan asks “You ate lunch at Vinny’s last night?”, and Tim responds “Actually, I went to Aunt Suzie’s”, Tim does not give a straightforward answer, but neither does he give a linguistic answer. Nor is every linguistic answer non-straightforward. If Tim responded “True” – as in “What you just said is true, stated indicatively” – I think the answer is both straightforward, because equivalent to “yes”, and linguistic, because about a sentence. Still, most linguistic answers are non-straightforward. If Tim responded “Depends on what you mean by ‘lunch’”, because, say, he ate a borderline meal of salad and an omelet at 11:45, that would be a typical non-straightforward, linguistic answer.
Note also that linguistic questions – questions about language – admit of non-straightforward and perhaps also straightforward non-linguistic answers.
Tim: “… but then I had to jet to the supermarket.”
Susan: “What does ‘jet’ mean?”
Tim: “Somebody jets somewhere whenever they try to get there very quickly.”
Or
Tim: “I had to get there very quickly.”
Tim’s first answer might be straightforward. His second answer is non-straightforward. Neither answer is about language.
The point is that questions that are about language admit of non-linguistic answers, and questions that aren’t about language admit of linguistic answers.
One way for a question to be implicitly, but not explicitly, about language, relative to a kind of answer K, is for all of the members of K to be explicitly about language. Williamson has shown that the original question is not in this way implicitly about language, relative to its philosophical answers, since the philosophical answers are not explicitly about language. But might the question be implicitly about language because the philosophical answers are implicitly about language? I kinda think so, for the following reasons.
1) The original question is stated in English.
2) Languages are partially constituted by their logics. Two things with different logics cannot be the same language.
3) The language of each answer has some formal logic – three-valued, fuzzy, intuitionistic, classical, etc.
4) English has no formal logic – neither three-valued, nor fuzzy, nor intuitionistic, nor classical, etc.
5) Therefore, English does not have the same logic as the language of any of the answers.
6) Therefore, the language of each of the answers is not the same as the language of the original question.
(1) and (3) are obvious. Although I’m not sure Quine would agree with me on (2), I think Williamson would. (4) is probably the most controversial, but I take it that Williamson should agree with me on that as well, judging by what he has to say about Vann McGee in his paper “Understanding and Inference.” But (6) straightforwardly follows from (1)-(4).
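For what it’s worth, the inference to (6) is at bottom an application of Leibniz’s law, which premise (2) licenses: identical languages must share their logic. A sketch in Lean, where every name (`Language`, `HasFormalLogic`, `english`, `answerLang`) is a placeholder of my own:

```lean
-- (3): the language of each philosophical answer has some formal logic.
-- (4): English has no formal logic.
-- (2) enters as Leibniz's law: if the languages were identical,
-- they would share the property of having a formal logic.
-- (6): so the language of each answer is not English.
variable (Language : Type) (HasFormalLogic : Language → Prop)
variable (english answerLang : Language)

example (h3 : HasFormalLogic answerLang)
        (h4 : ¬ HasFormalLogic english) :
    answerLang ≠ english :=
  fun heq => h4 (heq ▸ h3)
```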
Now, once we get to (6), it’s not obvious that every answer in L1 to a question in some other language L2 is thereby a linguistic answer. After all, if a bilingual speaker asks me how the weather is in English, and I answer “Hace fresco”, I have not thereby given a linguistic answer. But, I want to say, that is because it was merely a manner of speaking for me to answer in Spanish. If the philosopher who answers in a three-valued language, or a fuzzy language, or an intuitionistic language, or a classical language thinks she has to answer in that language, because that is the right language in which to answer the question, or the only (kind of) language in which to state her theory of vagueness, then the answer is not merely a manner of speaking. The step of translation from the logical language to natural English is a necessary step for the philosopher to give the sort of answer she wants to give. I want to say that it is in virtue of this necessity that the original question is linguistic, at least relative to these sorts of answers grounded in logical metareflection.
I think it is reasonable to say that there is a sense in which a question is implicitly about language, relative to a kind of answer K, iff every member of K is in another language because it must, for the speaker’s most cherished purposes, be in another language. So it is, apparently, with the original question and its philosophical-type answers – or at least the original question and the philosophical-type answers that Williamson has on offer. I guess that if the deconstructionist wants to say (in English) that Mars was always both dry and not dry, because binary distinctions are always unstable and every inscription of both “Mars has always been dry” and “Mars has always not been dry” is internally contradictory, then we have a philosophical English-language answer to our English-language question. But that’s not the kind of philosophy we were talking about, right? Weren’t we looking for the right philosophy of analytic philosophy?
Monday, June 30, 2008
What Grammatical Structures Say and the Linguistic Theory of Logical Truth
In the last chapter of Philosophy of Logic, Quine discusses the “linguistic theory of logical truth.” This is the theory that “[a] sentence is logically true if true by virtue purely of its grammatical structure. […] It is language that makes logical truths true – purely language, and nothing to do with the nature of the world.” (2nd ed., 95) Quine offers a few different reasons not to buy into this doctrine, most of them familiar from “Two Dogmas” and “Truth by Convention”. The freshest argument, I think, is the one he offers in the paragraph immediately following the previous quote. Here it is, in full:
Logical truths are about the world, or are true because they “reflect” features of the world, because their grammatical structures are about the world or reflect features of the world. In what sense could a grammatical structure possibly be about the world, say anything about the world, or “reflect” features of the world? First, we should note that grammatical structures are not about the world in the same way that sentences, names, or predicates are. Grammatical structures as such aren’t true or false like (truth-apt) sentences. By all appearances, grammatical structures don’t refer to anything in the world; Tarski’s definition of truth gets along just fine without assigning semantic values to grammatical structures. Nor do they have (Fregean) senses on any theory that I know of. Nor, intuitively, are they meaningful. If a person were to speak or write down a grammatical structure – say, by speaking or writing a sequence of particles and schematic variables for grammatical categories – I can’t see why anyone would want to say that she, or her utterance or inscription, meant anything.
We might want to say that grammatical structures say something about the world in a different sense – viz., in the sense that sentences with the same non-logical constants but different grammatical structures have different truth-conditions. “(Ax)(Cat(x))” says something different from “~(Ax)(Cat(x))” because of the difference in grammatical structure between the two. We might say that a negation symbol says that the negated sentence is false, a universal quantifier over a variable says that the sentence in the scope of the quantifier is true for all values of the bound variable, and so on.* In this way, by specifying what all of the particles or logical constants say, we can state more or less precisely what an entire grammatical structure, paired with a particular sentence instantiating it, says about the world. But two observations are in order. First, it is not clear how we should construe what the grammatical structures of atomic sentences say.** Second, and more importantly, the worldliness of a grammatical structure, in this sense, is dependent on the worldliness of the non-logical constants in the sentence instantiating it. For instance, the grammatical structure of “~(Cat(Dora))” says something about the world because “(Cat(Dora))” says something about the world – it is either true or false depending on the actual features of the thing called “Dora”. The grammatical structures of uninterpreted schemata say nothing about the world, because the talk of truth, falsity, and values of variables in our sketchy specification of what the particles say presupposes an interpretation of the lexical items in the sentence. 
When we admit, with Quine, that a logical truth “admittedly depends upon none of those features of the world that are reflected in lexical distinctions”, then the grammatical structure of a logical truth cannot derive its worldliness from the worldliness of the “lexical distinctions” marked by the non-logical constants in a sentence instantiating the structure. Briefly, since the worldliness of grammatical structures derives from the worldliness of the terms in the sentences instantiating them, and since the putative worldliness of logical truths does not depend on the worldliness of these terms, the grammatical structures of logical truths have nothing from which to derive their putative worldliness.
There might be some way of being about the world or reflecting features of the world that I haven’t grasped yet. Perhaps we should understand Quine as saying that we might as well postulate a sui generis mode of being about the world specific to grammatical structures. I can only say in response to this that we might as well not, both for the sake of not multiplying features of the world beyond necessity and for the sake of keeping “about the world” intelligible. Lastly, someone might say that the grammatical structure of logical truths such as “it is raining or it isn’t raining” is about the world because it reflects the fact about the world that things, in general, are or aren’t the case. But this begs the question against the proponent of the linguistic theory of logical truth. What is at issue is whether this fact is about the world.
* - It is much easier to fill in the “and so on” for a formalized language than for a natural language. What does a particle like “if” say? It seems we need a worked-out semantics for conditionals to fill out a proposal like this. If you aren’t satisfied by my hand-waving here, then that probably goes to show that it is even more difficult to make Quine’s argument come out sound.
** - This probably isn’t such a big deal, since the only atomic sentence that is a logical truth is “x = x”, and Quine seems to reckon “=” a particle.
Granted, grammatical structure is linguistic; but so is lexicon. The lexicon is used in talking about the world; but so is grammatical structure. A logical truth, staying true as it does under all lexical substitutions, admittedly depends upon none of those features of the world that are reflected in lexical distinctions; but may it not depend on other features of the world, features that our language reflects in grammatical constructions rather than its lexicon? It would be pointless to protest that grammar varies from language to language, for so does lexicon. Perhaps the logical truths owe their truth to certain traits of reality which are reflected in one way by the grammar of our language, in another way by the grammar of another language, and in a third way by the combined grammar and lexicon of a third language. (ibid., my italics)
Logical truths are about the world, or are true because they “reflect” features of the world, because their grammatical structures are about the world or reflect features of the world. In what sense could a grammatical structure possibly be about the world, say anything about the world, or “reflect” features of the world? First, we should note that grammatical structures are not about the world in the same way that sentences, names, or predicates are. Grammatical structures as such aren’t true or false like (truth-apt) sentences. By all appearances, grammatical structures don’t refer to anything in the world; Tarski’s definition of truth gets along just fine without assigning semantic values to grammatical structures. Nor do they have (Fregean) senses on any theory that I know of. Nor, intuitively, are they meaningful. If a person were to speak or write down a grammatical structure – say, by speaking or writing a sequence of particles and schematic variables for grammatical categories – I can’t see why anyone would want to say that she, or her utterance or inscription, meant anything.
We might want to say that grammatical structures say something about the world in a different sense – viz., in the sense that sentences with the same non-logical constants but different grammatical structures have different truth-conditions. “(Ax)(Cat(x))” says something different from “~(Ax)(Cat(x))” because of the difference in grammatical structure between the two. We might say that a negation symbol says that the negated sentence is false, a universal quantifier over a variable says that the sentence in the scope of the quantifier is true for all values of the bound variable, and so on.* In this way, by specifying what all of the particles or logical constants say, we can state more or less precisely what an entire grammatical structure, paired with a particular sentence instantiating it, says about the world. But two observations are in order. First, it is not clear how we should construe what the grammatical structures of atomic sentences say.** Second, and more importantly, the worldliness of a grammatical structure, in this sense, is dependent on the worldliness of the non-logical constants in the sentence instantiating it. For instance, the grammatical structure of “~(Cat(Dora))” says something about the world because “(Cat(Dora))” says something about the world – it is either true or false depending on the actual features of the thing called “Dora”. The grammatical structures of uninterpreted schemata say nothing about the world, because the talk of truth, falsity, and values of variables in our sketchy specification of what the particles say presupposes an interpretation of the lexical items in the sentence. 
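The sketchy specification of what the particles say can be displayed, for a formalized language, as Tarski-style clauses (a rough sketch of my own; relativization of truth to assignments of values to variables is suppressed):

```latex
% Rough Tarski-style clauses for what the particles "say";
% relativization to variable assignments is suppressed.
\ulcorner \neg\varphi \urcorner \text{ is true}
  \iff \varphi \text{ is not true} \\
\ulcorner (\forall x)\,\varphi \urcorner \text{ is true}
  \iff \varphi \text{ is true for every value of } x \\
\ulcorner \varphi \mathbin{.} \psi \urcorner \text{ is true}
  \iff \varphi \text{ is true and } \psi \text{ is true}
```

On this sketch, what a whole grammatical structure says is fixed by composing the clauses for its particles; that is why the structure's worldliness ends up depending on the interpretation of the lexical items plugged into it.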
Once we admit, with Quine, that a logical truth “admittedly depends upon none of those features of the world that are reflected in lexical distinctions”, the grammatical structure of a logical truth cannot derive its worldliness from the worldliness of the “lexical distinctions” marked by the non-logical constants in a sentence instantiating the structure. Briefly: since the worldliness of a grammatical structure derives from the worldliness of the terms in the sentences instantiating it, and since the putative worldliness of logical truths does not depend on the worldliness of those terms, the grammatical structures of logical truths have nothing from which to derive their putative worldliness.
There might be some way of being about the world or reflecting features of the world that I haven’t grasped yet. Perhaps we should understand Quine as saying that we might as well postulate a sui generis mode of being about the world specific to grammatical structures. I can only say in response to this that we might as well not, both for the sake of not multiplying features of the world beyond necessity and for the sake of keeping “about the world” intelligible. Lastly, someone might say that the grammatical structure of logical truths such as “it is raining or it isn’t raining” is about the world because it reflects the fact about the world that things, in general, are or aren’t the case. But this begs the question against the proponent of the linguistic theory of logical truth. What is at issue is whether this fact is about the world.
* - It is much easier to fill in the “and so on” for a formalized language than for a natural language. What does a particle like “if” say? It seems we need a worked-out semantics for conditionals to fill out a proposal like this. If you aren’t satisfied by my hand-waving here, then that probably goes to show that it is even more difficult to make Quine’s argument come out sound.
** - This probably isn’t such a big deal, since the only atomic sentence that is a logical truth is “x = x”, and Quine seems to reckon “=” a particle.
Sunday, June 29, 2008
Saturday, June 28, 2008
Quantifiers and the Grammatical Definition of Logical Truth
The grammatical definition of logical truth, discussed here, is probably inadequate for all sorts of interesting languages, including English. The definition, from Quine's Philosophy of Logic, is this:
"a logical truth is a truth that cannot be turned false by substituting for lexicon. When for its lexical elements we substitute any other strings belonging to the same grammatical categories, the sentence is true." (2nd ed., 58)
For Quine - and I think he's right on this count - we treat a class of words as a grammatical category, as opposed to a class of particles yielding new grammatical constructions, just in case the category is big enough. For instance, in a language with lots of intransitive verbs, we treat those as comprising a grammatical category. If an L-structure has an infinite stock of variables, we treat variables (or argument-terms, more generally) as a grammatical category; if it has three variables, we might do well to treat each as a particle.
I take it that English has an infinite - or at least a very large - stock of quantifiers. This is because I think that, in English, the quantifiers translated by "(E_)" and "(A_)" in first-order logic belong to the same grammatical category as expressions such as "There are many", "There are ten", "There are one million", and "There are innumerable". If the literature on quantifiers in natural language says otherwise, please correct me. Also, there are usually infinitely many quantifier-expressions in languages that support generalized quantification, right? Anyway, if quantifiers all belong to the same grammatical category, and we assume the grammatical definition of logical truth, then I can't think of a single logical truth containing a quantifier. For instance, "If Steve and Janice are cats, then there are some cats" would fail to be a logical truth, since "If Steve and Janice are cats, then there are innumerable cats" - gotten by "substituting for lexicon" - is false.
I imagine the Quinean response to all of this would be to say that, given a prior commitment to standard FOL, we should translate "There are n Fs" as "The class of all Fs has cardinality n." But it seems obvious to me that the average English speaker does not, as a matter of linguistic anthropology, commit herself to the existence of the class of all Fs in uttering "There are n Fs." The nominalist cannot properly respond, "No, there is no such thing as the set of all Fs." And besides, what if we substitute "self-member" for "F"?
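The worry can be made concrete with a toy model (a sketch of my own: the domain, the predicates, and the two-member quantifier "category" are all hypothetical simplifications, and I use "one million" rather than "innumerable" so the quantifier is easy to evaluate):

```python
# Toy model of the grammatical substitution test, restricted to the
# quantifier slot. All names and thresholds are illustrative.

def there_are_some(n):
    """'There are some cats' - true when there is at least one."""
    return n >= 1

def there_are_one_million(n):
    """'There are one million cats' - true when there are at least a million."""
    return n >= 1_000_000

# One grammatical category containing both quantifiers.
QUANTIFIERS = {"some": there_are_some, "one million": there_are_one_million}

def sentence(quantifier, cats):
    """'If Steve and Janice are cats, then there are <quantifier> cats.'"""
    antecedent = "Steve" in cats and "Janice" in cats
    consequent = QUANTIFIERS[quantifier](len(cats))
    return (not antecedent) or consequent  # material conditional

def passes_grammatical_test(cats):
    """True under every substitution within the quantifier category."""
    return all(sentence(q, cats) for q in QUANTIFIERS)

cats = {"Steve", "Janice"}
print(sentence("some", cats))          # True: the original sentence
print(sentence("one million", cats))   # False: the substitution instance
print(passes_grammatical_test(cats))   # False: so not "logically true"
```

Since a single substitution instance within the category comes out false, the intuitively logical truth fails the grammatical test; this is just the "innumerable cats" counterexample run mechanically.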
Monday, June 23, 2008
Quine on Grammatical Structure and Logical Truth
In Philosophy of Logic, Quine offers the following definition of "logical truth": "a logical truth is a truth that cannot be turned false by substituting for lexicon. When for its lexical elements we substitute any other strings belonging to the same grammatical categories, the sentence is true." (2nd ed., 58)
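Schematically (my own gloss on the quoted definition, not Quine's formulation):

```latex
% S' ranges over the sentences obtained from S by substituting,
% for each lexical element of S, any string of the same
% grammatical category.
S \text{ is logically true} \;\iff\; \text{every such } S' \text{ is true}
```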
Later on, considering whether to strengthen FOL to allow for adverbial modification of predicates, Quine claims that, on the definition of "logical truth" lately quoted:
the sentence
(5) ~(Ex)(x walks rapidly . ~(x walks)),
or 'Whatever walks rapidly walks', would qualify as logically true. (76)
This is interesting. The grammatical definition is epistemologically illuminating, and it clears up a lot of confusions I have about the relationship between formal logic and natural language. I still don't know whether it adequately captures all of the intuitive cases of logical truth.
Really, my only observation here is that it seems that the grammatical definition does not make (5) a logical truth. This is because adverbs can sometimes alienate the predicates they modify. An adverb A alienates a predicate F in a sentence token S iff removal of A from S would change the truth-value of the clause of which F is a part. Briefly, A alienates F (in a certain context) if something can be F A'ly without being F simpliciter. Consider the following cases of adverbial alienation:
(1) Tim indirectly told John about Sally.
(2) Paul is coming home shortly.
(3) Sue allegedly stole the watch.
(4) Esther nearly won the tennis match.
We can imagine cases in which (1), (2), (3), and (4) are true, but their non-adverbialized counterparts aren't. There doesn't seem to be anything syntactically unusual about these adverbs. By all appearances, (1), (2), (3), and (4) have the same grammatical structure, respectively, as the following:
(1`) Tim excitedly told John about Sally.
(2`) Paul is coming home currently.
(3`) Sue actually stole the watch.
(4`) Esther barely won the tennis match.
But if (1), (2), (3), and (4) can be turned from truth to falsehood by transformation into (1`), (2`), (3`), and (4`) then, by the grammatical definition, none of these are logically true.
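To make the bearing on (5) explicit (an illustrative substitution of my own, assuming that "nearly" shares a grammatical category with "rapidly" and "wins" with "walks"):

```latex
% Substituting for lexicon in (5): "nearly" for "rapidly",
% "wins" for "walks".
\neg(\exists x)\,(x \text{ nearly wins} \mathbin{.} \neg(x \text{ wins}))
```

By (4), Esther nearly won without winning, so this substitution instance is false, and (5) fails the grammatical test.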