A forum for VE lucubration

Monday, October 16, 2006

Swamp Exemption Status

I’ve been thinking about swamping. Specifically, I’m wondering what is swampable. And even more specifically, under what conditions a theory of knowledge is doomed to the swamping problem. A theory of knowledge is doomed to the swamping problem if and only if___________?

My suspicion is that answering this question would require that it first be made clear which value problem for knowledge we are talking about. I’m distinguishing value problems in the way Duncan Pritchard has in a recent paper:

The primary value problem is the problem of explaining how knowledge is more valuable than true belief.
The secondary value problem is the problem of explaining how knowledge is more valuable than any proper subset of its parts.
The tertiary value problem is the problem of explaining how knowledge has a distinctive value compared to any/all of its subparts: i.e. a value of a different kind.

Although a successful account of the nature of knowledge would (probably) be required to answer all three of the value problems, as I take it, the swamping problem is one that arises when trying to answer the primary value problem only. Specifically, it arises when a theory of knowledge defines knowledge as true belief plus some other quality such that the value of the other quality is parasitic on the value of truth. If a theory of knowledge falls to the swamping problem, then it can’t answer successfully the primary value problem.

I think it’s pretty clear how the generic version of reliabilism is subject to the swamping problem. This is because a belief produced by a reliable process would be most obviously valuable because such a belief is likely to be true. But adding “likely to be true” to a true belief doesn’t get you anything more valuable than a true belief. Kvanvig makes this point clear in his book “The Value of Knowledge and the Pursuit of Understanding.”

What becomes confusing, though, is which “kinds” of value are swampable. Are any swamp-exempt? Instrumental, extrinsic, and intrinsic value all appear to me, at least in principle, capable of being swamped by the value of truth in a theory of knowledge. More clearly, a theory of knowledge trying to resolve the primary value problem might try to explain the value of knowledge over truth by invoking a justificatory component that is valuable for any of these reasons (extrinsic, instrumental, or intrinsic) and still be such that the justificatory component’s value is in some important way parasitic on the value of truth.

What about final value, though? Something has final value if it is valuable for its own sake. This need not entail that it is valuable because of its intrinsic properties, although (as I understand it) some things can be both intrinsically and finally valuable.

What’s not clear to me is whether final value is the sort of value that can be parasitic on truth to the extent that (for example) a justificatory component in a theory of knowledge that is finally valuable could nonetheless be swamped. Thoughts on swamping?

Monday, October 09, 2006

Timothy Williamson's 'Neo-Tethering' Solution to the Meno problem

The Meno challenges us to figure out what makes knowledge more valuable than mere true belief. This challenge becomes particularly difficult if we are on board with Socrates in thinking that knowledge and true belief are equally practically useful.

Jon Kvanvig (2003) in “The Value of Knowledge and the Pursuit of Understanding” has pointed out the flaws of quite a few attempts to solve the problem. This book is rich and instructive.

One account that he rejects, however, appears more promising than others. This is the account given by Timothy Williamson (2000), which claims that knowledge is more valuable than true belief because of its greater cross-temporal permanence relative to true belief. That is, Williamson takes it that knowledge is less likely to be undermined by future evidence than is true belief.

Williamson’s account is a probabilistic one, and it appears to hold true in our world, even though (as Kvanvig points out) not all cases of knowledge in our world are more permanent than mere true beliefs. This is because of facts about belief fixation. Some of our beliefs are fixed pragmatically (for example, instinctively) rather than evidentially. When such beliefs are true but not known, we have mere true beliefs that would appear particularly resistant to being undermined, likely more resistant than some of our knowledge.

From this, Kvanvig supposes that in some worlds, unlike ours, the majority of beliefs will be fixed pragmatically and non-evidentially; thus, in some worlds, knowledge is less permanent than true belief.

Does such a view undermine Williamson’s claim?

Kvanvig thinks that it does, and he reasons as follows: because some possible worlds with a preponderance of pragmatic belief fixation would be such that knowledge is less permanent than mere true belief, Williamson’s claim is merely a contingent truth, one that holds only in those worlds in which knowledge is more permanent than true belief.

This is bad, he thinks, because “It is simply false that knowledge loses its value in worlds where the environment is less cooperative and where pragmatics play a more significant role in belief fixation.” (2003b p. 17)

I agree with Kvanvig that knowledge doesn’t appear, at least prima facie, to be the sort of thing that would lose its value in some worlds. But are we entitled to conclude that knowledge would lose its value in some worlds from the fact that Williamson’s account would be false in some worlds? I’m not sure we are. It seems all we can conclude is that, in such worlds (where pragmatics play a significant role in belief fixation), what makes knowledge more valuable than true belief would not be its permanence. Of course, this would still leave open the possibility that knowledge could be valuable (and more valuable than true belief) for different reasons in such worlds. And so, it doesn’t seem we must reach the conclusion that knowledge loses its value in such worlds (which Kvanvig seems to take as a reductio of the view).
One would, of course, dismiss my suggestion if one took it that the value-conferring property of knowledge must hold across all possible worlds, but I don’t see why that would be so. (For example, in some worlds, winter coats might be eaten for sustenance and be valuable for that reason; in our world, we wear them.) Is it obvious that knowledge is so relevantly dissimilar that a world-relative account of value is plausible in the former case and implausible in the latter?
One problem, though, I think, for defending Williamson’s view as a response to the Meno problem is that it appears to make a universal claim, and as Kvanvig shows (through some counterexamples), there are particular cases that undermine a universal claim about knowledge’s permanence.
If Williamson’s goal, though, is to explain why knowledge is more valuable than true belief for us, perhaps he should weaken his view and claim that “knowledge is significantly more likely to be more permanent than is true belief.” Embedding this probability claim within a probability claim muddies the notion, but perhaps it is a move in the right direction. I’ll stop rambling now; I’m open to suggestions on this (if you couldn’t already tell).

Wednesday, October 04, 2006

More on Driver, the Virtue Conflation Problem and Epistemic Egoism

I’ve been thinking about the distinction between intellectual and moral virtues at the level of value-conferring property. Julia Driver has argued in her paper “The Conflation of Moral and Epistemic Virtue” that what distinguishes moral and intellectual virtues is a matter of what goods are produced by these respective virtues, as opposed to (for example) the ends toward which the respective virtues are motivated. And so, Driver employs a consequentialist account in an effort to distinguish between the virtues. On the surface of it, I don’t see anything implausible about employing a consequentialist account here. Driver makes a thorough case for a consequentialist account of virtue, in general, in her 2001 monograph “Uneasy Virtue”.

What I am concerned about, though, is whether her consequentialist distinction between moral and intellectual virtues at the level of value-conferring property should posit that intellectual virtues are valuable because they produce epistemic goods (i.e. knowledge, truth) for the agent. This is problematic, I think, because it allows someone who cares only about gaining truth for himself (and not maximizing epistemic value in general) to qualify as intellectually virtuous. Hence, Driver’s account of what makes an intellectual virtue valuable is one that condones unbridled epistemic egoism. I offer the following example to illustrate this:

Unlike his brother Ebeneezer, who values monetary goods and hoards them, Ludwig Scrooge values doxastic goods. Ludwig believes that knowledge and true belief are valuable. Also, because not everyone has access to all facts, and because some individuals have faulty cognitive equipment, not everyone enjoys a surfeit of these goods. Ludwig is aware of this fact and determines that if he can acquire more of this good than others, then he will be better off. Ludwig, following this reasoning, embraces the view of epistemic egoism: (as David Gauthier puts it) Ludwig is “…a person who on every occasion and in every respect acts to bring about as much as possible of what he values.” He realizes that there are two ways to maximize what he values: he can maximize knowledge and truth simpliciter, or he can maximize them for himself only. As it stands, Ludwig has no desire to see anyone but himself attain doxastic goods. And, in fact, he reasons: “If there is some stranger who could, at some time t, acquire some truth N such that I would never gain from it, then I would prefer that that stranger did not acquire N, but rather, some falsehood instead. This would make me better off.”
Just as Ludwig’s brother Ebeneezer became adept at the skills of profiting as a result of desiring his personal monetary gain, Ludwig has cultivated the skills of profiting as a result of desiring what is epistemically valuable for himself. For example, he is intellectually tenacious in forming his beliefs, he is shrewd in his calculation of evidence, he is careful to recognize his own biases that might affect his belief-forming processes, etc. In addition to possessing these characteristics, Ludwig has also developed a habit of keeping quiet when others want to know information that he knows. He concludes that his goal of maximizing personal epistemic value is better achieved by deceiving others into thinking he lacks knowledge of some fact, rather than sharing that fact. “I don’t know,” Ludwig will say, for example, when someone who wishes to know the way to town asks him, even though he knows. (Note: this strategy is shared in the monetary domain by Ebeneezer, who deceives others into thinking he has no money when they ask to borrow it, even while he has plenty in his pocket.) Ebeneezer becomes the wealthiest man in town through his tactics, and Ludwig becomes the most knowledgeable. Ebeneezer, though wealthy, surely is not morally virtuous. Is Ludwig, though knowledgeable, intellectually virtuous?

I think the answer we should reach is “no.” (I’m at work on a paper in which I’m arguing this).
I’m interested in getting some intuitions on this…