Having just taught a seminar on pragmatism and reading a recent book review by Alex Klein, I realized that there’s a reading of William James’ “Will to Believe” which isn’t universally recognized even though it seems obvious to me.
James argues that there are cases in which our passions, rather than reasons alone, may legitimately be allowed to determine what we believe. So we ought “to respect one another’s mental freedom” in such cases, and each of us should believe as we are moved by our hearts and passions to believe.
We can distinguish two considerations.
Doxastic Efficacy: James considers cases in which believing that p makes it more likely that p will actually turn out to be true. James suggests that friendship can be like this. He elsewhere offers the example of a mountain climber who must leap across a chasm and is more likely to succeed if he believes that he can do it. Because he’d very much prefer to make the leap and survive, it’s OK that he believes even though the evidence that he’ll succeed is uncompelling.
Ampliative Risk: James notes that there is a conflict between our duties to believe truth and shun error. Believing or suspending belief requires arriving at a balance between these duties. Yet there is no uniquely rational way to balance these duties. So finding the evidence and reasons to be enough in a particular instance always partly reflects a value judgment rather than just the evidence alone.
Some scholars, including the authors who are the target of Alex’s review, see James’ argument as really only being about Doxastic Efficacy. They don’t consider Ampliative Risk to be a separate and free-standing consideration. It seems obvious to me that they’re wrong.
The two considerations are importantly different.
Doxastic Efficacy is more limited in a sense, because it only applies to cases where believing p makes p more probable.1 It is not limited in the values that are relevant, however. Anything that would make p good or not-p bad could serve to justify believing it. Wishful thinking is defensible when wishing might make it so.
Ampliative Risk, on the other hand, does not allow just any values. It doesn’t matter how good p would be or how bad it would be if not-p. All that matters are the conditional values of how good it would be to believe p if p were true, how bad it would be to believe p if p were false, and so on.2 Yet there is always some risk of being wrong if one believes anything contingent, so Ampliative Risk arises for every belief.
So the question is whether these points are as obvious to most everyone else as they are to me. If so, then I’m making the small point that a few philosophers are wrong. Fine for a blog post. If not, then maybe I should write a paper.3
- Even this needs to be refined somewhat. For any q, S’s believing p increases the probability that “q and S believes that p” is true, but obviously we don’t want to count cases like that.
- These are what Heather Douglas would call values playing an indirect role.
- I always have a creeping worry when writing any history of philosophy that all of my thoughts have already been published by someone else fifty or a hundred years ago.