Payoffs in will to believe cases

In thinking about James’ ‘The Will to Believe’ (in a blog post and a draft paper), I distinguish two kinds of cases.

In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved. I’ve typically put this in terms of conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. Heather Douglas calls this values playing an indirect role.

Implicit in this is that believing P when it is false counts as a cost, and so on for the other cells. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is generally bad. What’s at issue is not the general valence of the outcomes but instead their intensity.

So what makes it a case of ampliative risk is the signs of the values in the payoff matrix. Schematically, it looks like this:

| ampliative risk | P is true. | P is false. |
| --- | --- | --- |
| You believe that P. | +A | –B |
| You do not believe that P. | –C | +D |
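
To see how the stakes do their work, here is a minimal sketch in Python. Nothing like it appears in the post or the paper; the payoff magnitudes and the credence of 0.6 are illustrative assumptions of mine.

```python
# Minimal sketch of the ampliative-risk matrix. A, B, C, D are the
# magnitudes from the table above; the signs are applied here, matching
# the schema: +A, -B, -C, +D.

def expected_values(p, A, B, C, D):
    """p is your credence that P is true (0 <= p <= 1)."""
    ev_believe = p * A + (1 - p) * (-B)    # +A if P is true, -B if false
    ev_withhold = p * (-C) + (1 - p) * D   # -C if P is true, +D if false
    return ev_believe, ev_withhold

# With all four magnitudes positive, believing beats withholding exactly
# when p >= (B + D) / (A + B + C + D). Raising a stake (say, the cost B
# of believing a falsehood) raises that threshold, even though the signs
# of the four outcomes never change.
print(expected_values(0.6, A=1, B=10, C=1, D=1))  # high stakes: withholding wins
print(expected_values(0.6, A=1, B=1, C=1, D=1))   # low stakes: believing wins
```

The same evidence (the same credence p) can thus license belief under one assignment of stakes and withholding under another, which is the sense in which the values play an indirect role.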

This is different from cases of doxastic efficacy, in which believing P plays a significant role in producing P or making it happen. The over-used example is climbing a mountain and standing on a crumbling ledge. You must jump or you will die. Moreover, we stipulate that your success at the jump depends on your believing that you will make it. Harbour doubt, and you will die.

The payoffs are notably different here. The value of having a true or false belief is eclipsed by the value of making the jump. The payoff matrix looks roughly like this:

| doxastic efficacy | You make the jump. | You fail. |
| --- | --- | --- |
| You believe that you will make the jump. | +E | –F |
| You do not believe. | +E | –F |
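
The structural difference shows up if we sketch this case in the same style. Here the belief changes the probability of the good outcome rather than the payoffs; the probabilities below are illustrative stipulations of mine, not anything in James.

```python
# Sketch of the doxastic-efficacy matrix: the payoffs +E / -F are the
# same in both rows, but the belief shifts the probability of success.

def expected_value(p_success, E, F):
    return p_success * E + (1 - p_success) * (-F)

# Stipulated for illustration: believing you will make the jump makes
# you far more likely to make it.
P_SUCCESS_IF_BELIEVE = 0.9
P_SUCCESS_IF_DOUBT = 0.2

print(expected_value(P_SUCCESS_IF_BELIEVE, E=100, F=100))  # 80.0
print(expected_value(P_SUCCESS_IF_DOUBT, E=100, F=100))    # -60.0
```

In the ampliative-risk matrix the belief makes a difference to the payoffs; here it makes a difference only to the probabilities.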

Note that this payoff matrix might also hold for a third party watching you from a distance (with their belief replacing yours) provided that they like you a lot. Yet, for them, it is not a case of doxastic efficacy. Their believing or not makes no difference as to whether you survive or you don’t.
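
For the observer, the same sketch runs with one change, and the change is the whole point: the probability of your making the jump is independent of what they believe. (Again, the numbers are illustrative.)

```python
# For the distant observer, the probability of your making the jump
# does not depend on what they believe, so both rows of the matrix
# come out with the same expected value.
P_SUCCESS = 0.5  # whatever it is, it is the same in both rows
E, F = 100, 100

ev_if_they_believe = P_SUCCESS * E + (1 - P_SUCCESS) * (-F)
ev_if_they_doubt = P_SUCCESS * E + (1 - P_SUCCESS) * (-F)
assert ev_if_they_believe == ev_if_they_doubt
```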

Suppose that you slip from the ledge and die. Your credence would be irrelevant, because you’d be dead. The values of E and F haven’t changed, though. So if the second payoff matrix justified a third party in believing that you would make it before, then it still does. And that’s absurd.[1]

Consider also cases in which a belief has an effect on something that is distinct from the content of the belief. For example, suppose that a terrorist will blow up a bus full of baby otters unless you form the belief that margarine is better than butter. Call these baby otter cases.

It’s probably impossible for you to form such a belief under such circumstances, but it also seems as if any justification you might have is decidedly unepistemic. Whether margarine is better than butter is not really in question and doesn’t really matter. You just don’t want the baby otters to die. So the payoff matrix looks like the one for doxastic efficacy.
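
A sketch of the baby otter case (with made-up numbers, as before) makes the structure vivid: the truth of the belief’s content never enters the calculation at all.

```python
# Baby otter case: forming the belief determines whether the otters
# live, and the payoff turns entirely on that outcome. Whether margarine
# really is better than butter appears nowhere in the computation.
def payoff(belief_formed, margarine_is_better):
    otters_live = belief_formed  # the terrorist's stipulation
    return 100 if otters_live else -100  # margarine_is_better is unused

print(payoff(True, margarine_is_better=True))   # 100
print(payoff(True, margarine_is_better=False))  # 100; the truth value is idle
```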

Sometimes Jamesian pragmatism is interpreted as allowing or requiring beliefs in the baby otter cases. Whatever one might say about arguments that James makes elsewhere, however, the considerations in ‘The Will to Believe’ don’t have that consequence. The last case is neither one of ampliative risk nor of doxastic efficacy.

I don’t work this out in terms of payoff matrices in the paper, because they are only a schematic formalism. The lessons are there in the prose of the paper.


  1. A third-person observer might be justified in believing that you will survive on grounds of ampliative risk, while it is still unclear whether you will survive. The cost of giving up hope if you do make it might be rather large (the C term in the matrix above).
