Papers on the JRD thesis

In several papers, I’ve made use of what I call the James-Rudner-Douglas (JRD) thesis: “Anytime a scientist announces a judgement of fact, they are making a tradeoff between the risks of different kinds of error. This balancing act depends on the costs of each kind of error, so scientific judgement involves assessments of the value of different outcomes.”

I have a paper forthcoming in Episteme which explores this theme as well as other issues in William James’ “The Will to Believe.” I just sent off my final draft and brought the version on my website up to date.

I’ve given a couple of talks in which I mull over possible counterexamples to the thesis. I recently wrote that up as a paper, and I’ve now posted a draft. Comments are welcome.

Irreconcilable differences

Last time I taught a seminar on pragmatism, I began to doubt that it made sense to see “pragmatism” as a movement and blogged about the fundamental differences between Peirce’s and James’ positions. I’m teaching pragmatism again, and the differences are even more salient this time.

What follows is a somewhat rambling discussion of differences between Peirce and James on method, on truth, and in their general outlook.


Pragmatism texts

I’m teaching a seminar on pragmatism again this semester, so I’ve updated some of the texts in my pragmatism and American philosophy repository. My habits for using git are terrible, so lots of small changes got swept together in one giant update.

The big addition is LaTeX and PDF files for William Clifford’s “The Ethics of Belief.” Although it’s in the public domain, I was unable to find an unabridged version anywhere on the net. So I spent some time today making one. Starting from OCR on a scan of the 19th-century original, I fixed the formatting, cleaned up the transcription, and whatnot. There may still be some errors, but it’s better than anything else I could find.

The update also adds the third lecture to James’ Pragmatism.

The buzz, the bees, and belief

I’ve found that evil usually triumphs unless good is very, very careful.

Leonard McCoy

I posted yesterday about what I called the Positive Buzz fallacy:

  1. Activity z is the best way to accomplish goal y.
  2. Therefore, activity z is the best way to accomplish goals.

I realized today that it is closely related to a fallacy that people often commit in misunderstanding natural selection: An organism is fittest in a given environment, and the fallacifier infers that it’s simply best.


Why evidentialism is tantamount to scepticism

Over on Facebook, Carl Sachs offers a send-up of evidentialist reasoning. In the comments, I boil the argument down to this:

  1. Only believe on the basis of univocally sufficient evidence.
  2. Evidence is never univocally sufficient.
  3. Therefore, don’t believe!

It’s valid, but are the premises true?


A further comment about payoffs in will to believe cases

It occurs to me that there is a mistake in my previous post, but it can be patched up.

To review: Considerations of inductive or ampliative risk can make the difference between its being appropriate to believe something and its being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.

Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false.
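To make this concrete, here is a minimal decision-theoretic sketch. The symbols are my own labels rather than anything official, and this is only one way of cashing out the idea: write b for the benefit of believing P if it is true, c for the cost of believing P if it is false, d for the cost of not believing P if it is true, e for the benefit of not believing P if it is false, and p for the probability that P is true. Then expected value favours believing exactly when

\[
p\,b - (1-p)\,c \;\ge\; -p\,d + (1-p)\,e
\quad\Longleftrightarrow\quad
p \;\ge\; \frac{c+e}{b+c+d+e}.
\]

On this toy model, raising the cost of believing a falsehood (c) raises the evidential threshold for belief, while raising the cost of missing a truth (d) lowers it. That is one precise sense in which the stakes, and not just the evidence, bear on whether belief is appropriate.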


Payoffs in will to believe cases

In thinking about James’ “The Will to Believe” (in a blog post and a draft paper), I distinguish two kinds of cases.

In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved. I’ve typically put this in terms of conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. Heather Douglas calls this values playing an indirect role.

Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
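For a toy illustration of that last point, using the same made-up labels as in the sketch above and purely hypothetical numbers: hold the valences fixed, with the benefit of believing a truth and the benefit of not believing a falsehood each set to b = e = 1 and the cost of not believing a truth set to d = 1, and vary only the cost c of believing a falsehood. The threshold probability for belief is then

\[
p^{*} = \frac{c+e}{b+c+d+e}, \qquad
c = 1 \;\Rightarrow\; p^{*} = 0.5, \qquad
c = 7 \;\Rightarrow\; p^{*} = 0.8.
\]

The signs of the four outcomes never change; only the size of one of them does. Yet the evidence required before believing is appropriate rises from even odds to four-to-one.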
