I’ve found that evil usually triumphs unless good is very, very careful.
I posted yesterday about what I called the Positive Buzz fallacy:
Activity z is the best way to accomplish goal y.
Therefore, activity z is the best way to accomplish goals.
I realized today that it is closely related to a fallacy that people often commit in misunderstanding natural selection: an organism is fittest in a given environment, and the fallacifier infers that it is simply the best.
It occurs to me that there is a mistake in my previous post, but it can be patched up.
To review: Considerations of inductive or ampliative risk can make the difference between it being appropriate to believe something and it being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.
In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved. Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. Heather Douglas calls this values playing an indirect role.
Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
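The conditional-values idea can be put as a simple expected-value comparison. Here is a minimal sketch with made-up numbers (the function names and figures are mine, purely for illustration): the same middling evidence can license belief when stakes are low but not when the cost of a false belief is severe.

```python
def expected_value_of_believing(p, benefit_true, cost_false):
    """Expected value of believing P, where p is the probability
    (on the evidence) that P is true."""
    return p * benefit_true - (1 - p) * cost_false

def expected_value_of_withholding(p, cost_true, benefit_false):
    """Expected value of not believing P."""
    return -p * cost_true + (1 - p) * benefit_false

p = 0.7  # middling evidence in favor of P

# Low stakes: errors in either direction are equally mild,
# so believing comes out ahead (0.4 vs -0.4).
believe_low = expected_value_of_believing(p, benefit_true=1, cost_false=1)
withhold_low = expected_value_of_withholding(p, cost_true=1, benefit_false=1)

# High stakes: a false belief is ten times as costly, so the
# very same evidence no longer licenses belief (-2.3 vs 2.3).
believe_high = expected_value_of_believing(p, benefit_true=1, cost_false=10)
withhold_high = expected_value_of_withholding(p, cost_true=1, benefit_false=10)

print(believe_low > withhold_low)    # belief wins at low stakes
print(believe_high > withhold_high)  # belief loses at high stakes
```

Note that the general valence is fixed throughout: true belief is a benefit, false belief a cost. Only the intensities change between the two cases, which is what shifts the verdict.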
Abstract: Scott Aikin and Robert Talisse have recently argued strenuously against James’ permissivism about belief. They are wrong, both about cases and about the general issue. In addition to the usual examples, the paper considers the importance of permissiveness in scientific discovery. The discussion highlights two different strands of James’ argument: one driven by doxastic efficacy and another driven by inductive risk. Although either strand is sufficient to show that it is sometimes permissible to believe in the absence of sufficient evidence, the two considerations have different scope and force.
Having just taught a seminar on pragmatism and reading a recent book review by Alex Klein, I realized that there’s a reading of William James’ “Will to Believe” which isn’t universally recognized even though it seems obvious to me.
Today was the last class meeting of my pragmatism seminar. I had the students each make a presentation on their seminar papers in progress. Some students were further along in their thinking than others, but it gave everyone a chance to try out arguments and to exchange ideas.
The course syllabus covered more than 20 authors, but student interest was fairly focused.
Two students are writing on issues of truth and objectivity.
Three students are writing on Jane Addams and ethical method.
Two students are writing on issues of ethical method and objectivity, one with an eye toward Nelson Goodman and the other by way of C.I. Lewis.
All the projects sound interesting, but I wouldn’t have expected this distribution.
Because this was me, the readings were weighted more toward philosophy of science and epistemology than toward ethics and value theory. But nobody is writing about philosophy of science. 😒
Jane Addams was a late addition to the syllabus, and I wasn’t sure how it was going to work out until the class meeting on her work. She was a hit. 😃
Now it’s just grading and administrative work between me and the end of the semester. 😰
I’ve been blogging recently about whether “pragmatism” is a sufficiently precise term to be one which we ought to use, apart from its being historically entrenched. In the course of reading Dewey again, I’m thinking about another aspect of the pragmatist tradition.
James says that pragmatism is, in one sense, a method. It’s typically expressed by the pragmatic maxim that discovering the meaning of a concept is best done by tracing out its practical consequences.
In the previous post, I suggested that there might be no unified “pragmatism”. By this I meant that we wouldn’t (as a matter of philosophical method) want to invent the term if it weren’t (as a matter of the history of philosophy) already entrenched and an actors’ category. I’m not sure if I want to take that back, but I do want to talk about something in the neighborhood of “pragmatism” that probably deserves a name.