Last year I attended the annual Values in Medicine, Science, and Technology Conference hosted in Dallas and organized by Matt Brown.1
I got great feedback on my presentation, which ultimately grew into a paper. I hung out with old friends and made new ones.
So I submitted an abstract again this year. Today I received an e-mail indicating that my paper was accepted, along with an e-mail saying that the conference was canceled. The cancellation was inevitable, of course, but Matt had delayed officially canceling the conference until verdicts had been reached. This way, would-be presenters can list the acceptance on their CVs. It's a classy move. I don't need the line on my CV, but students and junior scholars might.2
My missing the conference this year is not a terrible imposition, really, since I missed it for eight years before attending at all. It is a small sacrifice in the grand scheme of things, but these sacrifices accumulate like raindrops on the tin roof that is my inability to land a metaphor.
In several papers, I’ve made use of what I call the James-Rudner-Douglas (JRD) thesis: “Anytime a scientist announces a judgement of fact, they are making a tradeoff between the risk of different kinds of error. This balancing act depends on the costs of each kind of error, so scientific judgement involves assessments of the value of different outcomes.”
I have a paper forthcoming in Episteme which explores this theme as well as other issues in William James’ “The Will to Believe.” I just sent off my final draft and brought the version on my website up to date.
I’ve given a couple of talks in which I mull over possible counterexamples to the thesis. I recently wrote that material up as a paper, and I’ve now posted a draft. Comments are welcome.
Back in July, Dan wrote a tweet that concluded “Anyone want to write a little response with me?” Jessey and I replied that we’d be game for it. E-mails followed. We each wrote a snippet of prose. The snippets were worked together into one document, and that document went through a bunch of revisions. We used a Google Doc, which highlighted changes and allowed us to make comments back and forth in the document itself. Other than a few e-mails, that’s how we interacted. No real-time conversations, even via Skype.
I still use LaTeX for my own writing, but the collaborative workflow of the Google Doc worked really well for this project.
I can’t tell if Suki Finn’s Beyond Reason: The Mathematical Equation for Unconditional Love is meant to be taken seriously or not. Irony on the internet is usually indistinguishable from earnestness. The fact that there is an addendum with a mathematical proof may indicate that it’s serious, but maybe it’s a droll bit of farce?1
I read it with interest, in any case. Finn offers an analysis of conditional and unconditional love that is modeled on conditional and unconditional credence. As I’ve discussed in some recent posts, I think that recognizing the difference between conditional and unconditional value is crucial for understanding the relation between values and belief.2
My paper Science, Values, and the Priority of Evidence has been accepted at Logos & Episteme. I worked over the manuscript to meet their style guidelines, sent it off, and put the last draft on my website. Since it’s an OA journal, in the gratis and author-doesn’t-pay sense, I will swap in the published version when it appears.
In his PhD thesis, Stijn Conix briefly considers the suggestion “that it does not make sense to think of values and epistemic standards as taking priority over each other.”1 In a footnote, he cites Matthew Brown “who refers to Magnus making a similar remark in personal communication.”
That’s cool, because I have made such a remark. I have a draft paper in which I defend it.
Frustratingly, today I got another rejection notice for that paper. I’ll take a day to cool off before looking at the referee comments again, and then I’ll decide on my next move. The most effective strategy for disseminating ideas might be to just talk to Matt Brown more often. Alas, that’s hard to document on my CV.
It occurs to me that there is a mistake in my previous post, but it can be patched up.
To review: Considerations of inductive or ampliative risk can make the difference between its being appropriate to believe something and its being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.
Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false.
In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes, which are captured by those four conditional values. Heather Douglas calls this values playing an indirect role.
Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
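The decision-theoretic picture behind this review can be made concrete with a small calculation. The sketch below is my own illustration, not anything from the post itself, and the function name and example numbers are hypothetical. Given the four conditional values, believing P beats withholding exactly when your credence in P exceeds a threshold fixed by those values, so raising the cost of false belief raises the evidence you should demand.

```python
def belief_threshold(benefit_true, cost_false, cost_miss, benefit_skip):
    """Minimum credence at which believing P beats not believing.

    At credence p, the expected value of believing is
        p * benefit_true - (1 - p) * cost_false
    and the expected value of not believing is
        -p * cost_miss + (1 - p) * benefit_skip.
    Setting these equal and solving for p gives the threshold below.
    All four arguments are positive magnitudes (costs entered as costs).
    """
    return (cost_false + benefit_skip) / (
        benefit_true + cost_miss + cost_false + benefit_skip
    )

# Symmetric, low-stakes case: believe once P is more likely than not.
print(belief_threshold(1, 1, 1, 1))   # 0.5

# False belief is ten times as costly: demand much stronger evidence.
print(belief_threshold(1, 10, 1, 1))  # ~0.846
```

The general valence of the outcomes stays fixed in both cases; only the intensities change, and with them the threshold. That is one way to read the point that values play an indirect role: they move the bar for belief without counting as evidence for or against P.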
Abstract: Scott Aikin and Robert Talisse have recently argued strenuously against James’ permissivism about belief. They are wrong, both about cases and about the general issue. In addition to the usual examples, the paper considers the importance of permissiveness in scientific discovery. The discussion highlights two different strands of James’ argument: one driven by doxastic efficacy and another driven by inductive risk. Although either strand is sufficient to show that it is sometimes permissible to believe in the absence of sufficient evidence, the two considerations have different scope and force.