On giving up many small things

Last year I attended the annual Values in Medicine, Science, and Technology Conference hosted in Dallas and organized by Matt Brown.1

I got great feedback on my presentation, which ultimately grew into a paper. I hung out with old friends and made new ones.

So I submitted an abstract again this year. Today I received an e-mail indicating that my paper was accepted, along with an e-mail saying that the conference was canceled. The cancelation was inevitable, of course, but Matt had delayed officially canceling the conference until verdicts had been reached. This way, would-be presenters can list the acceptance on their CVs. It’s a classy move: I don’t need the line on my CV, but students and junior scholars might.2

My missing the conference this year is not a terrible imposition, really, since I missed it for eight years before attending at all. It is a small sacrifice, in the grand scheme of things, but these accumulate like raindrops on the tin roof that is my inability to land a metaphor.

Papers on the JRD thesis

In several papers, I’ve made use of what I call the James-Rudner-Douglas (JRD) thesis: “Anytime a scientist announces a judgement of fact, they are making a tradeoff between the risk of different kinds of error. This balancing act depends on the costs of each kind of error, so scientific judgement involves assessments of the value of different outcomes.”

I have a paper forthcoming in Episteme which explores this theme as well as other issues in William James’ “The Will to Believe.” I just sent off my final draft and brought the version on my website up to date.

I’ve given a couple of talks in which I mull over possible counterexamples to the thesis. I recently wrote that up as a paper, and I’ve now posted a draft. Comments are welcome.

Risky business

My paper with Dan Hicks and Jessey Wright, Inductive Risk, Science, and Values: a reply to MacGillivray, has been accepted at the journal Risk Analysis. It went from social media musing to accepted publication in just a few months.

Back in July, Dan wrote a tweet that concluded “Anyone want to write a little response with me?” Jessey and I replied that we’d be game for it. E-mails followed. We each wrote a snippet of prose. The snippets got worked together into one document, and that document went through a bunch of revisions. We used a Google Doc, which highlighted changes and allowed us to make comments back and forth in the document itself. Other than a few e-mails, that’s how we interacted. No real-time conversations, even via Skype.

I still use LaTeX for my own writing, but the collaborative workflow of the Google Doc worked really well for this project.

Why evidentialism is tantamount to scepticism

Over on Facebook, Carl Sachs offers a send-up of evidentialist reasoning. In the comments, I boil the argument down to this:

  1. Only believe on the basis of univocally sufficient evidence.
  2. Evidence is never univocally sufficient.
  3. Therefore, don’t believe!

It’s valid, but are the premises true?

What’s so funny about credence, love, and understanding?

I can’t tell if Suki Finn’s Beyond Reason: The Mathematical Equation for Unconditional Love is meant to be taken seriously or not. Irony on the internet is usually indistinguishable from earnestness. The fact that there is an addendum with a mathematical proof may indicate that it’s serious, but maybe it’s a droll bit of farce?1

I read it with interest, in any case. Finn offers an analysis of conditional and unconditional love that is modeled on conditional and unconditional credence. As I’ve discussed in some recent posts, I think that recognizing the difference between conditional and unconditional value is crucial for understanding the relation between values and belief.2
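
The formal notion being borrowed is just the standard ratio analysis of conditional credence; the notation below is the usual one, introduced here for illustration rather than taken from Finn’s piece:

```latex
% Unconditional credence in A is simply Cr(A).
% Conditional credence in A given B is defined whenever Cr(B) > 0:
\[
  \mathrm{Cr}(A \mid B) \;=\; \frac{\mathrm{Cr}(A \wedge B)}{\mathrm{Cr}(B)}
\]
```

The analogy, as I read it, is that unconditional love is meant to stand to conditional love roughly as Cr(A) stands to Cr(A|B).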

On science and values, accepted and forthcoming

My paper Science, Values, and the Priority of Evidence has been accepted at Logos & Episteme. I worked over the manuscript to meet their style guidelines, sent it off, and put the last draft on my website. Since it’s an OA journal, in the gratis and author-doesn’t-pay sense, I will swap in the published version when it appears.

Now that the paper is actually forthcoming, it can be cited rather than having the ideas from it attributed to me by second-hand personal communication.

Oblique citation and direct rejection

In his PhD thesis, Stijn Conix briefly considers the suggestion “that it does not make sense to think of values and epistemic standards as taking priority over each other.”1 In a footnote, he cites Matthew Brown “who refers to Magnus making a similar remark in personal communication.”

That’s cool, because I have made such a remark. I have a draft paper in which I defend it.

Frustratingly, today I got another rejection notice for that paper. I’ll take a day to cool off before looking at the referee comments again, and then I’ll decide on my next move. The most effective strategy for disseminating ideas might be to just talk to Matt Brown more often. Alas, that’s hard to document on my CV.

A further comment about payoffs in will to believe cases

It occurs to me that there is a mistake in my previous post, but it can be patched up.

To review: considerations of inductive or ampliative risk can make the difference between its being appropriate to believe something and its being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.

Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false.
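
To make the schema explicit, here is a minimal decision-theoretic sketch. The utility symbols and the credence variable are notation I am introducing for illustration, not anything from the post itself: believe P just in case the expected payoff of believing is at least that of not believing.

```latex
% Notation (introduced here for illustration only):
%   u(B,P)   benefit of believing P when P is true
%   u(B,~P)  payoff of believing P when P is false (a cost)
%   u(N,P)   payoff of not believing P when P is true (a cost)
%   u(N,~P)  benefit of not believing P when P is false
%   c        the credence that the evidence warrants in P
\[
  c\,u(B,P) + (1-c)\,u(B,\neg P) \;\ge\; c\,u(N,P) + (1-c)\,u(N,\neg P)
\]
% Rearranging, on the assumption that accuracy is preferred whether P is
% true or false, yields a threshold the evidence must clear before belief
% is appropriate:
\[
  c \;\ge\; \frac{u(N,\neg P) - u(B,\neg P)}
                 {\left(u(B,P) - u(N,P)\right) + \left(u(N,\neg P) - u(B,\neg P)\right)}
\]
```

On this rendering, raising the cost of believing P when it is false pushes the threshold up, which is just the sense in which high stakes demand more evidence.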

Payoffs in will to believe cases

In thinking about James’ Will to Believe (in a blog post and a draft paper) I distinguish two kinds of cases.

In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved. I’ve typically put this in terms of conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. Heather Douglas calls this values playing an indirect role.

Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
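
A toy payoff matrix makes the point vivid; the letters are placeholders I am introducing here, not anything from the post. The signs are fixed by the general valence of the outcomes, while the magnitudes are what vary with the stakes:

```latex
% Rows: whether one believes P.  Columns: whether P is true.
% Signs are fixed (accuracy good, inaccuracy bad); the magnitudes
% a, b, d, e are left open, since intensity is what is at issue.
\[
  \begin{array}{l|cc}
                          & P\ \mbox{true} & P\ \mbox{false} \\ \hline
    \mbox{believe } P     & +a             & -b              \\
    \mbox{not believe } P & -d             & +e
  \end{array}
  \qquad a, b, d, e > 0
\]
```

Inductive-risk considerations adjust how large b is relative to a, d, and e, not whether b counts as a cost in the first place.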

Having been written, more like is to believe

The paper which began as a blog post now exists as a draft.

Risk and Efficacy in ‘The Will to Believe’

Abstract: Scott Aikin and Robert Talisse have recently argued strenuously against James’ permissivism about belief. They are wrong, both about cases and about the general issue. In addition to the usual examples, the paper considers the importance of permissiveness in scientific discovery. The discussion highlights two different strands of James’ argument: one driven by doxastic efficacy and another driven by inductive risk. Although either strand is sufficient to show that it is sometimes permissible to believe in the absence of sufficient evidence, the two considerations have different scope and force.

Portrait of William James by John La Farge, circa 1859. via Wikimedia.