On science and values, accepted and forthcoming

My paper Science, Values, and the Priority of Evidence has been accepted at Logos & Episteme. I worked over the manuscript to meet their style guidelines, sent it off, and put the last draft on my website. Since it’s an OA journal, in the gratis and author-doesn’t-pay sense, I will swap in the published version when it appears.

Now that the paper is actually forthcoming, it can be cited rather than having its ideas attributed to me second-hand, via personal communication.


Oblique citation and direct rejection

In his PhD thesis, Stijn Conix briefly considers the suggestion “that it does not make sense to think of values and epistemic standards as taking priority over each other.”1 In a footnote, he cites Matthew Brown “who refers to Magnus making a similar remark in personal communication.”

That’s cool, because I have made such a remark. I have a draft paper in which I defend it.

Frustratingly, today I got another rejection notice for that paper. I’ll take a day to cool off before looking at the referee comments again, and then I’ll decide on my next move. The most effective strategy for disseminating ideas might be to just talk to Matt Brown more often. Alas, that’s hard to document on my CV.


A further comment about payoffs in will to believe cases

It occurs to me that there is a mistake in my previous post, but it can be patched up.

To review: Considerations of inductive or ampliative risk can make the difference between its being appropriate and its being inappropriate to believe something. If the stakes are high, then you might demand more evidence than if the stakes are low.

Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false.
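The schema above can be put in expected-value terms. What follows is only an illustrative sketch, not anything from the post itself: the function name, payoff structure, and numbers are all hypothetical, chosen to show how raising the cost of believing falsely raises the evidence demanded before belief is appropriate.

```python
def believe_is_better(p_true, benefit_believe_true, cost_believe_false,
                      cost_withhold_true, benefit_withhold_false):
    """Return True if believing P has a higher expected value than
    withholding belief, where p_true is the probability that P is true
    given the evidence. The four payoff arguments are the conditional
    values from the post: benefits count positively, costs negatively."""
    ev_believe = (p_true * benefit_believe_true
                  - (1 - p_true) * cost_believe_false)
    ev_withhold = (-p_true * cost_withhold_true
                   + (1 - p_true) * benefit_withhold_false)
    return ev_believe > ev_withhold

# Low stakes: symmetric, modest payoffs; moderate evidence suffices.
print(believe_is_better(0.7, 1, 1, 1, 1))   # True

# High stakes: believing falsely is now very costly, so the same
# evidence no longer licenses belief.
print(believe_is_better(0.7, 1, 10, 1, 1))  # False
```

The point of the toy model is just that the evidence (here, `p_true`) stays fixed across the two calls; only the stakes change, and that alone flips whether belief comes out appropriate.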


Payoffs in will to believe cases

In thinking about James’ Will to Believe (in a blog post and a draft paper) I distinguish two kinds of cases.

In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved. I’ve typically put this in terms of conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. Heather Douglas calls this values playing an indirect role.

Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.


Having been written, more like is to believe

The paper which began as a blog post now exists as a draft.

Risk and Efficacy in ‘The Will to Believe’

Abstract: Scott Aikin and Robert Talisse have recently argued strenuously against James’ permissivism about belief. They are wrong, both about cases and about the general issue. In addition to the usual examples, the paper considers the importance of permissiveness in scientific discovery. The discussion highlights two different strands of James’ argument: one driven by doxastic efficacy and another driven by inductive risk. Although either strand is sufficient to show that it is sometimes permissible to believe in the absence of sufficient evidence, the two considerations have different scope and force.

Portrait of William James by John La Farge, circa 1859. Via Wikimedia.

The value in the algorithm

The LA Times has an interesting interview with self-described “data skeptic” Cathy O’Neil, the author of Weapons of Math Destruction. Although the Times puts her skepticism in terms of big data, her concerns are really about values in science. Algorithms, she suggests, have a veneer of objectivity but always reflect choices and valuations. When the algorithms are secret, then the values incorporated in them aren’t open to scrutiny. She says:

I want to separate the moral conversations from the implementation of the data model that formalizes those decisions. I want to see algorithms as formal versions of conversations that have already taken place.

She also makes a point about how polling isn’t just objectively reporting on the state of the electorate, something I would probably have mused about if I’d written the post about the election that I never quite wrote:

[P]olitical polls are actually weapons of math destruction. They’re very influential; people spend enormous amounts of time on them. They’re relatively opaque. But most importantly, they’re destructive in various ways. In particular, they actually affect people’s voting patterns. … Polls can change people’s actual behavior, which disrupts democracy in a direct way.

I’ve ordered a copy of her book, and when it arrives I will put it on top of the stack of books I regret not having read.

Why values and science?

There are a number of different connections between values and science. These sometimes get lumped together in the values-and-science literature. Even when they are distinguished, it isn’t always noted that each connection (1) applies to somewhat different values and (2) applies to somewhat different aspects or parts of science.

In a preliminary attempt to sort some of this out, I distinguish five different ways in which values and science are connected.


The scope and force of epistemic risk

By coincidence, my seminar on science and values covered Rudner’s Argument from Inductive Risk on the same day that Matt Brown posted an exchange about the Argument with Joyce Havstad. It’s taken me a couple of days to collect my thoughts.
