1. Publishers usually set the prices of philosophy books so as to exploit the market, rather than so as to maximize readership. I don’t hate my publisher especially, but putting ideas in books often means sequestering them where they won’t be read.
2. Most philosophy is best done in journal articles, for reasons of both style and dissemination. Philosophy is no longer a discipline that requires a book for tenure. So the obvious response to 1 is just not to write books.
Nevertheless, there are still some projects that make sense as books rather than as articles. So what’s one to do?
3. For a textbook, I can offer it as an Open Education Resource. If it meets a need, other people will use it. And it can be acknowledged as legit after the fact.
4. For a monograph, I can share an unformatted draft in the same way I do for articles. This kind of self-archiving (Green OA) should be more common than it is, but that’s a rant for another post.
The thing I’m puzzling about is what alternatives there are for the published book itself.
5. This post felt like it should be a list of numbered points, even though it looks pretentious now that I’ve typed it out.
In his PhD thesis, Stijn Conix briefly considers the suggestion “that it does not make sense to think of values and epistemic standards as taking priority over each other.”1 In a footnote, he cites Matthew Brown “who refers to Magnus making a similar remark in personal communication.”
That’s cool, because I have made such a remark. I have a draft paper in which I defend it.
Frustratingly, today I got another rejection notice for that paper. I’ll take a day to cool off before looking at the referee comments again, and then I’ll decide on my next move. The most effective strategy for disseminating ideas might be to just talk to Matt Brown more often. Alas, that’s hard to document on my CV.
I haven’t read it all yet, but I enjoyed SooJin Lee’s piece on MoMA’s exhibition of the original emoji. Lee argues on institutional grounds that the exhibition is sufficient to make the original emoji count as art.
Several years ago, my colleague Jason D’Cruz and I hit on the idea of writing something about Goodman’s autographic/allographic distinction. In the course of our discussions, he introduced me to Sol LeWitt’s wall drawings. I went down a rabbit hole of reading about them. I saw the exhibition at MASS MoCA. I devised a wall drawing of my own.
The referee commented that this note could have appeared in a longer paper about conceptualism and the nature of art. It could have, perhaps, except that waiting on that longer paper to write itself would probably mean never publishing this bit.
At my old blog, I used to whinge every couple of years about whether my papers were getting longer or shorter as I got older. The gist was that there was a shallow upward trend. It’s been about five years. I have a short note forthcoming, so I’ve thought about it again. Here’s the updated scatter plot.
It occurs to me that there is a mistake in my previous post, but it can be patched up.
To review: Considerations of inductive or ampliative risk can make the difference between its being appropriate to believe something and its being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.
Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false. In cases of ampliative risk, the evidence does not overwhelmingly speak for or against, so the determination to believe or not depends in part on the stakes involved. Heather Douglas calls this values playing an indirect role.
Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
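The schematic point can be made explicit with a bit of decision theory. This is a sketch of my own, not something from the original post, and the symbols (p for one’s credence in P, and the four conditional values as positive magnitudes) are labels I’m introducing for illustration:

```latex
% Let p be one's credence in P, and write
%   B_t = benefit of believing P if it is true
%   C_f = cost of believing P if it is false
%   C_t = cost of not believing P if it is true
%   B_f = benefit of not believing P if it is false
% Believing P is appropriate when its expected value is at least
% that of not believing:
\[
  p\,B_t - (1-p)\,C_f \;\geq\; -\,p\,C_t + (1-p)\,B_f
\]
% Rearranging gives a stakes-sensitive threshold on the evidence:
\[
  p \;\geq\; \frac{C_f + B_f}{\,B_t + C_t + C_f + B_f\,}
\]
```

On this sketch, raising the cost of false belief (C_f) pushes the threshold up, which is just the point about high stakes demanding more evidence; the general valence of each outcome is fixed, and only the intensities move the threshold around.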
Abstract: Scott Aikin and Robert Talisse have recently argued strenuously against James’ permissivism about belief. They are wrong, both about cases and about the general issue. In addition to the usual examples, the paper considers the importance of permissiveness in scientific discovery. The discussion highlights two different strands of James’ argument: one driven by doxastic efficacy and another driven by inductive risk. Although either strand is sufficient to show that it is sometimes permissible to believe in the absence of sufficient evidence, the two considerations have different scope and force.
I’ve been thinking lately about dissertations. The traditional model is for a PhD student to write a book-length exploration of a topic. A newer model is for the student to write several publishable papers on related topics. I’ve heard the former called the monograph dissertation, which naturally makes the latter a polygraph dissertation.1
I have met some philosophers who are hostile to the polygraph dissertation, but not for any clear reasons. I’ve met others who welcome the new model. As someone advising graduate students, I would like to have a better sense of what the disciplinary norms are.2