The Blue Room. Seen here in one of its early incarnations, twenty years ago when it was still at the old cybercity domain before moving to io and then later to pair, where tonight I’ve laid it to rest. Farewell old friend. We used to be something.
It resonated with me when John Holbo wrote recently: “Remember when there were blogs? Ah, those were the good old days.”
It gestured back to a day when the internet was built mostly by individuals putting together things they cared about and sharing them from a server somewhere in the world. The internet was a magic maze that let everybody else wander around and marvel at the wonders.
Over on Facebook, Matt Brown linked to my previous post and some interesting discussion ensued.1 In one sub-thread, Matt makes some distinctions between different types of OA. He mentions one I hadn’t seen before, Copper OA, coined by Egon Willighagen and defined this way:
1. the author(s) remain copyright owners,
2. the work is made available under an Open license to all users a free, irrevocable, worldwide, right of access to and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works in any digital medium for any purpose, subject to proper attribution of authorship, as well as any further rights we associate with Open as outlined by, for example, the Debian Free Software Guidelines.
This is kind of a mess, resulting in part from the collision between the push for open access in academia and the older open source software movement. When I wrote forall x, back in 2005, most people could only understand it on analogy with open source software. Now, more people know about Creative Commons licenses. And CC licenses are just a better framework for licensing text than free software licenses are.
It’s important to note that there are at least two dimensions of ‘open’ which are getting conflated here.
There are lots of times that I find a reference or a link to a paper that looks like it could have something to do with a topic I’m researching. If there is a readily available version of the paper, then I read it. If it is in a closed-access journal, then I may check to see if I have access through my university library. Especially for recent or online-first papers, the answer is often no.
At this point, I could request a copy by interlibrary loan or e-mail the author to ask for a copy. Sometimes I do these things, but only sometimes. There isn’t time to chase down copies of every possibly-relevant paper. So there are papers I never read that would be useful if I did look at them.
I used to feel guilty about this, but I’ve decided that I’m over it.
1. Publishers usually set the prices of philosophy books so as to exploit the market, rather than so as to maximize readership. I don’t hate my publisher especially, but putting ideas in books often means sequestering them where they won’t be read.
2. Most philosophy is best done in journal articles, both for reasons of style and dissemination. Philosophy is no longer a discipline that requires a book for tenure. So the obvious response to 1 is just not to write books.
Nevertheless, there are still some projects that make sense as books rather than as articles. So what’s one to do?
3. For a textbook, I can offer it as an Open Educational Resource. If it meets a need, other people will use it. And it can be acknowledged as legit after the fact.
4. For a monograph, I can share an unformatted draft in the same way I do for articles. This kind of self-archiving (Green OA) should be more common than it is, but that’s a rant for another post.
The thing I’m puzzling about is what alternatives there are for the published book itself.
5. This post felt like it should be a list of numbered points, even though it looks pretentious now that I’ve typed it out.
In his PhD thesis, Stijn Conix briefly considers the suggestion “that it does not make sense to think of values and epistemic standards as taking priority over each other.”1 In a footnote, he cites Matthew Brown “who refers to Magnus making a similar remark in personal communication.”
That’s cool, because I have made such a remark. I have a draft paper in which I defend it.
Frustratingly, today I got another rejection notice for that paper. I’ll take a day to cool off before looking at the referee comments again, and then I’ll decide on my next move. The most effective strategy for disseminating ideas might be to just talk to Matt Brown more often. Alas, that’s hard to document on my CV.
I haven’t read it all yet, but I enjoyed SooJin Lee’s piece on MoMA’s exhibition of the original emoji. Lee argues on institutional grounds that the exhibition is sufficient to make the original emoji count as art.
Several years ago, my colleague Jason D’Cruz and I hit on the idea of writing something about Goodman’s autographic/allographic distinction. In the course of our discussions, he introduced me to Sol LeWitt’s wall drawings. I went down a rabbit hole of reading about them. I saw the exhibition at MassMOCA. I devised a wall drawing of my own.
The referee commented that this note could have appeared in a longer paper about conceptualism and the nature of art. It could have, perhaps, except that waiting on that longer paper to write itself would probably mean never publishing this bit.
At my old blog, I used to whinge every couple of years about whether my papers were getting longer or shorter as I got older. The gist was that there was a shallow upward trend. It’s been about five years. I have a short note forthcoming, so I’ve thought about it again. Here’s the updated scatter plot.
It occurs to me that there is a mistake in my previous post, but it can be patched up.
To review: Considerations of inductive or ampliative risk can make the difference between its being appropriate to believe something and its being inappropriate. If the stakes are high, then you might demand more evidence than if the stakes are low.
Schematically, what’s relevant are conditional values: the benefit of believing P if it is true, the cost of believing P if it is false, the cost of not believing P if it is true, and the benefit of not believing P if it is false.
In cases of ampliative risk, the evidence does not overwhelmingly speak for or against. So the determination to believe or not depends in part on the stakes involved, by way of the conditional values just listed. Heather Douglas calls this values playing an indirect role.
Implicit in this is that believing P if it is false is a cost. And so on. Ending up with accurate beliefs is generally good, and ending up with inaccurate beliefs is bad. What’s at issue is not the general valence of certain outcomes but instead their intensity.
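The point about stakes can be made concrete with a toy expected-value comparison. This is only a sketch under my own assumptions, not anything from Douglas or from my earlier post: the function names and payoff numbers are invented for illustration. It just shows how the same evidence can license belief when the stakes are low but not when the cost of a false belief is high.

```python
# Toy sketch of "conditional values" in ampliative risk.
# Costs are given as positive magnitudes and subtracted where they apply;
# all numbers below are illustrative, not from any real case.

def ev_believe(p_true, benefit_if_true, cost_if_false):
    """Expected value of believing P, where P is true with probability p_true."""
    return p_true * benefit_if_true - (1 - p_true) * cost_if_false

def ev_withhold(p_true, cost_if_true, benefit_if_false):
    """Expected value of not believing P."""
    return -(p_true * cost_if_true) + (1 - p_true) * benefit_if_false

def should_believe(p_true, benefit_if_true, cost_if_false,
                   cost_if_true, benefit_if_false):
    """Believe just in case believing has the higher expected value."""
    return (ev_believe(p_true, benefit_if_true, cost_if_false)
            > ev_withhold(p_true, cost_if_true, benefit_if_false))

# Same evidence (p_true = 0.8), different stakes:
print(should_believe(0.8, 1, 1, 1, 1))    # low stakes: believing wins
print(should_believe(0.8, 1, 10, 1, 1))   # costly false belief: it doesn't
```

Note that the sketch respects the point above about valence: a false belief is always entered as a cost and a true belief as a benefit; raising the stakes only changes the intensity of those outcomes, not their sign.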