On writing and thinking

My forthcoming paper On trusting chatbots is centrally about the challenge of believing claims that appear in LLM output. I am sceptical about the prospects of AI-generated summaries of facts, but I also throw a bit of shade on the suggestion that AI should be used for brainstorming and conjuring up early drafts. Sifting through bullshit is not like editing in the usual sense, I suggest.

Nevertheless, I know people who advocate using chatbots for early drafts of formulaic things like work e-mails and formal proposals. That’s fine, I suppose, but only for the sorts of things where one might just as well find some boilerplate example on-line and use that as a starting point. For anything more original, there’s a real danger in letting a chatbot guide early writing.

Continue reading “On writing and thinking”


Via Daily Beast and Daily Nous: The administration at Boston University has made a number of tone-deaf suggestions for how faculty can manage their courses while their graduate student TAs are on strike. Among these: “Engage generative AI tools to give feedback or facilitate ‘discussion’ on readings or assignments.”

Last year, I wrote that “there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.”

Dots. Connected.

Imagination, philosophy, and imitation games

Via Daily Nous, I came across a blog post by Justin Smith-Ruiu about creative writing as philosophy. The post is, ultimately, an argument that philosophy can be “incitement of the imagination, by creative means, to see the world in unfamiliar ways.” I agree with that! But there are digressions along the way that range from false speculation to attacks on the kind of philosophy that I (sometimes) do.

Continue reading “Imagination, philosophy, and imitation games”

Engines of enshittification

Via Ars Technica, I’ve learned that shady Amazon sellers have been using chatbots to automatically write item descriptions. The result is hot offers on items like “I cannot fulfill that request” and “I apologize but I cannot complete this task.” This is a natural progression from Amazon product listings that were simply misdescribed by humans.

Continue reading “Engines of enshittification”


Lincoln!

My paper Generative AI and photographic transparency now has a DOI and is on-line, occupying that liminal space of published but not quite which is characteristic of contemporary scholarship. The publisher has given me a link to the published version, but it won’t let you download or print it. (As always, you can grab the preprint from my website.)

Continue reading “Lincoln!”

Hot takes on new things

Like pretty much everybody else, I’ve been thinking about chatbots and generative AI. Unlike other things I write about, like scurvy, this is a hot topic. It’s hard to keep up using my usual strategy of rambling here on the blog, ruminating, and letting ideas simmer. Nevertheless, there are these two papers:

It took me years to write it

Fifteen years ago, I conducted a small study testing the error-correction tendency of Wikipedia. Not only is Wikipedia different now than it was then, the community that maintains it is different. Despite the crudity of that study’s methods, it is natural to wonder what the result would be now. So I repeated the earlier study and found surprisingly similar results.

That’s the abstract for a short paper of mine that was published today at First Monday. It is a follow-up to my earlier work on the epistemology of Wikipedia.

Continue reading “It took me years to write it”

LLMs have the wrong ontology for scholarship

I have read suggestions that LLMs might help with the routine and tedious parts of writing, like a literature review. This is undermined by their failure to distinguish the literature (which is to be reviewed) from discourse in general.

Continue reading “LLMs have the wrong ontology for scholarship”