Norms of science in present times

Last week I taught an essay by sociologist Robert Merton which I first read almost 30 years ago. Originally published in 1942, the essay is about the institutional norms of science in the context of broader society. Merton identifies several norms of science and suggests that they fit with the norms of democracy.1

Although I have taught it something like 20 times before, reading it through this semester, the opening paragraph struck me with a currency it has never had before.

Continue reading “Norms of science in present times”

How science informs philosophy

At the Blog of the APA, Nina Emery discusses the relation between philosophy and science. I want to discuss what she calls

Content Naturalism. Philosophers ought not put forward theories that conflict with the content of our best scientific theories.

This is close to a kind of philosophical conservatism according to which “philosophy cannot credibly challenge… the established theses of the natural sciences…”2

In that stark form, there are at least two problems with it.

Continue reading “How science informs philosophy”

Updated drafts with ten different coauthors

Updated drafts posted in the last few weeks:

* Who’s sorry now: User preferences among Rote, Empathic, and Explanatory apologies from LLM chatbots, with Zahra Ashktorab, Alessandra Buccella, Jason D’Cruz, Zoë Fowler, Andrew Gill, Kei Leung, John Richards, and Kush R. Varshney3

* Chatbot apologies: Beyond bullshit, with Alessandra Buccella and Jason D’Cruz

* Music genres as historical individuals, with Emmie Malone and Brandon Polite

Higher-order bullshit

Three snapshot applications of AI:

During early phases of the war in Gaza, the Israeli military used software to select bombing targets on a scale that would not have been possible for human analysts.4

During the Trump administration’s initial attack on the federal government, there was lots of nonsense about how Elon Musk and DOGE were using software to identify waste. House Speaker Mike Johnson commented that Musk has “created these algorithms that are constantly crawling through the data, and… the data doesn’t lie.”5

Health Secretary Robert F. Kennedy Jr. commissioned a report with the ridiculous title “Make America Healthy Again.”6 It turns out that many of the citations in the report are erroneous, including references to articles which simply do not exist. Incorrectly citing things and misrepresenting results is plausibly human malfeasance or incompetence, but wholly inventing sources suggests chatbot hallucination.

The wrongs committed here are morally different, and I don’t want to suggest a false equivalence. But each of these cases provides a specimen of how AI has been used to further dangerous agendas. Yet the reliance on AI is really just a sideshow.

Continue reading “Higher-order bullshit”

Fun with fallacies

A couple of months ago, I made note of the neosemantic fallacy: “the magic of neologisms, which encourage [one] to infer that a new word refers to a new kind of thing.”7

I realized yesterday that it is just a flavor of the fallacy of reification. J.S. Mill characterized this as the tendency “to believe that whatever received a name must be an entity or thing, having an independent existence of its own.”

The fact that giving reification a new name made me think of it as a distinct fallacy means that I committed the neosemantic fallacy in my earlier post. So, although it’s not the autological fallacy, it is an autological fallacy.8

“On trusting chatbots” is live

My paper On Trusting Chatbots is now published at Episteme. It is in the penumbral zone of publication, with a version of record and a DOI but without appearing yet in an issue.

Publishing things online is a good thing. Waiting for space in a print issue is a holdover from the 20th century. But it creates the awkward situation where the paper will be cited now as Magnus 2025 but, if it doesn’t get into an issue this year, cited in the future as Magnus 202x (for some x≠5).

If we care about careful and accurate citation, there’s got to be a better way.

Doctor gpt

At Daily Nous, there’s discussion of Rebecca Lowe’s post about how great it is to talk philosophy with the latest version of ChatGPT.

There’s pushback in the comments. Defenders reply that the critics haven’t used the latest version (which is only available behind a paywall). Discussion of LLMs will always allow this move: complaints about their shortcomings are answered by pointing to the next version that’s supposed to resolve all the issues.

Lowe and other commenters reveal that lots of philosophers are using LLMs in the regular day-to-day of their research. I’m still trying to figure out what I think about that. For now, let’s deflect Lowe’s ridiculous offhand claim that “Gpt could easily get a PhD on any philosophical topic.” I say ridiculous for a few reasons—

Continue reading “Doctor gpt”