This robot has read my work

Like much of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Much of what it produces is pretty mundane, because it avoids going too far off the rails. Often it simply refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.1

It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.

I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” and “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”

I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me.2 Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.

The James-Rudner-Douglas thesis is a philosophical principle that suggests that it is impossible to determine the truth or falsity of a statement or theory with complete certainty. This principle is based on the idea that all knowledge is inherently uncertain and that it is impossible to know anything with absolute certainty.

This is true, I guess, but not specific enough to distinguish it from lots of other theses. Then ChatGPT adds, delightfully:

The James-Rudner-Douglas thesis is named after the American philosopher and psychologist William James, the American philosopher Norman Rudner, and the British philosopher C. D. Broad, who each proposed versions of this principle in their work.

It’s Richard Rudner, actually. And the Douglas is Heather Douglas, who has perhaps never before been mistaken for C.D. Broad.

I initially thought that the free association to C.D. Broad must be because the “D.” stood for Douglas. Alas, I just checked: C.D. Broad’s middle name was Dunbar.

These kinds of errors and equivocations crop up quite regularly in its answers, stitched together with clear style and proper grammar. The result is kind of like a search engine that polishes off all the marks of author and origin which you might use to critically evaluate results.

I’ll conclude with this one, which is kind of wonderful:

Wittgenstein Poker is a thought experiment proposed by philosopher Ludwig Wittgenstein in his book “Philosophical Investigations”. In the thought experiment, two people are playing a game of poker, but instead of using cards, they are using words to make their bets. One person makes a bet by saying a word, and the other person must either accept the bet or challenge it. If the bet is challenged, the first person must provide a clear and concise definition of the word they used in their bet. The person with the best definition of the word wins the round.

[Image created by Midjourney AI, prompted with ChatGPT’s description of Wittgenstein Poker: a man and a woman sitting at a table with coins and cards on it, maybe playing a game? There is a block of nonsense text.]
  1. In my head, the explanation is read in the voice of Phil Hartman as a riff on Keyrock the Unfrozen Caveman Lawyer explaining that he’s just a simple caveman who doesn’t understand my complex questions and modern technology.
  2. Initially in this paper, but then almost every time I’ve written about inductive risk.
