This robot confabulates like a human

As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.

Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style, prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.

When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”[1]

Asked who Eric Schwitzgebel is, ChatGPT provides a few paragraphs of information. Some of it is true, although it says that he went to grad school at UC San Diego. (He did not.)

This highlights how the large language model casually stitches together facts and false associations more or less indifferently, all posed with the same plausible delivery. It seems especially bad on topics that are discussed a bit (enough to cross the model’s threshold of confidence) but that are not much-discussed (so its associations are apt to be wrong).[2]

One way of highlighting this is to ask it about the fallacies on my list of bespoke informal fallacies. These are all taken from sources that used them as one-off condemnations of someone’s reasoning. So they probably appear in the model’s training set, but they aren’t common enough that the model is likely to have learned their actual meanings.

It scores some successes: The Jedi Master fallacy, the argumentum ad Gaulum, and the Buzz Aldrin fallacy are sufficiently obscure that ChatGPT responds that it doesn’t know what they are. It gets the reductio ad Hitlerum right and even explains the Latin etymology.

It gets many of the rest wrong, often providing the opposite of the intended meaning. For example, it gives wrong answers for the boy’s club fallacy, the pleonastic fallacy, the Narcissian fallacy, the sentimentalist fallacy, and the Wittgenstein fallacy.[3]

The boy’s club fallacy was coined to describe the reasoning that a group isn’t sexist because it invites new members based on social connections rather than explicitly on gender, even though those social connections are not equitably distributed. ChatGPT provides a three-paragraph answer that leads with the claim that the fallacy is “the mistaken belief that people are more competent or qualified based on their gender or membership in a particular group.” In the third paragraph, it offers a cautionary note while repeating its mistaken definition:

It is important to note that the term “boy’s club fallacy” is not a well-defined or widely-recognized concept in philosophy or psychology. It is often used as a descriptive term to refer to the mistake of assuming that people are more competent or qualified based on their gender or membership in a particular group, but it is not a term that is commonly used by scholars or researchers.

It echoes this construction in its answers about several other fallacies, saying at once that the phrase is “often used” in such-and-so way but “not…commonly used by scholars.” Often used among whom, then?

When I ask again a bit later, its answers have a different form. It leads with the claim that it is “not aware of any well-known or widely-recognized fallacy that is called the ‘boy’s club fallacy’” and that it does “not have any information about what this term might refer to or what it might be used for.” It goes on to give the same answer as before, however, describing it as “purely speculative.”

  1. Woah.
  2. The title of this post is ChatGPT’s suggestion for how to finish the sentence “This robot confabulates like a…”
  3. My blog gets a small but consistent amount of traffic visiting posts where I’ve discussed additions to the list. This blog is a go-to site for information about the pleonastic fallacy.
