As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.
Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style, prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.
When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”1
I’ve found that evil usually triumphs unless good is very, very careful.
I posted yesterday about what I called the Positive Buzz fallacy:
Activity z is the best way to accomplish goal y.
Therefore, activity z is the best way to accomplish goals.
I realized today that it is closely related to a fallacy that people often commit in misunderstanding natural selection: An organism is fittest in a given environment, and the fallacifier infers that it’s simply best.
Over at the Blog of the APA, Mark Satta coins a new fallacy.1 He calls it the Buzz Aldrin fallacy, riffing on a quotation he attributes to the astronaut:
When President Kennedy wanted to get to the moon, he didn’t invite poets and philosophers to the White House, he called upon scientists and engineers. That’s how you get stuff done.
This shifts from the obvious (that philosophers and poets don’t do the detail work of building rockets) to a sweeping claim (that calling scientists and engineers is what you do when you want to get stuff done). Satta supplies an implicature, reads it as an inference, and extrapolates a general pattern:
Activity x does not contribute to goal y.
Therefore, activity x is not valuable.
This is a fallacy, which he names and discusses. But here’s another fallacious pattern closer to the surface of the quotation:
Activity z is the best way to accomplish goal y.
Therefore, activity z is the best way to accomplish goals.
Via Daily Nous, I encountered two new informal fallacies. Keith Payne, Laura Niemi, and John Doris coin them in writing about implicit bias at Scientific American.
the divining rod fallacy: On the basis of an instrument or scale for measurement being problematic, inferring that the property which it measures is not real. “[J]ust because a rod doesn’t find water doesn’t mean there’s no such thing as water.”
the palm reading fallacy: Expecting psychological or sociological phenomena which occur at the group level to yield predictions about particular group members. “[U]nlike palm readers, research psychologists aren’t usually in the business of telling you, as an individual, what your life holds in store.”
Since February 1999, I’ve had a web page about fallacies. Rather than regurgitating all of the usual ones that one can find elaborated in critical thinking textbooks, I collect fallacies which an author names for just one occasion. These one-offs don’t appear on the usual lists. Authors usually do this to condemn some specific target, one who has committed not some generic error in reasoning but the specific, if newly named, fallacy of such-and-so.