Exchanging Marx for Lincoln

I just posted a draft of Generative AI and Photographic Transparency, a short paper about just those things. It builds on two blog posts that I wrote a while ago, but fleshes out the discussion in several respects. Whereas the blog posts used pictures of Karl Marx as their specimen example, the paper instead considers pictures of Abraham Lincoln. The change lets me work in some quotes from William James and Oliver Wendell Holmes.

It is still a draft, so comments are welcome.

Generative AI and homogenization

Among the legitimate worries about Large Language Models is that they will homogenize diverse voices. As more content is generated by LLMs, the generic style of LLM output will provide exemplars to people finding their own voices. So even people who write for themselves will learn to write like machines.

Continue reading “Generative AI and homogenization”

Generative AI and rapacious capitalism

Some people have claimed that Large Language Models like ChatGPT will do for wordsmiths like me what automation has been doing to tradesfolk for centuries. They’re wrong. Nevertheless, there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.

Continue reading “Generative AI and rapacious capitalism”

AI is a joke

The genre of post that echoes an interaction with ChatGPT is stale and tedious. As Tom Scott comments, “Telling someone about your fascinating AI conversation is like telling someone about your dreams. They don’t care, it just sounds like you’re hallucinating nonsense.” I swore back in December that I wouldn’t make another post like that, but this one has jokes.

Continue reading “AI is a joke”

Network theology

There’s a rush now at Google and Microsoft to incorporate chatbots into their search engines, which seems apt to undercut their usefulness.

Suppose you want a list of Ruben Bolling’s God-Man comics. The words that a large language model will string together about God-Man, based on having taken the whole internet as its corpus, will be nonspecific, incomplete, and probably error-ridden. The best source on this topic would be an official webpage or a fan page. There is no official God-Man episode guide, but I happen to maintain what is probably the only God-Man fan page. The best answer to the query is not to digest my page along with ten thousand others, but just to link to my page.

For lots of perfectly ordinary queries, one wants a dedicated webpage or site rather than the word-salad gestalt of a general purpose tool. So a traditional search engine can provide answers that a chatbot will only obscure.

This post was prompted by Omnipotent Friends, the latest adventure of God-Man and the cause of an update to the fan page.

AI can’t let me see Karl Marx

Back in September, I wrote a post about generative AI and photographic transparency. The gist of it was this: Kendall Walton famously argued that I actually see Karl Marx when I look at a photograph of him, in a way I don’t when I look at a painting. The painting is mediated by the beliefs of the painter in a way that the photograph is not mediated by the photographer’s beliefs. So, I asked, what about an AI-generated image of Marx?

As I said in a footnote to that post, I wasn’t very happy with my answer to the question. As it happens, my Philosophy of Art class got interested in photographic transparency all on their own. So I made a mid-semester adjustment, added it to the syllabus, reread the Walton essay, and taught it to students in October. It turns out there was a part of the essay that I had forgotten when I wrote my post in September, and Walton gives us the resources for a better answer to the puzzle of AI-generated images.

Continue reading “AI can’t let me see Karl Marx”

My last post playing with ChatGPT

I was a bit chuffed that ChatGPT knows about the JRD thesis, and then I whinged about the fact that it confabulates like mad. Turns out the former is just an instance of the latter.

When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: Ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis.

Continue reading “My last post playing with ChatGPT”

This robot confabulates like a human

As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.

Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style, prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.

When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”

Continue reading “This robot confabulates like a human”