AI can’t let me see Karl Marx

Back in September, I wrote a post about generative AI and photographic transparency. The gist of it was this: Kendall Walton famously argued that I actually see Karl Marx when I look at a photograph of him, in a way I don’t when I look at a painting. The painting is mediated by the beliefs of the painter in a way that the photograph is not mediated by the photographer’s beliefs. So, I asked, what about an AI-generated image of Marx?

As I said in a footnote to that post, I wasn’t very happy with my answer to the question. As it happens, my Philosophy of Art class got interested in photographic transparency all on their own. So I made a mid-semester adjustment, added it to the syllabus, reread the Walton essay, and taught it to students in October. It turns out there was a part of the essay that I had forgotten when I wrote my post in September, and Walton gives us the resources for a better answer to the puzzle of AI-generated images.

Continue reading “AI can’t let me see Karl Marx”

My last post playing with ChatGPT

I was a bit chuffed that ChatGPT knows about the JRD thesis, and then I whinged about the fact that it confabulates like mad. Turns out the former is just an instance of the latter.

When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: Ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis.
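For anyone who wants to poke at this themselves, here is a minimal sketch of the same ordering experiment run through the OpenAI Python API rather than the ChatGPT web interface (which is what I was using, so this is only an approximation). The model name is a stand-in, and the API may well answer differently than the web app did:

```python
# A sketch of the prompt-ordering experiment, not a record of the original
# chats. Assumes the openai package (v1.x) and an OPENAI_API_KEY in the
# environment; the model name below is a hypothetical stand-in.
from openai import OpenAI

client = OpenAI()

def ask(messages, question):
    """Append a question to the running conversation and return the reply."""
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in, not necessarily the model I used
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Asked cold, in a fresh conversation, it may deny knowing the thesis.
cold = ask([], "What is the James-Rudner-Douglas thesis?")

# Primed with inductive risk first, the same question in the same
# conversation can draw a confident (confabulated) answer.
primed = []
ask(primed, "What is the Argument from Inductive Risk?")
warm = ask(primed, "What is the James-Rudner-Douglas thesis?")

print("COLD:", cold)
print("WARM:", warm)
```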

Continue reading “My last post playing with ChatGPT”

This robot confabulates like a human

As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.

Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style— prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.

When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”

Continue reading “This robot confabulates like a human”

This robot has read my work

Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.

It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.

I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” and “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”

I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me. Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.

Continue reading “This robot has read my work”

Robot overlords win blue ribbon (not really)

I’m teaching Philosophy of Art this semester, and a student pointed me to an Ars Technica story with the headline “AI wins state fair art contest, annoys humans”. Jason Allen used Midjourney (the same AI that I was playing with recently) to make some images and enter them in the Colorado State Fair art contest. One of those images won first place in the Digital Arts/Digitally Manipulated Photography category.

There’s lots of discussion about whether this is the end for human artists (it’s not), whether this shows that AIs are now making real art (no), and whether the submission of AI-generated images to the State Fair was dishonest (maybe).

Continue reading “Robot overlords win blue ribbon (not really)”

I for one welcome our new robot watercolorists

Today I discovered Midjourney, an AI image generator that’s rather more high-powered than other free ones which I’ve tried. Riffing on prompts by other users, I had it generate a Frank Frazetta painting of a wombat newscaster. The result was pleasant enough that I’ve adopted it— for now— as the blog header.

[Image: AI-generated wombat newscaster]

Two links about AI

There are some articles that I read and think I ought to blog about. Then I realize that I basically have. So this is a link-dump kind of post.

Link #1: Geoffrey Hinton cautions that deep learning is not especially deep

I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.

Link #2: Efforts to understand the bias inherent in algorithms

Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.

AI ai ai

I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who define the inputs and outputs.

Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.

Brave new age of robot overlords this ain’t.
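To make the point concrete, here is a toy sketch of what “AIs designing AIs” amounts to. Everything in it is an illustrative assumption rather than any real system’s method, but the shape is the same: an engineer fixes the search space and the success condition, and the “designer” just searches within them.

```python
# Toy sketch: architecture search as plain optimization in an
# engineer-defined problem space. All names and numbers are illustrative.
import random

# The engineer fixes the search space: allowed depths and layer widths.
SEARCH_SPACE = {"depth": [1, 2, 3], "width": [8, 16, 32, 64]}

def sample_architecture():
    """Draw a candidate network shape (a list of layer widths) at random."""
    depth = random.choice(SEARCH_SPACE["depth"])
    return [random.choice(SEARCH_SPACE["width"]) for _ in range(depth)]

def score(architecture):
    # The success condition is also fixed by the engineer. This toy
    # objective stands in for something like validation accuracy minus a
    # penalty for model size.
    return sum(architecture) - 2 * len(architecture) ** 2

# The "AI creating an AI": search the specified space for a high scorer.
best = max((sample_architecture() for _ in range(100)), key=score)
print("best architecture found:", best)
```

The searcher never steps outside the space or the objective it was handed, which is why I’d still call this machine learning in a constrained and specified problem space.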

Continue reading “AI ai ai”