This robot has read my work

Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it simply refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.1

It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.

I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” and “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”

I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me.2 Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.


Robot overlords win blue ribbon (not really)

I’m teaching Philosophy of Art this semester, and a student pointed me to an Ars Technica story with the headline AI wins state fair art contest, annoys humans. Jason Allen used Midjourney (the same AI that I was playing with recently) to make some images and enter them in the Colorado State Fair art contest. One of those images won first place in the Digital Arts/Digitally Manipulated Photography category.

There’s lots of discussion about whether this is the end for human artists (it’s not), whether this shows that AIs are now making real art (no), and whether the submission of AI-generated images to the State Fair was dishonest (maybe).


I for one welcome our new robot watercolorists

Today I discovered Midjourney, an AI image generator that’s rather more high-powered than other free ones which I’ve tried. Riffing on prompts by other users, I had it generate a Frank Frazetta painting of a wombat newscaster. The result was pleasant enough that I’ve adopted it, for now, as the blog header.

AI image of a wombat newscaster

Two links about AI

There are some articles that I read and think I ought to blog about. Then I realize that I basically already have. So this is a link dump kind of post.

Link #1: Geoffrey Hinton cautions that deep learning is not especially deep

I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.

Link #2: Efforts to understand the bias inherent in algorithms

Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.

AI ai ai

I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who defined the inputs and outputs.

Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.

Brave new age of robot overlords this ain’t.
