When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis. Continue reading “My last post playing with ChatGPT”
As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.
Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style, prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.
When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”1 Continue reading “This robot confabulates like a human”
Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it simply refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.1
It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.
I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” so that “it is safer to rely on other forms of reasoning, such as deductive reasoning.”
I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me.2 Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis. Continue reading “This robot has read my work”
After recent posts about AI image generation, I’ve been mulling over whether there are any interesting philosophical lessons. Despite the title, this is not a post about AI and the twilight of capitalism. Instead, it’s about the would-be transparency of images.1 Continue reading “Can AI let me see Karl Marx?”
I’m teaching Philosophy of Art this semester, and a student pointed me to an Ars Technica story with the headline “AI wins state fair art contest, annoys humans.” Jason Allen used Midjourney (the same AI that I was playing with recently) to make some images and enter them in the Colorado State Fair art contest. One of those images won first place in the Digital Arts/Digitally Manipulated Photography category.
There’s lots of discussion about whether this is the end for human artists (it’s not), whether this shows that AI are now making real art (no), and whether the submission of AI-generated images to the State Fair was dishonest (maybe). Continue reading “Robot overlords win blue ribbon (not really)”
Several years ago, my colleague Jason D’Cruz and I settled on the idea of writing something about Goodman’s autographic/allographic distinction. In the course of our discussions, he introduced me to Sol LeWitt’s wall drawings. I went down a rabbit hole of reading about them. I saw the exhibition at MassMOCA. I devised a wall drawing of my own.
But our work went in other directions, and we didn’t publish anything about LeWitt or about wall drawings. After a reading group this summer, he commented that this was a shame. So I sent off a short item which has now appeared in Contemporary Aesthetics: “That Some of Sol LeWitt’s Later Wall Drawings Aren’t Wall Drawings.”
The referee commented that this note could have appeared in a longer paper about conceptualism and the nature of art. It could have, perhaps, except that waiting on that longer paper to write itself would probably mean never publishing this bit.
There are some articles that I read and think I ought to blog about that. Then I realize that I basically have. So this is basically a link dump kind of post.
I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.
Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.
I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who defined the inputs and outputs.
Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.
Brave new age of robot overlords this ain’t.
There is a lot of buzz about AI and the prospect that computers will soon be doing something hugely different than what they’re doing now. It’s apprehension of what Ray Kurzweil calls the singularity, except that people don’t call it that much anymore.
Under the headline “An AI invented a bunch of new paint colors that are hilariously wrong,” Annalee Newitz discusses the result of training neural networks on Sherwin-Williams decorator colour names. The original work was done by Janelle Shane, who recounted it in a Tumblr post.
Some whinging below the fold.
[N]early all of the communication calls of the Egyptian fruit bat in the roost are emitted during aggressive pairwise interactions, involving squabbling over food or perching locations and protesting against mating attempts.
Using algorithms, researchers were able to discern differences in bat griping depending both on who the target bat was (who was being griped at) and the context (what the griping was about).
I have argued that, for a domain of enquiry that includes meerkats in their natural environment, different meerkat alarm calls and the classes of threats which elicit them comprise natural kinds (see ch. 4). That admits six kinds, because there are three different calls and three corresponding classes of threats.
There’s no reason why the argument doesn’t generalize. For fruit bat groups in their environment, there may well be natural kinds corresponding to distinct classes of vocalizations and to the classes of objects picked out by those vocalizations. But what if it turns out that bat reference to individual other bats uses sounds functioning in the fashion of proper names? Suppose there’s an individual bat that the other bats pick out with a specific squeaky sound, something like “leeko leeko leeko”. Does that individual bat count as a natural kind?
One might think the answer has to be no, because kinds and individuals are different ontological categories. I’m not tempted by that, however. As I argue, species might turn out to be continuous individuals (in their fundamental ontology) but still count as natural kinds (see ch. 6).
Nevertheless, the category for the specific bat leeko could only be a natural kind for the domain including that specific bat population. And it might lack enough general importance to be a natural kind for a domain that includes all the Egyptian fruit bat populations across both space and time. So my account doesn’t require the answer to be yes.
Moreover, it’s not clear to me from the recent report whether the distinctions between bat vocalizations are clear and sharp enough to count as natural kinds. As always, the answer will depend on the details.