The University at Albany is hiring dozens of faculty across the disciplines to launch the UAlbany AI Institute. One of those lines is a tenure-track position in Philosophy.
Because it is part of this enormous cluster hire, the schedule is not the usual thing. The JFP ad just went live today, and we’ll begin review of applications January 12— so the search is open for just over a month. And the job starts Fall 2023.
Note that, although the new person will need to teach AI Ethics, this is not specifically an Ethics search. It is in Philosophy of AI broadly construed, which includes relevant value theory but also philosophy of mind and philosophy of science.
If that’s you, please apply. If you know someone who would be a good candidate, please encourage them to apply.
I was a bit chuffed that ChatGPT knows about the JRD thesis, and then I whinged about the fact that it confabulates like mad. Turns out the former is just an instance of the latter.
When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: Ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis.
Continue reading “My last post playing with ChatGPT”
Regarding A Philosophy of Cover Songs:
The book is philosophically rich, engaging, and loaded with illuminating examples. It is worthy of sustained scholarly attention, but also accessible enough for a general audience. It would be an excellent book to adopt in any undergraduate course (at any level) on aesthetics and the philosophy of art, or in any introductory philosophy course with units on those topics— and not only because students can read it for free.
This is from Brandon’s review at the Journal of Aesthetics and Art Criticism. As far as I know, this is the first published review of the book.
As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.
Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style— prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.
When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”
Continue reading “This robot confabulates like a human”
Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.
It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.
I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” so “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”
I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me. Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.
Continue reading “This robot has read my work”
We can distinguish three different approaches in philosophy of art. They have consequences for art ontology, appreciation, and the nature of genre.
TL;DR: Some stuff about norms. The third approach, social practicism, needs a better name.
Continue reading “Social practicism about art”
Satan said unto the Lord, “Hey, God. I was wondering…”
God’s attention turned, and Satan continued: “I know that you’re omnipotent, but I was wondering if you could create a world that had some good things in it, but also an overwhelming amount of toil, suffering, and evil.”
The Lord replied, “Yes, I could do that.”
“But could you, really?” asked Satan, stretching out the final word.
“Look, Satan,” said God, “you’ve agreed that I am omnipotent. That word just means all powerful. An omnipotent god can do anything.”
The Lord added, with the clarity and force of proof, “You’ve mentioned a thing to do. I am omnipotent. So I could do it. QED.”
Continue reading “Prequel to Genesis”
I regularly teach a course called Understanding Science, an introduction to some issues in philosophy of science and science studies. One topic is the nature of inference: deduction, the fact that scientific inference is (largely) non-deductive, and the problem of induction.
Continue reading “What to call the fact that science traffics in assumptions?”