Everyone is risk-averse; and not

Over at Daily Nous, there’s discussion of an argument by John Symons that academic philosophy has become more risk-averse. Symons writes that

…the current incentive structure of academic philosophy in the United States favors cautious and modest research agendas for early career philosophers. Philosophical inquiry thrives when it is conducted in a spirit that risks overreaching a bit and welcomes criticism. Philosophy thrives when its creative, skeptical, and self-critical core is not subordinated to excessively cautious American-style professionalism…

The trick, though, is that any system where there are more researchers than jobs must hire some of them but not others. This creates an incentive structure for scholars to do work that tends to get people hired.

In a system which rewards provocative overreaching, a risk-averse young scholar who wanted to maximize their chance of a job would portray their work as provocative overreach. This would probably mean not citing too much other work (so as not to make one’s project look too in-the-box) and, when citing things, doing so only with disapproval (so as to suggest that one is against the system). Someone who is genuinely interested in formalizing the context sensitivity of epistemic modals would have to include a fiery introduction that explains why their work sticks it to the man.

Ultimately, risk-aversion can arise in any incentive structure.

Moreover, no young philosophers are truly risk-averse. They have spent years of their lives pursuing a PhD in Philosophy, which is a risky proposition regardless of the research topic. The genuinely risk-averse young adults didn’t become philosophers.

“Transcendentalism” and other free stuff

In my Pragmatism course, I spend a week on the transcendentalists before getting to pragmatism as such. Setting things up this time, I realized that I’m still using the PDF of Theodore Parker’s “Transcendentalism” lecture which I scanned back in 2003. It’s not pretty. Since the essay is in the public domain, there should be something better. So I started from the OCR text, cleaned it up, set it up in LaTeX, and generated a nice PDF.
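For anyone wanting to do the same with another public-domain text, the setup needn’t be elaborate. The sketch below is a minimal illustration of the kind of file involved; the class options, packages, and margins are my illustrative assumptions, not the actual settings used for the Parker PDF.

```latex
% Minimal skeleton for typesetting a cleaned-up public-domain lecture.
% Class, font, and margin choices here are illustrative guesses.
\documentclass[11pt]{article}
\usepackage[margin=1.25in]{geometry}
\usepackage{mathpazo} % a readable serif face

\title{Transcendentalism}
\author{Theodore Parker}
\date{} % suppress the date line

\begin{document}
\maketitle
% Paste the corrected OCR text here, one paragraph per blank-line break.
\end{document}
```

Running `pdflatex` on a file like this yields a clean, searchable PDF, which is already a big improvement over a 2003-era scan.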

I have posted it as part of my repository of free texts in pragmatism and American philosophy.


AI can’t let me see Karl Marx

Back in September, I wrote a post about generative AI and photographic transparency. The gist of it was this: Kendall Walton famously argued that I actually see Karl Marx when I look at a photograph of him, in a way I don’t when I look at a painting. The painting is mediated by the beliefs of the painter in a way that the photograph is not mediated by the photographer’s beliefs. So, I asked, what about an AI-generated image of Marx?

As I said in a footnote to that post, I wasn’t very happy with my answer to the question. As it happens, my Philosophy of Art class got interested in photographic transparency all on their own. So I made a mid-semester adjustment, added it to the syllabus, reread the Walton essay, and taught it to students in October. It turns out there was a part of the essay that I had forgotten when I wrote my post in September, and Walton gives us the resources for a better answer to the puzzle of AI-generated images.


UAlbany hiring in Philosophy of AI

The University at Albany is hiring dozens of faculty across the disciplines to launch the UAlbany AI Institute. One of those lines is a tenure-track position in Philosophy.

As part of this enormous cluster hire, the schedule is not the usual thing. The JFP ad just went live today, and we’ll begin review of applications January 12— so the search is open for just over a month. And the job starts Fall 2023.

Note that, although the new person will need to teach AI Ethics, this is not specifically an Ethics search. It is in Philosophy of AI broadly construed, which includes relevant value theory but also philosophy of mind and philosophy of science.

If that’s you, please apply.1 If you know someone who would be a good candidate, please encourage them to apply.

My last post playing with ChatGPT

I was a bit chuffed that ChatGPT knows about the JRD thesis, and then I whinged about the fact that it confabulates like mad. Turns out the former is just an instance of the latter.1

When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: Ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis.


😊

Regarding A Philosophy of Cover Songs:

The book is philosophically rich, engaging, and loaded with illuminating examples. It is worthy of sustained scholarly attention, but also accessible enough for a general audience. It would be an excellent book to adopt in any undergraduate course (at any level) on aesthetics and the philosophy of art, or in any introductory philosophy course with units on those topics— and not only because students can read it for free.

Brandon Polite

This is from Brandon’s review at the Journal of Aesthetics and Art Criticism. As far as I know, this is the first published review of the book.

This robot confabulates like a human

As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.

Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style— prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.

When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”1


This robot has read my work

Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.1

It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.

I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” and “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”

I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me.2 Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.


What to call the fact that science traffics in assumptions?

I regularly teach a course called Understanding Science, an introduction to some issues in philosophy of science and science studies. One topic is the nature of inference: deduction, the fact that scientific inference is (largely) non-deductive, and the problem of induction.1
