Like much of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Much of what it produces is pretty mundane, because it avoids going too far off the rails. Often it simply refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.1
It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.
I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” so “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”
I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me.2 Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.
My paper, The scope of inductive risk, has been accepted at the journal Metaphilosophy. I’m told it will appear in the January 2022 issue.
Abstract: The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, the AIR need not be understood in those terms. The values which load a theory choice may be those of institutions or past actors. This means that the challenge of responsibly handling inductive risk is not merely an ethical issue, but is also social, political, and historical.
I just read a nice paper by Gabriele Contessa on the mitigation of inductive risk, cleverly titled On the Mitigation of Inductive Risk.1 His primary question is whether responsibly applying values in science should be left to individuals or whether there ought to be community-level processes. He offers a number of arguments against the individualistic approach and briefly sketches what a socialized approach might look like.
Last year I attended the annual Values in Medicine, Science, and Technology Conference hosted in Dallas and organized by Matt Brown.1
I got great feedback on my presentation, which ultimately grew into a paper. I hung out with old friends and made new ones.
So I submitted an abstract again this year. Today, I received an e-mail indicating that my paper was accepted along with an e-mail saying that the conference was canceled. The cancelation was inevitable, of course, but Matt had delayed officially canceling the conference until verdicts had been reached. This way, would-be presenters can list the acceptance on their CVs. It’s a classy move— I don’t need the line on my CV, but students and junior scholars might.2
My missing the conference this year is not a terrible imposition, really, since I missed it for eight years before attending at all. It is a small sacrifice, in the grand scheme of things— but these accumulate like raindrops on the tin roof that is my inability to land a metaphor.
In several papers, I’ve made use of what I call the James-Rudner-Douglas (JRD) thesis: “Anytime a scientist announces a judgement of fact, they are making a tradeoff between the risk of different kinds of error. This balancing act depends on the costs of each kind of error, so scientific judgement involves assessments of the value of different outcomes.”
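The balancing act the JRD thesis describes can be sketched as a toy expected-cost calculation. This is my own illustration, not drawn from any of the papers mentioned here; the function name and the particular numbers are made up for the example:

```python
# Toy illustration of the JRD thesis: announcing a judgement of fact
# trades off the risk of a false positive against a false negative,
# and the costs assigned to each error are value judgements.

def expected_cost(p_true, cost_false_positive, cost_false_negative):
    """Expected cost of announcing vs. withholding, given credence p_true."""
    announce = (1 - p_true) * cost_false_positive  # error if the hypothesis is false
    withhold = p_true * cost_false_negative        # error if the hypothesis is true
    return announce, withhold

# Same evidence (credence 0.75), but suppose a false positive is ten
# times worse than a false negative (e.g. approving an unsafe drug):
announce, withhold = expected_cost(0.75, cost_false_positive=10, cost_false_negative=1)
print(announce, withhold)  # 2.5 vs 0.75: withholding has the lower expected cost
```

With the costs reversed, the same credence of 0.75 would instead favor announcing — which is the thesis’s point that the judgement depends on value-laden cost assessments, not on the evidence alone.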
I have a paper forthcoming in Episteme which explores this theme as well as other issues in William James’ “The Will to Believe.” I just sent off my final draft and brought the version on my website up to date.
I’ve given a couple of talks in which I mull over possible counterexamples to the thesis. I recently wrote that up as a paper, and I’ve now posted a draft. Comments are welcome.
Back in July, Dan wrote a tweet that concluded “Anyone want to write a little response with me?” Jessey and I replied that we’d be game for it. E-mails followed. We each wrote a snippet of prose. The snippets got worked together into one document, and that document went through a bunch of revisions. We used a Google Doc, which highlighted changes and allowed us to make comments back and forth in the document itself. Other than a few e-mails, that’s how we interacted. No real-time conversations, even via Skype.
I still use LaTeX for my own writing, but the collaborative workflow of the Google Doc worked really well for this project.
I can’t tell if Suki Finn’s Beyond Reason: The Mathematical Equation for Unconditional Love is meant to be taken seriously or not. Irony on the internet is usually indistinguishable from earnestness. The fact that there is an addendum with a mathematical proof may indicate that it’s serious, but maybe it’s a droll bit of farce?1
I read it with interest, in any case. Finn offers an analysis of conditional and unconditional love that is modeled on conditional and unconditional credence. As I’ve discussed in some recent posts, I think that recognizing the difference between conditional and unconditional value is crucial for understanding the relation between values and belief.2
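For readers unfamiliar with the credal side of the analogy: an unconditional credence P(A) and a conditional credence P(A|B) = P(A and B)/P(B) can come apart. A minimal sketch, with toy numbers of my own choosing:

```python
# Unconditional vs. conditional credence: the distinction the analysis
# of love is modeled on. Toy numbers, purely for illustration.

p_b = 0.5        # credence that B obtains
p_a_and_b = 0.4  # credence that A and B both obtain
p_a = 0.45       # unconditional credence that A obtains

# Conditional credence by the ratio formula:
p_a_given_b = p_a_and_b / p_b
print(p_a, p_a_given_b)  # 0.45 vs 0.8: conditioning on B raises credence in A
```

The analogy, as I read Finn, is that conditional love stands to unconditional love roughly as P(A|B) stands to P(A): the former is hostage to how things stand with B, the latter is not.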
My paper Science, Values, and the Priority of Evidence has been accepted at Logos & Episteme. I worked over the manuscript to meet their style guidelines, sent it off, and put the last draft on my website. Since it’s an OA journal, in the gratis and author-doesn’t-pay sense, I will swap in the published version when it appears.