The moving targets

Behind-the-scenes updates and new competitors quickly make my earlier conversations with LLMs yesterday's news. Today's topic is what today's chatbots can say about books that I've written.

Asked to write a summary of A Philosophy of Cover Songs, ChatGPT replies that it can’t because “it is a fictitious book and does not exist in reality.”

Google’s Bard was clearly trained on a more recent snapshot of the internet, so my book is in its training set. Given several iterations of the same prompt, Bard consistently explains that the book is written in three parts. It gets the general theme of each section right but runs amok on the details: that the book distinguishes between interpretations and appropriations, between covers as derivative works and covers as creative works, between a “traditional” approach to covers and a “revisionist” approach, between covers as homage and covers as a form of competition. All of this reads a bit as if a pretentious drunk guy had browsed through my book and were explaining it to you in a loud bar.

Regarding Scientific Enquiry and Natural Kinds, ChatGPT reports that it is by Peter Godfrey-Smith. Its summary is otherwise decent. Bard correctly reports me as the author but misdescribes the structure of the book. The difference may be due to the fact that Cover Songs is open access, so the whole book was probably part of Bard’s training set. For Natural Kinds, it is probably just extrapolating from available excerpts and summaries.

Then, forall x. ChatGPT lists the authors from the Calgary version. Bard just lists me as the author. Both give an imprecise but roughly accurate description.

Bard ends all of its summaries with anodyne praise. All the books are thought-provoking, well-written, engaging, insightful, clear, and a valuable contribution. But I get a similar list of adjectives when I ask for a brief summary of a book that I just made up, so I won’t let it go to my head.
