Higher-order bullshit

Three snapshot applications of AI:

During the early phases of the war in Gaza, the Israeli military used software to select bombing targets on a scale that would not have been possible for human analysts.1

During the Trump administration’s initial attack on the federal government, there was lots of nonsense about how Elon Musk and DOGE were using software to identify waste. House Speaker Mike Johnson commented that Musk has “created these algorithms that are constantly crawling through the data, and… the data doesn’t lie.”2

Health Secretary Robert F. Kennedy Jr. commissioned a report with the ridiculous title “Make America Healthy Again.”3 It turns out that many of the citations in the study are erroneous, including references to articles which simply do not exist. Incorrectly citing things and misrepresenting results is plausibly human malfeasance or incompetence, but totally inventing sources suggests chatbot hallucination.

The wrongs committed here are morally different, and I don’t want to suggest a false equivalence. But each of these cases provides a specimen of how reliance on AI has been used to further dangerous agendas. Yet the reliance on AI is really just a sideshow.

Continue reading “Higher-order bullshit”

Doctor gpt

At Daily Nous, there’s discussion of Rebecca Lowe’s post about how great it is to talk philosophy with the latest version of ChatGPT.

There’s pushback in the comments. Others reply that the critics haven’t used the latest version (which is only available behind a paywall). Discussion of LLMs will always allow this: Complaints about their shortcomings are answered by pointing to the next version that’s supposed to resolve all the issues.

Lowe and other commenters reveal that lots of philosophers are using LLMs in the regular day-to-day of their research. I’m still trying to figure out what I think about that. For now, let’s deflect Lowe’s ridiculous offhand claim that “Gpt could easily get a PhD on any philosophical topic.” I say ridiculous for a few reasons—

Continue reading “Doctor gpt”

It’s still rapacious capitalism

From Cory Doctorow:

The fact that AI can’t do your job, but that your boss can be convinced to fire you and replace you with the AI that can’t do your job, is the central fact of the 21st century labor market.

I’m not sure that it’s the central fact of contemporary labor, what with the resurgence of fascism, the retheming of jobs as gigs, and the casual evasion of hard-fought safeguards. But, as I’ve noted before, it is a thing.

Labour-squandering technology

Australian regulators sponsored a test using generative AI to summarize documents. The soft-spoken conclusion was that “AI outputs could potentially create more work… due to the need to fact check outputs, or because the original source material actually presented information better.”

Coverage of the study leads with the headline: “AI worse than humans in every way at summarising information.”

The tangled web

In which I find myself unironically missing old, hard-copy Yellow Pages.

I came into possession of a vintage sport coat which was in excellent condition except for several strata of dust on the shoulders, from hanging unused but uncovered for decades. The care instructions say dry clean only, so I went looking for a dry cleaner. The internet suggested there were several near me. On further examination, however, one was shuttered. Another had remodeled and become just a regular laundromat.

Continue reading “The tangled web”

Engines of enshittification

Via Ars Technica, I’ve learned that shady Amazon sellers have been using chatbots to automatically write item descriptions. The result is hot offers on items like “I cannot fulfill that request” and “I apologize but I cannot complete this task.” This is a natural progression from Amazon product listings which were simply misdescribed by humans.

Continue reading “Engines of enshittification”

It took me years to write it

Fifteen years ago, I conducted a small study testing the error-correction tendency of Wikipedia. Not only is Wikipedia different now than it was then, the community that maintains it is different. Despite the crudity of that study’s methods, it is natural to wonder what the result would be now. So I repeated the earlier study and found surprisingly similar results.

That’s the abstract for a short paper of mine that was published today at First Monday. It is a follow-up to my earlier work on the epistemology of Wikipedia.

Continue reading “It took me years to write it”

Exchanging Marx for Lincoln

I just posted a draft of Generative AI and Photographic Transparency, a short paper that is about those things. It builds on two blog posts that I wrote a while ago, but fleshes out the discussion in several respects. Whereas the blog posts used pictures of Karl Marx as their specimen example, the paper instead considers pictures of Abraham Lincoln. The change lets me work in some quotes from William James and Oliver Wendell Holmes.

It is still a draft, so comments are welcome.

Generative AI and homogenization

Among the legitimate worries about Large Language Models is that they will homogenize diverse voices.4 As more content is generated by LLMs, the generic style of LLM output will provide exemplars to people finding their own voices. So even people who write for themselves will learn to write like machines.

Continue reading “Generative AI and homogenization”

Generative AI and rapacious capitalism

Some people have claimed that Large Language Models like ChatGPT will do for wordsmiths like me what automation has been doing to tradesfolk for centuries. They’re wrong. Nevertheless, there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.

Continue reading “Generative AI and rapacious capitalism”