After recent posts about AI image generation, I’ve been mulling over whether there are any interesting philosophical lessons. Despite the title, this is not a post about AI and the twilight of capitalism. Instead, it’s about the would-be transparency of images.

Continue reading “Can AI let me see Karl Marx?”
I’m teaching Philosophy of Art this semester, and a student pointed me to an Ars Technica story with the headline “AI wins state fair art contest, annoys humans”. Jason Allen used Midjourney (the same AI that I was playing with recently) to make some images and enter them in the Colorado State Fair art contest. One of those images won first place in the Digital Arts/Digitally Manipulated Photography category.
There’s lots of discussion about whether this is the end for human artists (it’s not), whether this shows that AI are now making real art (no), and whether the submission of AI-generated images to the State Fair was dishonest (maybe).

Continue reading “Robot overlords win blue ribbon (not really)”
Several years ago, my colleague Jason D’Cruz and I settled on the idea of writing something about Goodman’s autographic/allographic distinction. In the course of our discussions, he introduced me to Sol LeWitt’s wall drawings. I went down a rabbit hole of reading about them. I saw the exhibition at MassMOCA. I devised a wall drawing of my own.
But our work went in other directions, and we didn’t publish anything about LeWitt or about wall drawings. After a reading group this summer, he commented that this was a shame. So I sent off a short item which has now appeared in Contemporary Aesthetics: “That Some of Sol LeWitt’s Later Wall Drawings Aren’t Wall Drawings”.
The referee commented that this note could have appeared in a longer paper about conceptualism and the nature of art. It could have, perhaps, except that waiting on that longer paper to write itself would probably mean never publishing this bit.
There are some articles that I read and think, “I ought to blog about that.” Then I realize that I basically already have. So this is basically a link-dump kind of post.
I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.
Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.
I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who defined the inputs and outputs.
Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.
Brave new age of robot overlords this ain’t.
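To make the point concrete, here’s a minimal sketch of what “an AI creating AIs” amounts to. Everything in it is my own invention for illustration (the search space, the names, and the stand-in scoring function); a real architecture search would train and evaluate each candidate network, but the shape of the procedure is the same: optimization over a space an engineer wrote down, against an objective an engineer wrote down.

```python
# A toy stand-in for "AIs creating AIs": the outer algorithm searches a
# small, engineer-specified space of network shapes, scoring each candidate
# with an engineer-specified objective. Nothing here escapes the problem
# space that was defined in advance.

# The "problem space": every candidate architecture the search may consider,
# as (hidden_layer_1, hidden_layer_2) sizes.
SEARCH_SPACE = [(h1, h2) for h1 in (4, 8, 16) for h2 in (4, 8, 16)]

def score(architecture):
    """Made-up proxy for validation accuracy: favors about 64 units of
    capacity, with a small penalty for larger networks. A real system
    would train the candidate network and measure it instead."""
    h1, h2 = architecture
    return -abs(h1 * h2 - 64) - 0.1 * (h1 + h2)

def architecture_search(space):
    """The 'AI that creates AIs': exhaustive optimization over a fixed space."""
    return max(space, key=score)

print(architecture_search(SEARCH_SPACE))  # the engineer's objective fully determines the answer
```

The success condition is baked into `score` before the search ever runs, which is the sense in which this remains machine learning in a constrained and specified problem space rather than anything like open-ended machine agency.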
There is a lot of buzz about AI and the prospect that computers will soon be doing something hugely different from what they’re doing now. It’s the apprehension of what Ray Kurzweil calls the singularity, except that people don’t call it that much anymore.
Under the headline “An AI invented a bunch of new paint colors that are hilariously wrong”, Annalee Newitz discusses the result of training neural networks on Sherwin-Williams decorator colour names. The original work was done by Janelle Shane, who recounted it in a Tumblr post.
Some whinging below the fold.
[N]early all of the communication calls of the Egyptian fruit bat in the roost are emitted during aggressive pairwise interactions, involving squabbling over food or perching locations and protesting against mating attempts.
Using algorithms, researchers were able to discern differences in bat griping depending both on who the target bat was (who was being griped at) and the context (what the griping was about).
I have argued that, for a domain of enquiry that includes meerkats in their natural environment, different meerkat alarm calls and the classes of threats which elicit them comprise natural kinds (see ch. 4). That admits six kinds, because there are three different calls and three corresponding classes of threats.
There’s no reason why the argument doesn’t generalize. For fruit bat groups in their environment, there may well be natural kinds corresponding to distinct classes of vocalizations and to the classes of objects picked out by those vocalizations. But what if it turns out that bats refer to other individual bats using sounds that function like proper names? Suppose there’s an individual bat that the other bats pick out with a specific squeaky sound, something like “leeko leeko leeko”. Does that individual bat count as a natural kind?
One might think the answer has to be no, because kinds and individuals are different ontological categories. I’m not tempted by that, however. As I argue, species might turn out to be continuous individuals (in their fundamental ontology) but still count as natural kinds (see ch. 6).
Nevertheless, the category for the specific bat “leeko” could only be a natural kind for the domain including that specific bat population. And it might lack enough general importance to be a natural kind for a domain that includes all the Egyptian fruit bat populations across both space and time. So my account doesn’t require the answer to be yes.
Moreover, it’s not clear to me from the recent report whether the distinctions between bat vocalizations are clear and sharp enough to count as natural kinds. As always, the answer will depend on the details.
The LA Times has an interesting interview with self-described “data skeptic” Cathy O’Neil, the author of Weapons of Math Destruction. Although the Times puts her skepticism in terms of big data, her concerns are really about values in science. Algorithms, she suggests, have a veneer of objectivity but always reflect choices and valuations. When the algorithms are secret, then the values incorporated in them aren’t open to scrutiny. She says:
I want to separate the moral conversations from the implementation of the data model that formalizes those decisions. I want to see algorithms as formal versions of conversations that have already taken place.
She also makes a point about how polling isn’t just objectively reporting on the state of the electorate, something I would probably have mused about if I’d written the post about the election that I never quite wrote:
[P]olitical polls are actually weapons of math destruction. They’re very influential; people spend enormous amounts of time on them. They’re relatively opaque. But most importantly, they’re destructive in various ways. In particular, they actually affect people’s voting patterns. … Polls can change people’s actual behavior, which disrupts democracy in a direct way.
I’ve ordered a copy of her book, and when it arrives I will put it on top of the stack of books I regret not having read.