In March 2014, I attended a workshop on natural kinds in Paris. Other attendees included Matt Slater, Muhammad Ali Khalidi, and Thomas Reydon. It seemed to me that, although we disagreed about many of the details, we shared a core conception of natural kinds.1 I mooted the idea of writing a consensus statement. We could give it a flashy name, refer to it in our writing, and then maybe other people would start using the phrase too.
Today, while moving the last papers out of my old office, I came across an outline from the workshop. Here I’ve quoted it exactly, including the all-caps title.2 Although at least some of the others agreed in principle, nobody ever actually signed on.
THE SPRINGTIME in PARIS VIEW
NKs should be understood by way of scientific classification
they are natural to the extent that the world constrains classificatory categories3
metaphysical depth is attained by starting superficially and, by considering evidence, making contingent a posteriori claims of greater depth
Over at Crooked Timber, Harry Brighouse exhorts readers to write philosophical clerihews.
I was unfamiliar with the form, but it’s not complicated. A clerihew is a four-line poem about some person or other with an AABB rhyming pattern. The sort of thing Ogden Nash would have written if he’d been less pithy.1
I contributed a couple, and I shamelessly cut and paste them here.
The Scotsman Thomas Reid
had a commonsensical creed,
a fondness for calico cats,
and questionable taste in hats.
The mustachioed John Dewey
might have gone all kablam and kablooey
if he had not understood inquiry
in a way that avoided such injury.
Mark Simonson’s blog got me thinking about information technology and the original aspirations of hypertext. Simonson laments that current technology is too much driven by concepts taken from print media. Part of the problem is the lack of a clearly defined alternative. Ted Nelson, who coined the word “hypertext”, had a vision of multiple texts floating on-screen with lines connecting points in one to points in another. I don’t see how that wouldn’t end up like items on a cork board linked by lengths of yarn, the idiom for madness from A Beautiful Mind, which has become Hollywood shorthand for crazy conspiracy theories.
Old school blogging actually seems like a pretty good realization of hypertext. Good blog posts take a while to write because you’ve got to provide pointers so that someone who hasn’t got the context, or who is curious, can follow up. Someone who wants even more can search on key terms.
All of this crystallized for me what I don’t like about Twitter. In order to cut a thought down to Tweet length, people leave out context. What are they enraged about? What’s the thrust that drew their clever riposte? I can’t always tell.
Sometimes thoughts that won’t fit into a single tweet are written as a stream, possibly with numbered entries 1/9, 2/9,… I see entry 4 of 9 because someone retweeted it, and it’s a serious investment of effort just to view the original series in order. Even then, I can’t always suss out the context.
Twitter, in short, is hypotext. It eschews the links of hypertext but also the context you’d expect from a letter or newspaper article.
Part of the shift is that many people go on-line primarily with phones or tablets, appliances that are great for scrolling and clicking but bad for following multiple threads. Twitter and Facebook turn our feeds into one-dimensional things. We can scroll through, liking and reposting as we go. But reposting just drops another log somewhere into the flume.
Based on your own sense of how words work, pick one of the following:
Every word is an anagram of itself.
Some but not all words are anagrams of themselves.
No word is an anagram of itself.
There’s a principled case to be made for every answer. Cristyn and I hashed it out over goat cheese last night, but I won’t tell you the considerations we mustered on various sides or what we concluded. I’m curious about what you think.
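For what it’s worth, the competing answers correspond to different ways you might formalize “anagram”. A minimal sketch, with definitions of my own devising rather than anything canonical:

    # One common formalization: two strings are anagrams if they use
    # the same letters with the same multiplicities. On this definition,
    # every word is trivially an anagram of itself.
    def is_anagram(a: str, b: str) -> bool:
        return sorted(a.lower()) == sorted(b.lower())

    # A stricter variant that also requires the words to differ.
    # On this definition, no word is an anagram of itself.
    def is_proper_anagram(a: str, b: str) -> bool:
        return a.lower() != b.lower() and is_anagram(a, b)

    print(is_anagram("stop", "stop"))         # True
    print(is_proper_anagram("stop", "stop"))  # False
    print(is_proper_anagram("stop", "pots"))  # True

The middle answer falls out on yet another reading: require a nontrivial rearrangement of the letters that yields the same string, which is possible exactly when a word repeats a letter, so “noon” qualifies and “stop” doesn’t.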
I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who define the inputs and outputs.
Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.
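To make the point concrete, here is a deliberately silly sketch of that kind of search. Everything in it is my stipulation rather than a description of any actual system: the engineers still write down the space of candidate structures, the scoring function, and the stopping rule, and the “AI creating AIs” is just optimization inside that box.

    import random

    # An engineer-specified space of candidate network structures.
    search_space = {
        "layers": [2, 3, 4],
        "width": [32, 64, 128],
        "activation": ["relu", "tanh"],
    }

    def score(arch):
        # Stand-in for "train the candidate network and measure it".
        # Deterministic nonsense, just to keep the sketch runnable.
        return random.Random(str(arch)).random()

    # Random search: propose candidates from the space, keep the best
    # according to the engineer-defined score.
    candidates = [{k: random.choice(v) for k, v in search_space.items()}
                  for _ in range(20)]
    best = max(candidates, key=score)
    print(best)

However clever the search procedure gets, the success conditions were fixed before it ran.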
There is a lot of buzz about AI and the prospect that computers will soon be doing something hugely different than what they’re doing now. It’s apprehension of what Ray Kurzweil calls the singularity, except that people don’t call it that much anymore.
Via Daily Nous, I came across a free set of text analysis tools called Voyant. You can paste in a passage or point it at some URLs, and it will chop the text into words and phrases.
I let it chew on my book, and one of the products was this graph of word density:
[graph of word density across segments of the book]
It looks all sciencey, like the kind of thing that prop people might put on a screen in the background of a lab scene. It isn’t very informative, though. The curve has “species” dipping below zero, even though the word occurs at least once in every segment.
I learned that “natural”, “kind”, and “kinds” make up about three percent of the words in the book. That three percent was, I suppose, the easiest part to write.
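That headline number is easy to recompute for any text. A rough sketch, assuming the book’s text lives in a plain-text file (the filename is a placeholder) and using cruder tokenization than whatever Voyant does:

    from collections import Counter
    import re

    # Crude tokenization: lowercase alphabetic runs.
    with open("book.txt") as f:  # placeholder filename
        words = re.findall(r"[a-z]+", f.read().lower())

    counts = Counter(words)
    targets = ["natural", "kind", "kinds"]
    share = sum(counts[w] for w in targets) / len(words)
    print(f"{share:.1%} of the words are one of {targets}")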