Two links about AI

There are articles I read and think I ought to blog about. Then I realize that, for the most part, I already have. So this is a link-dump kind of post.

Link #1: Geoffrey Hinton cautions that deep learning is not especially deep

I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.

Link #2: Efforts to understand the bias inherent in algorithms

Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.

Tweets point nowhere

Mark Simonson’s blog got me thinking about information technology and the original aspirations of hypertext. Simonson laments that current technology is too much driven by concepts taken from print media. Part of the problem is the lack of a clearly defined alternative. Ted Nelson, who coined the word “hypertext”, had a vision of multiple texts floating on-screen with lines connecting points in one to points in another. I don’t see how that wouldn’t end up like items on a cork board linked by lengths of yarn, the idiom for madness from A Beautiful Mind which has become Hollywood shorthand for crazy conspiracy theories.

Old school blogging actually seems like a pretty good realization of hypertext. Good blog posts take a while to write because you’ve got to provide pointers, so that someone who lacks context or is simply curious can follow up. Someone who wants even more can search on key terms.

All of this crystallized for me what I don’t like about Twitter. In order to cut a thought down to Tweet length, people leave out context. What are they enraged about? What’s the thrust that drew their clever riposte? I can’t always tell.

Sometimes thoughts that won’t fit into a single tweet are written as a stream, possibly with numbered entries 1/9, 2/9, … I see entry 4 of 9 because someone retweeted it, and it’s a serious investment of effort just to view the original series in order. Even then, I can’t always suss out the context.

Twitter, in short, is hypotext. It eschews the links of hypertext but also the context you’d expect from a letter or newspaper article.

Part of the shift is that many people go on-line primarily with phones or tablets, appliances that are great for scrolling and clicking but bad for following multiple threads. Twitter and Facebook turn our feeds into one-dimensional things. We can scroll through, liking and reposting as we go. But reposting just drops another log somewhere into the flume.

Reader query, re: anagrams

Based on your own sense of how words work, pick one of the following:

  • Every word is an anagram of itself.
  • Some but not all words are anagrams of themselves.
  • No word is an anagram of itself.

There’s a principled case to be made for every answer. Cristyn and I hashed it out over goat cheese last night, but I won’t tell you the considerations we mustered on various sides or what we concluded. I’m curious about what you think.
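For what it’s worth, the three answers do correspond to three precise definitions one could write down. Here’s a sketch of my own glosses (not anything Cristyn and I settled on); the function names and the “letter tokens” reading are my framing:

```python
from collections import Counter

def letters(word):
    """Multiset of letters, ignoring case."""
    return Counter(word.lower())

# Reading 1: an anagram is any rearrangement of the letters.
# The identity rearrangement counts, so every word is an anagram of itself.
def anagram_any(a, b):
    return letters(a) == letters(b)

# Reading 3: an anagram must be a *different* word with the same letters.
# No word is an anagram of itself.
def anagram_different(a, b):
    return a.lower() != b.lower() and letters(a) == letters(b)

# Reading 2: an anagram is a *nontrivial* permutation of the letter tokens,
# even if the resulting string looks the same. Swapping the two o's in
# "noon" is a nontrivial permutation that yields "noon" again, so words
# with a repeated letter are anagrams of themselves; words like "cat",
# whose letters are all distinct, are not.
def anagram_nontrivial_perm(a, b):
    return letters(a) == letters(b) and (
        a.lower() != b.lower() or max(letters(a).values()) > 1
    )
```

On the middle reading, whether a word is an anagram of itself turns on a brute linguistic fact: does it repeat a letter?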

AI ai ai

I recently commented on the fact that machine learning with neural networks now regularly gets called “AI”. I find the locution perplexing, because these machine learning problems have success conditions set up by engineers who defined the inputs and outputs.

Here is another headline which doubles down on the locution, discussing AIs creating AIs. Yet having a neural network solve an optimization problem is still machine learning in a constrained and specified problem space, even if it’s optimizing the structure of other neural networks.

Brave new age of robot overlords this ain’t.


Bunflow is more tan than Caring Tan is

Algorithmically-generated designer colours (from post by Janelle Shane)

There is a lot of buzz about AI and the prospect that computers will soon be doing something hugely different than what they’re doing now. It’s apprehension of what Ray Kurzweil calls the singularity, except that people don’t call it that much anymore.[1]

Under the headline An AI invented a bunch of new paint colors that are hilariously wrong, Annalee Newitz discusses the result of training neural networks on Sherwin-Williams decorator colour names. The original work was done by Janelle Shane, who recounted it in a Tumblr post.

Some whinging below the fold.


Words plotted thusly

Via Daily Nous, I came across a free set of text analysis tools by Voyant. You can paste in a passage or point it at some URLs, and it will chop it into words and phrases.

I let it chew on my book, and one of the products was this graph of word density:

[Graph: word density]

It looks all sciencey, like the kind of thing that prop people might put on a screen in the background of a lab scene. It isn’t very informative, though. The curve has “species” dipping below zero, even though the word occurs at least once in every segment.

I learned that “natural”, “kind”, and “kinds” make up about three percent of the words in the book. That three percent was, I suppose, the easiest part to write.
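The density curve is, roughly, a per-segment relative frequency; since a frequency can’t be negative, the dip below zero is presumably an artefact of curve smoothing. A minimal sketch of the underlying computation (the ten-segment split and the term list are my assumptions, not Voyant’s actual algorithm):

```python
import re

def word_density(text, terms, segments=10):
    """Relative frequency of each term in each of `segments` roughly
    equal chunks of the text. Every value is >= 0 by construction."""
    words = re.findall(r"[a-z']+", text.lower())
    size = max(1, len(words) // segments)
    chunks = [words[i:i + size] for i in range(0, len(words), size)][:segments]
    return {
        term: [chunk.count(term) / len(chunk) for chunk in chunks]
        for term in terms
    }
```

Pointed at a book, something like `word_density(open("book.txt").read(), ["species", "natural", "kind"])` would give the raw numbers behind a graph like the one above.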


At the Palais Royale

This is a poem I wrote back in May 2011; I came across it while looking through some old files. It resonates just at the moment, when my Facebook feed is flooded with paeans to Prince.

At the Palais Royale

Anyone can use these words in any situation.
These words are in no way special.

You kind of have already been there.
Say yes or no, uninterrupted.
This guy is, but that is not.

Prince is destroying
the minds of our Christian children,
because he was sexually deviant.
You know Superman.
Scooby doo.
That was racy.

My background was askew.
There were always fights,
but the bus driver didn’t care.
One of my first memories.
The craziest shit —
it was stamps, too.
We’d alternate mornings.

I can tell you every top ten soul song from that year.
It’s really not about making the music.

Sade disappoints me
White enough.
Beautiful, it reminded me of a concentration camp.
It reminded me of the moon.