There are some articles I read and think I ought to blog about. Then I realize I basically already have. So this is a link-dump kind of post.
I’ve written some posts about the glitzy fad for “deep learning”. It has the same strengths and weaknesses it had when it traveled under the less-shiny banner of “back-propagation neural networks”.
Procedures that are superficially objective can encode bias. I don’t have anything deep to say here, but I’ve blogged about it before.