Thursday, December 27, 2012

reading: Computer Power and Human Reason

Not long ago, I got a copy of Joseph Weizenbaum's Computer Power and Human Reason from a pile of free books -- there's so much great stuff on the free books table when a professor retires!

I’d recommend reading it if you’re into the ethical issues surrounding computing or the history of AI. The book was published in 1976; our relationship with computing has changed a lot since then, and that was probably the most striking thing about reading the book now.

Weizenbaum was worried about the trust that society placed in the computer systems of the time. He describes situations in which people felt they were slaves to systems too complex for anyone to understand and too far removed from human judgement to be humane; examples include planning systems that told pilots where to bomb during the Vietnam War. But the systems had computers involved, and they were made by experts, so they must be right! "Garbage in; gospel out." And down came the bombs.

I'd argue that we've become less dazzled by computers as such, and that we no longer think of them as infallible. But perhaps we're less likely to think about the computers themselves at all. They've become ubiquitous, just the infrastructure that makes society work. My mother (a keen observer of technology) recently remarked that it's strange that we still call them "computers" when the point is to use them for communication. We may still have problems of blind obedience, but perhaps it's better understood as blind obedience to people.

Similarly, Weizenbaum was concerned about the social power wielded by scientists, engineers, and other experts. To me, in today's fair-and-balanced political climate, this sounds like a good problem to have: people used to listen to experts? Did they listen when the experts said things that were politically inconvenient for those with money? Perhaps not...

Computer Power and Human Reason also spends some time on the exuberant claims about AI from before the AI Winter. Herbert Simon said, "... in a visible future -- the range of problems they [machines] can handle will be coextensive with the range to which the human mind has been applied", which was clearly somewhat premature. But we have made progress on a lot of fronts! Weizenbaum was quite skeptical that machine translation would ever be any good, despite claims (which he relates in the book) that MT really just needed more processing power and more data. A few decades later, MT is often pretty good! All it took was more processing power and more data.

There's also some beautifully strange writing. Towards the beginning, he spends a few chapters explaining how computers work, in a formal, abstract way. And then we get this:
Suppose there were an enormous telephone network in which each telephone is permanently connected to a number of other telephones; there are no sets with dials. All subscribers constantly watch the same channel on television, and whenever a commercial, i.e., an active interval, begins, they all rush to their telephones and shout either "one" or "zero," depending on what is written on a notepad attached to their apparatus. ...
I have trouble imagining that this metaphor has helped many people understand digital logic circuits, but I enjoyed reading the book! Perhaps you'd enjoy it as well.