Thursday, December 27, 2012

reading: Computer Power and Human Reason

Not long ago, I got a copy of Joseph Weizenbaum's Computer Power and Human Reason from a pile of free books -- there's so much great stuff on the free books table when a professor retires!

I’d recommend reading it if you’re into the ethical issues surrounding computing or the history of AI. The book was published in 1976; our relationship with computing has changed a lot since then, and that was probably the most striking thing about reading the book now.

Weizenbaum was worried about the trust that society placed in the computer systems of the time. He describes situations in which people felt they were slaves to systems too complex for anyone to understand and too far removed from human judgement to be humane; his examples include planning systems that told pilots where to bomb during the Vietnam War. But the systems had computers involved, and they were made by experts, so they must be right! "Garbage in; gospel out." And down came the bombs.

I'd argue that we've become less dazzled by computers as such, that we no longer think of them as infallible. But perhaps we're less likely to think about the computers themselves at all. They've become ubiquitous, just the infrastructure that makes society work. My mother (a keen observer of technology) recently remarked that it's strange that we still call them "computers" when the point is to use them for communication. We may still have problems of blind obedience, but perhaps it's better understood as blind obedience to people.

Similarly, Weizenbaum was concerned about the social power wielded by scientists, engineers, and other experts. To me, in today's fair-and-balanced political climate, this sounds like a good problem to have: people used to listen to experts? Did they listen even when the experts said things that were politically inconvenient for those with money? Perhaps not...

Computer Power and Human Reason also spends some time with the exuberant claims about AI from before the AI Winter. Herbert Simon said, "... in a visible future – the range of problems they [machines] can handle will be coextensive with the range to which the human mind has been applied", which was clearly somewhat premature. But we have made progress on a lot of fronts! Weizenbaum was quite skeptical that machine translation would be any good, despite claims (which he relates in the book) that MT really just needed more processing power and more data. A few decades later, MT is often pretty good! All it took was more processing power and more data.

There's also some beautifully strange writing. Towards the beginning, he spends a few chapters explaining how computers work, in a formal, abstract way. And then we get this:
Suppose there were an enormous telephone network in which each telephone is permanently connected to a number of other telephones; there are no sets with dials. All subscribers constantly watch the same channel on television, and whenever a commercial, i.e., an active interval, begins, they all rush to their telephones and shout either "one" or "zero," depending on what is written on a notepad attached to their apparatus. ...
I have trouble imagining that this metaphor has helped many people understand digital logic circuits; but I enjoyed reading the book! Perhaps you'd enjoy it as well.

Saturday, July 07, 2012

this is something new and beautiful: Coursera and Udacity

Just last week, I finished the coursework for Coursera's machine learning class. It was great! I had a really good time with it, and I'm fairly proud of the accomplishment.

If you've been within earshot of me in the past few months, you probably know that I'm really excited about Coursera and Udacity and their ilk (including, but not limited to, edX, Khan Academy, and Duolingo). There are two experiences I'd like to contrast with taking a course on Coursera.

Some years ago, I was living in Atlanta and working a real job. And I went over to the Georgia Tech math department to see about taking some master's-level statistics classes, imagining that they would let me pay them lots of money in exchange for taking classes at the university where I had graduated just months prior. But it turned out that they wouldn't let me do this without being admitted to a full-time degree program.

Fewer years ago, I was starting my PhD at Indiana, and knowing exactly what I was there to learn, I picked out three classes: one on NLP, a computational linguistics class (from the Linguistics department), and one from Stats. I got a mild hassle from the department about my choices: these were all "fun" classes, and shouldn't I work on fulfilling my breadth requirements? I've since finished my IU coursework, and let me say: not all the classes I had to take as a result were very interesting, or even very well taught. Some were downright bad.

But now there are free online courses that are meant to be good, and you take the ones you're interested in taking -- as opposed to expensive in-person courses that may not be good, but that you're obliged to take anyway. This is huge.

Whether or not you think that teaching in person is going to stay relevant, not everybody has access to good teachers in person. This remains true even for people at universities.

Moreover, online classes lower the barriers to entering or leaving a course to almost nothing. Want to sign up for a class just to try it out? Nothing could be easier! Don't enjoy it, or it's not what you thought it was, or you find out you're busy with other stuff? Nothing lost; try a different one! But if you stick it out and put in the effort, then not only have you learned something, you also get a certificate that says you finished! (Maybe these could be OpenBadges sooner or later...)

There are going to be lots of bytes spilled about these things in the coming years, but just to make it clear: I'm jazzed about helping people who want to learn things get access to material about those things. And the World Music class is starting up soon, which my mother and I are going to take! Because why not?

Saturday, June 30, 2012

happy hardware review: usb wireless adapter from ThinkPenguin

I'm in Mountain View for the summer, working on Google Translate for another internship with that company I seem to work for fairly often. Hooray!

Unfortunately, my laptop's built-in wireless card really doesn't agree with the apartment complex's wireless. So I ordered a little USB stick wireless adapter from ThinkPenguin, and it came pretty quickly, and I plugged it in (and told wicd to look at wlan1 instead of wlan0), and it just worked! Now my wireless connection is pretty fast, and doesn't drop every five minutes! (unlike before; it was a serious pain.)
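For the curious, the wicd change was just a one-line setting pointing it at the new interface. I'm going from memory here, so the config path and key name below may differ on your setup (the wicd-client preferences dialog exposes the same setting):

$ sudoedit /etc/wicd/manager-settings.conf    # set wireless_interface = wlan1
$ sudo /etc/init.d/wicd restart               # restart the daemon so it notices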

Particularly, I got this one. Their other products may also be lovely. Thanks, ThinkPenguin!

Wednesday, May 30, 2012

take five minutes: support open access

tl;dr: Sign this petition to support open access for publicly-funded research!! http://wh.gov/6TH

Here's the situation: there's lots of scholarly work being done. And you, as a citizen of a country, are paying academics to do science (or whatever), write about it, and review the work of other scholars. The work that makes it through the reviewing process gets published, typically in a journal or at a conference.

Here's the problem: a lot of that scholarly work is then inaccessible to you. You have to pay to read it, and often you have to pay a lot. If you're at a well-funded academic institution, your university library has to pay a lot. It's a serious problem even for universities as wealthy as Harvard. Where does this money go? It doesn't go to the academics who wrote the papers, or to those who reviewed them: it goes to publishing companies with absurd profit margins who have trouble pointing at what value they add to the process, aside from happening to own prestigious journals.

Concretely, this is a problem for the independent researcher, for the small-business developer-of-stuff who wants the latest developments, for the interested public who want to read and learn and grow, for the precocious teenager. I've come to care kind of a lot about this issue because I believe in science. I think it's pretty important: it should get out to as many people as possible, not just because the citizens paid for it in the first place, but also so we can make progress faster.

The National Institutes of Health have famously set up an Open Access mandate: all the research that they fund must be made available to the public soon after it's published. Many universities are doing the same thing. The Association for Computational Linguistics (who run the conferences and journals where I'm personally likely to publish) does a bang-up job of making all of their articles publicly available, and I'm really proud to be associated with them. But not every professional organization, and not every field's journals, are like this. Most are not!

How can you help? Right now, there's a petition on the White House website where you can ask the administration to expand the NIH-style mandate to other funding agencies: I'd really appreciate if you'd take a minute to make an account and sign the petition. Click here: http://wh.gov/6TH

(hrm, I seem to have written about this back in 2007 too)

Thursday, May 10, 2012

command line tricks: ps, grep, awk, xargs, kill

I recently learned a little bit of awk; if it's not in your command line repertoire, it's worth looking into! awk lets you do things like this:

$ ps auxww | grep weka | grep -v grep | awk '{print $2}' | xargs kill -9

Let's unpack what's going on here. First, we list all processes in wide format (that's what the "auxww" options to ps do), then we filter with grep to keep only the lines that mention "weka". When I wrote this line, I was debugging a long-running machine learning task: I would start it running, then if (when) I found a bug and wanted to restart, I used this command to kill it.

Now we have all the lines of output from ps that include "weka". Unfortunately, this includes the grep process that's searching for "weka"! No problem, just use "grep -v" to filter out the lines that include "grep".
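(An aside: there are a couple of standard ways to dodge the extra grep. The character-class trick makes the grep process's own command line not match its pattern, and pgrep/pkill skip the pipeline entirely; either of these should do the same job:)

$ ps auxww | grep '[w]eka'    # "grep [w]eka" doesn't contain the literal string "weka", so grep filters itself out
$ pkill -9 -f weka            # or: match "weka" against full command lines and kill the matches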

This is where awk comes in. We want to get the process numbers out of the ps output. It seems like we could use cut to just get the second column, but we don't know how wide that column is going to be! Maybe there's a cut option for that, but I don't think there is. Instead, we just use a tiny awk script that prints the second whitespace-delimited thing on each line.
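(If you really wanted to stick with cut, one workaround is to squeeze the runs of spaces down to single spaces first; something like this should do it:)

$ ps auxww | grep '[w]eka' | tr -s ' ' | cut -d ' ' -f 2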

Finally, we use xargs to take the process numbers and make them arguments to kill. xargs is great: it takes each line of its standard input and turns those lines into arguments for the program you name (which is xargs's own first argument). Usually I use xargs in combination with find, "svn status", or "git status -s", to do the same thing to batches of files -- maybe delete them, or add them to version control, or whatever.
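For instance, here's the kind of find-plus-xargs combo I mean (the file patterns are made up for illustration):

$ find . -name '*.pyc' | xargs rm              # delete every stale .pyc under the current directory
$ find . -name '*.txt' | xargs grep -l weka    # which text files mention weka?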

Thoughts? Better ways to do this sort of thing?

Saturday, April 28, 2012

startlingly bad moments in API design

Weka, the machine learning toolkit, has these nice filters that let you change what's in a data set, maybe the features on the instances, or the instances themselves. Pretty useful. One is called "Remove", and it removes features. Here's a case in Weka where order matters when you're setting up the parameters for an object.

Like so: this does not remove any features.
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

Remove remove = new Remove();
remove.setInputFormat(instances); // too early: the filter fixes its configuration here...
remove.setAttributeIndices("7,10,100"); // ...so this call and the next quietly change nothing.
remove.setInvertSelection(true); // delete the other ones.
Instances out = Filter.useFilter(instances, remove);
This works just fine, though:
Remove remove = new Remove();
remove.setAttributeIndices("7,10,100");
remove.setInvertSelection(true); // delete the other ones.
remove.setInputFormat(instances); // setInputFormat last, after all the other options.
Instances out = Filter.useFilter(instances, remove);
How are you supposed to find that out?

Thursday, March 29, 2012

quals writeup: Tree Transducers, Machine Translation, and Cross-Language Divergences

I hope it's not too pretentious to put things I'm writing for my PhD qualifiers on arXiv. I think arXiv is really exciting, by the way. Leak your preprints there! Also pretty exciting: tree transducers for machine translation.

Abstract:
Tree transducers are formal automata that transform trees into other trees. Many varieties of tree transducers have been explored in the automata theory literature, and more recently, in the machine translation literature. In this paper I review T and xT transducers, situate them among related formalisms, and show how they can be used to implement rules for machine translation systems that cover all of the cross-language structural divergences described in Bonnie Dorr's influential article on the topic. I also present an implementation of xT transduction, suitable and convenient for experimenting with translation rules.
Paper! http://arxiv.org/abs/1203.6136

Software! http://github.com/alexrudnick/kurt

Thursday, January 19, 2012

and we're back

Well done, Internets!

In solidarity with everybody doing the #j18 protests against SOPA and PIPA, I blacked out this blog and my academic web page; I'd be really surprised if this directly caused anybody to call any legislators. My email to my family on the topic was probably more effective: one of my uncles wrote back, saying he'd signed a petition. So: rad!

The protests seem to have been incredibly loud and fairly effective. At this point, a congressperson would have to be incredibly dense to not get the sense that the public outcry against censoring the Internet in the US is enormous. A number of Republicans, including some former co-sponsors, have taken the opportunity to switch to opposing the bills, which seems politically expedient. (article on DailyKos about this. Kos wonders why Democrats seem to be willing to be left holding the bag...)

But even if we manage to get SOPA and PIPA scrapped, we're still left with two fundamental problems.

(1) The MPAA and RIAA can try to break the Internet again later, because they'll still own a significant number of congresspeople. Say OPEN picks up steam and gets passed: will that be enough for them? The music and movie industries have fought tooth-and-nail against new technologies for decades; what's to stop them from taking another run at the Internet, or at whatever we have in the future? How can we reduce their money and influence over time? Even for an angry Internet activist, getting all of your family and friends to boycott all media produced by major labels and studios seems extraordinarily hard. I must admit: I totally bought an Andrew W.K. CD not too long ago, and we have Netflix at our house. Should we cancel it?

(2) More fundamentally: large companies can own congresspeople. How can we take control of Congress, as citizens? Note that I didn't say "take back Congress" -- there's been a disconcerting connection between money and power for our entire history. If you haven't read Howard Zinn's A People's History of the United States, I highly recommend it.

People much more insightful and more dedicated than I am have written quite a lot about this, but I suspect the solution really is campaign finance reform. Simply taking away the incentives to make awful decisions, while encouraging behavior that at least some people like, would probably result in a Congress that... makes fewer awful decisions and has a double-digit approval rating. I think term limits would also be useful, so that congresspeople don't have to worry about re-election so often (although, how do you incentivize good behavior in the last term? ...), along with some sort of rules to keep former congresspersons from becoming lobbyists, so we can prevent the Chris Dodd situation from happening again. He still goes by "Senator Dodd", but this year he quit the Senate to become the head of the MPAA.

Oh, also: Super PACs, and the Citizens United decision.

However we move forward: today, a large number of people who had never tried to contact their elected representatives now have. The more often we do it, the lower the psychological barrier! There are even phone apps (android, iphone). I use the Android one all the time; it's extremely convenient.

Right. Anyway. Stop reading this blog, and go read what Lawrence Lessig has to say. I'll get back to doing some computational linguistics. Maybe you should too!