Post before the end of the year!
I'm really enjoying Readings in Machine Translation -- it's got all of these great MT papers from past decades, ranging from the Warren Weaver memo from 1949 to the Brown et al. paper that made statistical MT fashionable again in the early '90s. Apparently, a lot of the papers in the volume were somewhat hard to find online in 2003, when it was published.
Really interesting: so far, the early papers have had some very detailed descriptions of the low-level particulars. "Well, we're going to need this many memory drums...", "oh, and the words will be stored in memory in alphabetical order" (which seems very archaic), and a fixation on picking the right word in the target language, in a word-sense disambiguation sort of way (which is slightly fashionable again!).
So for people into MT who want a sense of history, these seem like the papers one should read -- I mean, Sergei Nirenburg and friends picked them out, so they've got to be good, right?
If you haven't read the 1949 Warren Weaver memo, though -- even if you're not an NLP person -- do yourself a favor and go ahead and read it!