Saturday, March 31, 2007
list comprehensions
I've been a fan of these for a while, but I'd like to share:
import random

tenpercent = len(lines) // 10  # floor division, so sample() gets an integer count
testset = random.sample(lines, tenpercent)
trainingset = [line for line in lines if line not in testset]
Python makes me warm and fuzzy on the inside. Also, random.sample() is pretty sexy!
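One caveat, though: that `line not in testset` check rescans the sample for every single line, and it throws out every copy of a duplicated line, not just the sampled one. If that matters for your data, a shuffle-and-slice split avoids both problems. Here's a rough sketch (the function name and test_fraction parameter are my own invention):

import random

def split_lines(lines, test_fraction=0.1):
    # Shuffle a copy so the caller's list is left untouched, then slice:
    # the first chunk becomes the test set, the rest is for training.
    shuffled = lines[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]

testset, trainingset = split_lines(lines)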
Wednesday, March 28, 2007
brains 'n' balancing training data
- Martin points us to a nice article over on Developing Intelligence: 10 Important Differences Between Brains and Computers. Your computational metaphor just breaks down eventually, y'know? The brain is not very much like a von Neumann computer. It's a lot squishier.
If building classifiers is your thing, you may want to take a look at these articles:
- Gustavo E. A. P. A. Batista, Ana L. C. Bazzan, and Maria Carolina Monard: Balancing Training Data for Automated Annotation of Keywords: a Case Study.
Three researchers, seven middle names, one novel technique for building balanced data sets out of unbalanced ones for training classifiers: generate new instances of your minority class by interpolating between examples actually in your dataset (there's a toy sketch of the idea just after this list). I'm still trying to decide whether this approach works in the general case -- does it make too many assumptions about the shape of the space? In particular: can you arbitrarily draw lines (in higher-dimensional space) between positive instances? What if negative instances lie between the two endpoints? Which dimensions do you look at first, and how is this better than just adding some noise or weighting positive examples more heavily? (Is that last option the same as simply counting them several times?)
- Foster Provost: Machine Learning from Imbalanced Data Sets 101.
A basic overview of the problem, examining the motivation for building classifiers at all and some different approaches to sampling. The award for Best Name Ever goes to Dr. Foster Provost.
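To make the interpolation idea concrete, here's a toy sketch: each synthetic minority example lands at a random point on the segment between two real ones. (The paper's actual technique is more careful about choosing neighbors; the names and numbers below are made up for illustration.)

import random

def synthesize(minority, n_new):
    # Create n_new synthetic minority-class points, each lying somewhere
    # on the segment between two randomly chosen real examples.
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(minority, 2)
        t = random.random()  # how far along the segment to land
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

# e.g. pad a minority class of feature vectors up toward the majority size:
minority = [[1.0, 2.0], [1.5, 2.5], [0.9, 1.8]]
balanced = minority + synthesize(minority, n_new=7)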
Friday, March 23, 2007
reading feeds over the web
It seems like all the cool kids are using Google Reader these days, and I must admit, I'm impressed. The interface is so clean, and a web-based feed reader where you can aggregate all of your habitual reading seems like the right thing. And the ability to share items from your feed with your friends without making link-only blog posts (or forwarding emails around) is pretty compelling. On the other hand, the point of so much of the web thus far has been linking to other parts of the web: it has a mysterious self-referential nature... does this sort of sharing diminish that aspect? Does a meta-feed like this put you, as the independent media maven that you are, on a different level than the well-established weblogs? When you stop putting links in your blog and publish a Google Reader feed, are you more like BoingBoing or metafilter, or less?
It is the future. We've got dynabooks and memexes, and we use them to distribute pictures of cats doing cute things.
O my vast readership, I address to you this question: how do you read news online? Do you have some separate feed reader program? Do you use your browser's RSS features? Google Reader? Your LiveJournal friends page? Something else?
And moreover: for the LiveJournal denizens, does anyone know of a good method for reading "friends-only" posts through Google Reader? There are a few posts out in the world on this topic, but nobody seems to have a decisive answer yet... perhaps we can answer the question definitively.