Question

I'm trying to use scikit-learn, which needs numpy/scipy arrays for input. The featureset generated in NLTK consists of unigram and bigram frequencies. I could convert it manually, but that would be a lot of effort, so I'm wondering if there's a solution I've overlooked.

Solution

Not that I know of, but note that scikit-learn can do n-gram frequency counting itself. Assuming word-level n-grams:

from sklearn.feature_extraction.text import CountVectorizer

# ngram_range=(1, 2) counts unigrams and bigrams in a single pass
v = CountVectorizer(ngram_range=(1, 2))
X = v.fit_transform(files)

where files is a list of strings or file-like objects. After this, X is a scipy.sparse matrix of raw frequency counts. (Older scikit-learn releases exposed the same n-gram counting through WordNGramAnalyzer, which has since been removed; ngram_range is the current API.)
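To sanity-check the result, here is a minimal sketch; the toy docs corpus and the inspection calls are illustrative additions, not part of the original answer:

from sklearn.feature_extraction.text import CountVectorizer

# hypothetical toy corpus
docs = ["the cat sat", "the cat sat on the mat"]
v = CountVectorizer(ngram_range=(1, 2))
X = v.fit_transform(docs)

print(v.get_feature_names_out())  # the unigram and bigram vocabulary
print(X.toarray())                # dense view of the sparse count matrix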

OTHER TIPS

Jacob Perkins wrote a bridge for training NLTK classifiers using scikit-learn classifiers that does exactly that. Here is the source:

https://github.com/japerk/nltk-trainer/blob/master/nltk_trainer/classification/sci.py

The package import lines need updating if you are using scikit-learn 0.9+, where the top-level package was renamed from scikits.learn to sklearn.
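If you would rather do the conversion yourself, scikit-learn's DictVectorizer turns NLTK-style feature dicts into a scipy.sparse matrix, and NLTK now ships a similar wrapper as nltk.classify.scikitlearn.SklearnClassifier. A minimal sketch under those assumptions; the featuresets here are hypothetical:

from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from nltk.classify.scikitlearn import SklearnClassifier

# hypothetical NLTK-style featuresets: (feature dict, label) pairs
featuresets = [({"cat": 1, "cat sat": 1}, "pos"),
               ({"dog": 2, "dog ran": 1}, "neg")]

# option 1: convert the feature dicts directly
dicts, labels = zip(*featuresets)
X = DictVectorizer().fit_transform(dicts)  # scipy.sparse count matrix

# option 2: let NLTK's wrapper handle conversion and training
clf = SklearnClassifier(MultinomialNB())
clf.train(featuresets)
print(clf.classify({"cat": 1}))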

Licensed under: CC-BY-SA with attribution