Question

Hi, I have a problem with nltk (2.0.4): I'm trying to stem (lemmatize) the words 'men' and 'teeth', but it doesn't seem to work. Here's my code:

############################################################################
import nltk
from nltk.corpus import wordnet as wn
from nltk.stem.wordnet import WordNetLemmatizer

lmtzr = WordNetLemmatizer()
words_raw = "men teeth"
words = nltk.word_tokenize(words_raw)
for word in words:
    print 'WordNet Lemmatizer NOUN: ' + lmtzr.lemmatize(word, wn.NOUN)
#############################################################################

This should print 'man' and 'tooth' but instead it prints 'men' and 'teeth'.

Any solutions?


Solution

I found the solution! I checked the file wordnet.py in the folder /usr/local/lib/python2.6/dist-packages/nltk/corpus/reader and noticed that the function _morphy(self, form, pos) returns a list of stemmed words. So I tried to test _morphy:

import nltk
from nltk.corpus import wordnet as wn
from nltk.stem.wordnet import WordNetLemmatizer

words_raw = "men teeth books"
words = nltk.word_tokenize(words_raw)
for word in words:
    print wn._morphy(word, wn.NOUN)

This program prints ['men', 'man'], ['teeth', 'tooth'] and ['book']!

The explanation of why lmtzr.lemmatize() returns only one element of the list can perhaps be found in the lemmatize function, contained in the file wordnet.py in the folder /usr/local/lib/python2.6/dist-packages/nltk/stem:

def lemmatize(self, word, pos=NOUN):
    lemmas = wordnet._morphy(word, pos)
    return min(lemmas, key=len) if lemmas else word

I assume that it returns only the shortest word in the list, and if several words have the same length it returns the first one; for example, in ['men', 'man'] and ['teeth', 'tooth'] both candidates are equally long, so it returns 'men' and 'teeth' rather than 'man' and 'tooth'.
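For illustration, here is a minimal sketch of a workaround (not NLTK code; the helper name lemmatize_preferring_change is made up for this example) that reuses wn._morphy but prefers a candidate that differs from the input word, which gives 'man' and 'tooth' for the irregular plurals above:

import nltk
from nltk.corpus import wordnet as wn

def lemmatize_preferring_change(word, pos=wn.NOUN):
    # _morphy returns every candidate base form, e.g. ['men', 'man'] for 'men'.
    lemmas = wn._morphy(word, pos)
    # Prefer a candidate that actually differs from the input word.
    changed = [l for l in lemmas if l != word]
    if changed:
        return min(changed, key=len)
    # Otherwise fall back to the same choice lemmatize() makes.
    return min(lemmas, key=len) if lemmas else word

for w in nltk.word_tokenize("men teeth books"):
    print w, '->', lemmatize_preferring_change(w)
# men -> man
# teeth -> tooth
# books -> book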

OTHER TIPS

There is nothing wrong with the WordNetLemmatizer per se, it just can't handle irregular words well enough. You could try this 'hack' and look for the closest match among the lemma_names of the word's synsets:

>>> from itertools import chain
>>> from nltk.corpus import wordnet
>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> word = "teeth"
>>> wnl.lemmatize(word)
'teeth'
>>> wnlemmas = list(set(list(chain(*[i.lemma_names() for i in wordnet.synsets('teeth')]))))
>>> from difflib import get_close_matches as gcm
>>> [i for i in gcm(word,wnlemmas) if i != word]
[u'tooth']

>>> word = 'men'
>>> wnlemmas = list(set(list(chain(*[i.lemma_names() for i in wordnet.synsets(word)]))))
>>> gcm(word,wnlemmas)
[u'men', u'man']
>>> [i for i in gcm(word,wnlemmas) if i != word]
[u'man']

However, the fact that wordnet.synsets('men') can fetch the right synsets while WordNetLemmatizer().lemmatize('men') can't suggests that something is also missing from the WordNetLemmatizer code.
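To make the hack reusable, here is a rough sketch of a wrapper (my own helper; closest_lemma is not an NLTK function) that applies the get_close_matches trick and falls back to the stock lemmatizer when no better candidate is found:

from itertools import chain
from difflib import get_close_matches as gcm
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()

def closest_lemma(word, pos='n'):
    # Gather every lemma name of every synset matching the surface form.
    lemma_names = set(chain(*[s.lemma_names() for s in wordnet.synsets(word, pos=pos)]))
    # Keep only close matches that are not the word itself, e.g. 'tooth' for 'teeth'.
    candidates = [l for l in gcm(word, list(lemma_names)) if l != word]
    if candidates:
        return candidates[0]
    # No better candidate found: fall back to the plain WordNetLemmatizer.
    return wnl.lemmatize(word, pos)

print closest_lemma('teeth')   # tooth
print closest_lemma('men')     # man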


To extend the exception list, see also: Python NLTK Lemmatization of the word 'further' with wordnet
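For a flavour of what extending the exception list means, here is a hedged sketch: it pokes at the reader's private _exception_map attribute (the per-POS dict that _morphy consults before applying its suffix rules, built from WordNet's *.exc files), so treat the attribute name and its exact shape as assumptions that may vary between NLTK versions:

from nltk.corpus import wordnet as wn

# 'men' is already covered by WordNet's noun exception file, which is why
# _morphy can map it to 'man'; this should print something like ['man'].
print wn._exception_map[wn.NOUN].get('men')

# Registering a made-up irregular form (purely illustrative) makes _morphy,
# and therefore WordNetLemmatizer, aware of it for the rest of the session.
wn._exception_map[wn.NOUN]['oxenz'] = ['ox']
print wn._morphy('oxenz', wn.NOUN)   # ['ox']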

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow