Question

I see the following script snippet from the gensim tutorial page.

What does the "word for word" syntax mean in the Python script below?

texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

Solution

This is a list comprehension. The code you posted loops through every element in document.lower().split() and builds a new list containing only the elements that satisfy the if condition, and it does that once for each document in documents.

Try it out...

elems = [1, 2, 3, 4]
squares = [e*e for e in elems]  # square each element
big = [e for e in elems if e > 2]  # keep elements bigger than 2

As you can see from your example, list comprehensions can be nested.
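For instance, a nested comprehension over some made-up data (the names rows and doubled are purely illustrative) might look like this:

rows = [[1, 2, 3], [4, 5, 6]]
# build a new nested list where each inner element is doubled
doubled = [[x * 2 for x in row] for row in rows]
# doubled == [[2, 4, 6], [8, 10, 12]]

The outer comprehension produces one inner list per row, just like your snippet produces one list of tokens per document.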

OTHER TIPS

That is a list comprehension. An easier example might be:

evens = [num for num in range(100) if num % 2 == 0]
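If it helps, that one-liner is just a compact way of writing the following explicit loop (same logic, sketched out):

evens = []
for num in range(100):
    if num % 2 == 0:
        evens.append(num)
# evens == [0, 2, 4, ..., 98]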

I'm quite sure I have seen that line in some NLP applications.

This list comprehension:

[[word for word in document.lower().split() if word not in stoplist] for document in documents]

is the same as

ending_list = [] # often known as document stream in NLP.
for document in documents: # Loop through a list.
  internal_list = [] # often known as a list of tokens
  for word in document.lower().split():
    if word not in stoplist:
      internal_list.append(word) # this builds the inner [word for word ...] list
  ending_list.append(internal_list)
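To see both forms side by side, here is a small self-contained sketch. The documents and stoplist values are made up placeholders, not anything from the gensim tutorial:

documents = ["The cat sat on the mat", "A dog barked at the cat"]
stoplist = set("the a on at".split())

# comprehension version
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# explicit-loop version
ending_list = []
for document in documents:
    internal_list = []
    for word in document.lower().split():
        if word not in stoplist:
            internal_list.append(word)
    ending_list.append(internal_list)

print(texts == ending_list)  # True
print(texts)                 # [['cat', 'sat', 'mat'], ['dog', 'barked', 'cat']]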

Basically you want a list of documents, where each document is a list of tokens. So by looping through the documents,

for document in documents:

you then split each document into tokens

  list_of_tokens = []
  for word in document.lower().split():

and then make a list of these tokens:

    list_of_tokens.append(word)    
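The only piece left from the original snippet is the stopword check, which guards the append (stoplist here is whatever collection of stopwords you are using):

    if word not in stoplist:
      list_of_tokens.append(word)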

For example:

>>> doc = "This is a foo bar sentence ."
>>> [word for word in doc.lower().split()]
['this', 'is', 'a', 'foo', 'bar', 'sentence', '.']

It's the same as:

>>> doc = "This is a foo bar sentence ."
>>> list_of_tokens = []
>>> for word in doc.lower().split():
...   list_of_tokens.append(word)
... 
>>> list_of_tokens
['this', 'is', 'a', 'foo', 'bar', 'sentence', '.']
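And with a small made-up stoplist, the filtering from the original snippet looks like this (the stoplist contents here are just an example):

>>> stoplist = set("is a .".split())
>>> [word for word in doc.lower().split() if word not in stoplist]
['this', 'foo', 'bar', 'sentence']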
Licensed under: CC-BY-SA with attribution