Question

My problem is that I can't figure out how to display the word counts by word length, using a dictionary whose keys are the lengths. For example, consider the following piece of text:

   "This is the sample text to get an idea!. "

Then the required output would be

3 2
2 3
0 5

as there are 3 words of length 2, 2 words of length 3, and 0 words of length 5 in the given sample text.

I got as far as displaying the word occurrence frequency:

def word_frequency(filename):
    word_count_list = []
    word_freq = {}
    text = open(filename, "r").read().lower().split()
    word_freq = [text.count(p) for p in text]
    dictionary = dict(zip(text,word_freq))
    return dictionary

print word_frequency("text.txt")

which displays the dict in this format:

{'all': 3, 'show': 1, 'welcomed': 1, 'not': 2, 'availability': 1, 'television,': 1, '28': 1, 'to': 11, 'has': 2, 'ehealth,': 1, 'do': 1, 'get': 1, 'they': 1, 'milestone': 1, 'kroes,': 1, 'now': 3, 'bringing': 2, 'eu.': 1, 'like': 1, 'states.': 1, 'them.': 1, 'european': 2, 'essential': 1, 'available': 4, 'because': 2, 'people': 3, 'generation': 1, 'economic': 1, '99.4%': 1, 'are': 3, 'eu': 1, 'achievement,': 1, 'said': 3, 'for': 3, 'broadband': 7, 'networks': 2, 'access': 2, 'internet': 1, 'across': 2, 'europe': 1, 'subscriptions': 1, 'million': 1, 'target.': 1, '2020,': 1, 'news': 1, 'neelie': 1, 'by': 1, 'improve': 1, 'fixed': 2, 'of': 8, '100%': 1, '30': 1, 'affordable': 1, 'union,': 2, 'countries.': 1, 'products': 1, 'or': 3, 'speeds': 1, 'cars."': 1, 'via': 1, 'reached': 1, 'cloud': 1, 'from': 1, 'needed': 1, '50%': 1, 'been': 1, 'next': 2, 'households': 3, 'commission': 5, 'live': 1, 'basic': 1, 'was': 1, 'said:': 1, 'more': 1, 'higher.': 1, '30mbps': 2, 'that': 4, 'but': 2, 'aware': 1, '50mbps': 1, 'line': 1, 'statement,': 1, 'with': 2, 'population': 1, "europe's": 1, 'target': 1, 'these': 1, 'reliable': 1, 'work': 1, '96%': 1, 'can': 1, 'ms': 1, 'many': 1, 'further.': 1, 'and': 6, 'computing': 1, 'is': 4, 'it': 2, 'according': 1, 'have': 2, 'in': 5, 'claimed': 1, 'their': 1, 'respective': 1, 'kroes': 1, 'areas.': 1, 'responsible': 1, 'isolated': 1, 'member': 1, '100mbps': 1, 'digital': 2, 'figures': 1, 'out': 1, 'higher': 1, 'development': 1, 'satellite': 4, 'who': 1, 'connected': 2, 'coverage': 2, 'services': 2, 'president': 1, 'a': 1, 'vice': 1, 'mobile': 2, "commission's": 1, 'points': 1, '"access': 1, 'rural': 1, 'the': 16, 'agenda,': 1, 'having': 1}

Solution

def freqCounter(infilepath):
    answer = {}  # maps word length -> number of words of that length
    with open(infilepath) as infile:
        for line in infile:
            for word in line.strip().split():
                l = len(word)
                if l not in answer:
                    answer[l] = 0
                answer[l] += 1
    return answer
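
As a rough usage sketch (assuming the input file is the text.txt from the question), the returned dictionary can be printed in the requested "count length" format. A length that never occurs, such as 5, is simply missing from the dictionary, so look it up with .get() to show a 0:

counts = freqCounter("text.txt")  # "text.txt" taken from the question
for length in sorted(counts):
    print("%d %d" % (counts[length], length))
print("%d %d" % (counts.get(5, 0), 5))  # 0 for a length that never occurs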

Alternatively:

import collections
def freqCounter(infilepath):
    with open(infilepath) as infile:
        return collections.Counter(len(word) for line in infile for word in line.strip().split())
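
Since collections.Counter is a dict subclass, its result can be printed with the same loop shown after the first version.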

OTHER TIPS

Use collections.Counter

import collections

sentence = "This is the sample text to get an idea"

Count = collections.Counter([len(a) for a in sentence.split()])

print Count
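
For this sentence the Counter maps 4 -> 3, 2 -> 3, 3 -> 2 and 6 -> 1 (the display order may vary). As a small illustrative follow-up: a Counter returns 0 for a key it has never seen, which gives the "0 5" line from the question without any extra bookkeeping:

print(Count[5])  # 0, since no word has length 5
for length in sorted(Count):
    print("%d %d" % (Count[length], length))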

To count how many words of each length a text contains (a size -> frequency distribution), you could use a regular expression to extract the words:

#!/usr/bin/env python3
import re
from collections import Counter

text = "This is the sample text to get an idea!. "
words = re.findall(r'\w+', text.casefold())
frequencies = Counter(map(len, words)).most_common() 
print("\n".join(["%d word(s) of length %d" % (n, length) 
                 for length, n in frequencies]))

Output

3 word(s) of length 2
3 word(s) of length 4
2 word(s) of length 3
1 word(s) of length 6

Note: unlike .split()-based solutions, it automatically ignores punctuation such as the !. after 'idea'.

To read words from a file, you could read lines and extract words from them in the same way as is done for the text in the first code example:

from itertools import chain

with open(filename) as file:
    words = chain.from_iterable(re.findall(r'\w+', line.casefold())
                                for line in file)
    # use words here.. (the same as above)
    frequencies = Counter(map(len, words)).most_common()

print("\n".join(["%d word(s) of length %d" % (n, length) 
                 for length, n in frequencies]))

In practice, you could use a plain list to build the length frequency distribution, provided you ignore words longer than some threshold:

def count_lengths(words, maxlen=100):
    frequencies = [0] * (maxlen + 1)
    for length in map(len, words):
        if length <= maxlen:
            frequencies[length] += 1
    return frequencies

Example

import re

text = "This is the sample text to get an idea!. "
words = re.findall(r'\w+', text.casefold())
frequencies = count_lengths(words)
print("\n".join(["%d word(s) of length %d" % (n, length) 
                 for length, n in enumerate(frequencies) if n > 0]))

Output

3 word(s) of length 2
2 word(s) of length 3
3 word(s) of length 4
1 word(s) of length 6
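
If, as in the question, you also want a line for a length that never occurs (e.g. "0 5"), the list returned by count_lengths already holds a zero at that index. A minimal sketch, reusing the frequencies list from the example above and the lengths asked about in the question:

for length in (2, 3, 5):  # lengths of interest from the question
    print("%d %d" % (frequencies[length], length))

This prints 3 2, 2 3 and 0 5, matching the output requested in the question.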
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow