Question

I'm iterating through a very large tab-delimited file (containing millions of lines) and pairing different lines of it based on the value of some field in that file, e.g.

from collections import defaultdict

mydict = defaultdict(list)
for line in myfile:
    # Group all lines that have the same value in the key field into a list
    key = line.split('\t')[0]  # e.g. the first tab-separated column
    mydict[key].append(line)

Since "mydict" gets very large, I'd like to make it into an iterator so I don't have to hold it all in memory. How can I make it so instead of populating a dictionary, I will create an iterator that I can loop through and get all these lists of lines that have the same field value?

Thanks.


Solution

"millions of lines" is not very large unless the lines are long. If the lines are long you might save some memory by storing only positions in the file (.tell()/.seek()).

If the file is sorted by line.field, you could use itertools.groupby().
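For example, a minimal sketch, again assuming the grouping field is the first tab-separated column of a hypothetical data.tsv; the file must already be sorted on that field (e.g. with the Unix sort command), because groupby() only merges consecutive lines with the same key:

import itertools

def field(line):
    # Assumed: the grouping key is the first tab-separated column
    return line.split('\t')[0]

with open('data.tsv') as myfile:
    for value, group in itertools.groupby(myfile, key=field):
        lines = list(group)        # only this one group is held in memory
        print(value, len(lines))   # replace with real per-group processing

This gives exactly the iterator asked for: one (field value, list of lines) pair at a time, without ever building the whole dictionary.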

SQL’s GROUP BY might help for average-sized files (e.g., using sqlite as @wisty suggested).
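A rough sketch of the sqlite route, with the same hypothetical data.tsv and first-column key: load the lines once, then pull each group back out with one query per field value.

import sqlite3

conn = sqlite3.connect('groups.db')   # hypothetical database file
conn.execute('CREATE TABLE IF NOT EXISTS lines (field TEXT, line TEXT)')
conn.execute('CREATE INDEX IF NOT EXISTS idx_field ON lines (field)')

with open('data.tsv') as f:
    conn.executemany(
        'INSERT INTO lines VALUES (?, ?)',
        ((line.split('\t')[0], line) for line in f),
    )
conn.commit()

# One query per distinct field value; only one group is materialised at a time
for (value,) in conn.execute('SELECT field FROM lines GROUP BY field').fetchall():
    group = [row[0] for row in
             conn.execute('SELECT line FROM lines WHERE field = ?', (value,))]
    print(value, len(group))          # replace with real per-group processing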

For really large files you could use MapReduce.

OTHER TIPS

It sounds like you might want a database. There are a variety of relational and non-relational databases you could pick from (some more efficient than others, depending on what you are trying to achieve), but sqlite (available through Python's built-in sqlite3 module) would be the easiest.

Or, if there are only a small number of distinct field values to process, you could just read the file several times, as sketched below.
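A sketch of that multi-pass approach, once more assuming a hypothetical data.tsv keyed on its first tab-separated column:

def lines_with(value, path='data.tsv'):
    # One full pass over the file per field value of interest
    with open(path) as f:
        return [line for line in f if line.split('\t')[0] == value]

for value in ('a', 'b', 'c'):        # the small set of field values to process
    group = lines_with(value)
    print(value, len(group))         # replace with real per-group processing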

But there's no real magic bullet.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow