Question

I think I am gradually improving on my previous question. Basically, I need to chunk up a large text (CSV) file and send the pieces to a multiprocessing.Pool. To do that, I think I need an iterable object whose lines can be iterated over. (See how to multiprocess large text files in python?)

Now I realize that the file object itself (an _io.TextIOWrapper) is iterable line by line once you open a text file, so perhaps my chunking code (now below, sorry for omitting it previously) could chunk it, if it could get its length. But if the file is iterable, why can't I simply ask for its length (in lines, not bytes)?

Thanks!

import itertools

def chunks(l, n):
    """Yield successive tuples of up to `n` items from the iterable `l`."""
    l_c = iter(l)
    while True:
        x = tuple(itertools.islice(l_c, n))
        if not x:  # the iterable is exhausted
            return
        yield x

Solution

Files are iterable because they are read serially. A file's length in lines cannot be known without processing the whole file. (Its length in bytes is no indicator of how many lines it contains.)
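For example, the byte length comes instantly from filesystem metadata, but the line count only falls out of a full pass over the data (a minimal sketch; `big.csv` is a placeholder path):

import os

path = "big.csv"  # placeholder path
print(os.path.getsize(path))   # byte length: instant, from filesystem metadata
with open(path) as f:
    print(sum(1 for _ in f))   # line count: only known after a full read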

The problem is that if the file is gigabytes long, you might not want to read it twice if it can be helped.

That is why it is better not to rely on knowing the length, and why one should treat a data file as an iterable rather than as a collection/vector/array that has a length.

Your chunking code should be able to deal directly with the file object itself, without knowing its length.
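For instance, a minimal sketch of that (reusing the `chunks` generator from the question; `process_chunk` and `big.csv` are placeholders, not anything from the original):

import itertools
import multiprocessing

def chunks(l, n):
    """Yield successive tuples of up to `n` items from the iterable `l`."""
    l_c = iter(l)
    while True:
        x = tuple(itertools.islice(l_c, n))
        if not x:
            return
        yield x

def process_chunk(lines):
    # Placeholder worker: replace with the real per-chunk processing.
    return len(lines)

if __name__ == "__main__":
    with multiprocessing.Pool() as pool, open("big.csv") as f:
        # The open file object is handed straight to the chunker;
        # its length is never needed.
        for result in pool.imap(process_chunk, chunks(f, 10000)):
            print(result)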

However, if you want to know the number of lines before processing the file fully, your two options are

  1. buffer the whole file into a list of lines first, then pass those lines to your chunker, or
  2. read the file twice, discarding the data on the first pass and just counting the lines.

Both options are sketched below.
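Minimal sketches of both, again with the placeholder path:

# Option 1: buffer every line in memory; the length is then free,
# at the memory cost of holding the whole file.
with open("big.csv") as f:
    lines = f.readlines()
n_lines = len(lines)   # pass `lines` to the chunker afterwards

# Option 2: one counting pass, then reopen for the real processing.
with open("big.csv") as f:
    n_lines = sum(1 for _ in f)   # first pass: count lines, discard data
with open("big.csv") as f:
    ...  # second pass: feed `f` to the chunker / Pool here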
Licensed under: CC-BY-SA with attribution