Question

I have a binary file with a particular format, described here for those who are interested. The format isn't the important thing. I can read and convert this data into the form that I want, but the problem is that these binary files tend to hold a lot of information. Just returning the bytes as read is very quick (less than 1 second), but I can't do anything useful with the raw bytes: they need to be converted into genotypes first, and that is the code that appears to be slowing things down.

The conversion of a series of bytes into genotypes is as follows:

        h = ['%02x' % ord(b) for b in currBytes]
        b = ''.join([bin(int(i, 16))[2:].zfill(8)[::-1] for i in h])[:nBits]
        genotypes = [b[i:i+2] for i in range(0, len(b), 2)]
        map = {'00': 0, '01': 1, '11': 2, '10': None}
        return  [map[i] for i in genotypes]
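To make the bit layout concrete, here is the same conversion run on a single hypothetical byte, 0xd8 (the byte value and variable names are made up for illustration; `mapping` stands in for the `map` dict above to avoid shadowing the builtin):

```python
# Hypothetical input: one byte, 0xd8 == 0b11011000, holding 4 genotypes (8 bits).
currBytes = '\xd8'
nBits = 8

h = ['%02x' % ord(b) for b in currBytes]                              # ['d8']
b = ''.join([bin(int(i, 16))[2:].zfill(8)[::-1] for i in h])[:nBits]  # '00011011'
genotypes = [b[i:i + 2] for i in range(0, len(b), 2)]                 # ['00', '01', '10', '11']
mapping = {'00': 0, '01': 1, '11': 2, '10': None}
print([mapping[i] for i in genotypes])  # [0, 1, None, 2]
```

Note that each byte's bits are reversed (`[::-1]`), so the two-bit groups are read from the least significant end of the byte.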

What I am hoping is that there is a faster way to do this. Any ideas? Below are the results of running python -m cProfile test.py, where test.py calls a reader object I have written to read these files.

vlan1711:src davykavanagh$ python -m cProfile test.py
183, 593483, 108607389, 366, 368, 46
that took 93.6410450935
         86649088 function calls in 96.396 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    1.248    1.248    2.753    2.753 plinkReader.py:13(__init__)
        1    0.000    0.000    0.000    0.000 plinkReader.py:47(plinkReader)
        1    0.000    0.000    0.000    0.000 plinkReader.py:48(__init__)
        1    0.000    0.000    0.000    0.000 plinkReader.py:5(<module>)
        1    0.000    0.000    0.000    0.000 plinkReader.py:55(__iter__)
   593484   77.634    0.000   91.477    0.000 plinkReader.py:58(next)
        1    0.000    0.000    0.000    0.000 plinkReader.py:71(SNP)
   593483    1.123    0.000    1.504    0.000 plinkReader.py:75(__init__)
        1    0.000    0.000    0.000    0.000 plinkReader.py:8(plinkFiles)
        1    0.000    0.000    0.000    0.000 plinkReader.py:85(Person)
      183    0.000    0.000    0.001    0.000 plinkReader.py:89(__init__)
        1    2.166    2.166   96.396   96.396 test.py:5(<module>)
 27300218    5.909    0.000    5.909    0.000 {bin}
   593483    0.080    0.000    0.080    0.000 {len}
        1    0.000    0.000    0.000    0.000 {math.ceil}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        2    0.000    0.000    0.000    0.000 {method 'format' of 'str' objects}
   593483    0.531    0.000    0.531    0.000 {method 'join' of 'str' objects}
   593485    0.588    0.000    0.588    0.000 {method 'read' of 'file' objects}
   593666    0.257    0.000    0.257    0.000 {method 'rsplit' of 'str' objects}
   593666    0.125    0.000    0.125    0.000 {method 'rstrip' of 'str' objects}
 27300218    4.098    0.000    4.098    0.000 {method 'zfill' of 'str' objects}
        3    0.000    0.000    0.000    0.000 {open}
 27300218    1.820    0.000    1.820    0.000 {ord}
   593483    0.817    0.000    0.817    0.000 {range}
        2    0.000    0.000    0.000    0.000 {time.time}

Solution

You are slowing things down by creating lists and large strings you don't need. You are just examining bits of the bytes and converting two-bit groups into numbers. That can be achieved much more simply, e.g. with this code:

def convert(currBytes, nBits):
    for byte in currBytes:
        value = ord(byte)  # look up the byte's integer value once, not once per pair
        for p in range(4):
            # extract the p-th two-bit group, starting from the least significant bits
            bits = (value >> (p * 2)) & 3
            yield None if bits == 1 else 1 if bits == 2 else 2 if bits == 3 else 0
            nBits -= 2
            if nBits <= 0:
                return  # a plain return ends a generator; raising StopIteration is an error since PEP 479

In case you really need a list in the end, just use

list(convert(currBytes, nBits))
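As a quick sanity check on a hypothetical single byte (0xd8, chosen for illustration), the generator reproduces the mapping used in the question's string-based code:

```python
# Self-contained copy of the generator, for a quick check.
def convert(currBytes, nBits):
    for byte in currBytes:
        value = ord(byte)
        for p in range(4):
            bits = (value >> (p * 2)) & 3
            yield None if bits == 1 else 1 if bits == 2 else 2 if bits == 3 else 0
            nBits -= 2
            if nBits <= 0:
                return

# 0xd8 == 0b11011000; the two-bit groups from the least significant end are
# 00, 10, 01, 11, which the mapping turns into 0, 1, None, 2.
print(list(convert('\xd8', 8)))  # [0, 1, None, 2]
```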

But I guess there can be cases in which you just want to iterate over the results:

for blurp in convert(currBytes, nBits):
  # handle your blurp (0, 1, 2, or None)
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow