Question

I wrote this block of code to get tons of BLAST results, but it seems a little slow because I use two `for` loops to iterate over two files. So I'm wondering if there's a faster, greedier way to narrow down the iteration.

Here's the code:

import os
from Bio import SeqIO
from Bio.Blast.Applications import NcbiblastnCommandline

for tf_line in SeqIO.parse('deneme2.txt','fasta'):
    tf_line.description=tf_line.description.split()
    tempfile=open('tempfile.txt','w')
    for cd_line in SeqIO.parse('Mus_musculus.GRCm38.74.cdna.all.fa','fasta'):
        if cd_line.id==tf_line.description[1]:
            tempfile.write('>'+cd_line.id+'\n'+
                str(cd_line.seq)[int(tf_line.description[2])-100:
                                 int(tf_line.description[3])+100])
            tempfile.close()
            os.system('makeblastdb -in tempfile.txt -dbtype nucl '
                      '-out tempfile.db -title \'tempfile\'')
            cline = NcbiblastnCommandline(query='SRR029153.fasta' ,
                                          db="tempfile.db",
                                          outfmt=7,
                                          out=(tf_line.description[0]+' '+
                                               tf_line.description[1]))
            stdout,stderr=cline()

'deneme2.txt' is 30 MB and looks something like this:

SRR029153.93098 ENSMUST00000103567 999 1147 TCAGGCCAAGTTTCTCTC

SRR029153.83280 ENSMUST00000181483 151 425 CAGGTTGAC

SRR029153.108993 ENSMUST00000184883 174 1415 TGGCACCTTTGC .....

The 'Mus_musculus.GRCm38.74.cdna.all.fa' file is 170 MB and looks something like this:

ENSMUST00000181483 ACACTGAAGAT.....

ENSMUST00000184883 ATCTTTTTTCTTTCAGGG.....

The 'Mus_musculus.GRCm38.74.cdna.all.fa' file has sequence IDs (ENSMUST...). I must find the matches between the 'deneme2.txt' file and 'Mus_musculus.GRCm38.74.cdna.all.fa'.

It should take 4-5 hours, but with this code it takes at least 10 hours.

Any help would be appreciated, because I must get rid of brute-force algorithms like this and be greedier. Thanks.


Solution

I think this is still producing the same BLASTs but should be much faster. Read the comments in the code to do some more optimizing:

tf_data = {fields[1]: (fields[0], int(fields[2]), int(fields[3]))
           for fields in (line.description.split()
                          for line in SeqIO.parse('deneme2.txt','fasta'))}

for cd_line in SeqIO.parse('Mus_musculus.GRCm38.74.cdna.all.fa','fasta'):
    if cd_line.id in tf_data:
        tempfile=open('tempfile.txt','w')

        tf_name, tf_val1, tf_val2 = tf_data[cd_line.id]

        #If it is likely that the same tf_data record is used many times,
        #keep the int casts up in the dictionary comprehension; if on the
        #other hand it is very likely that most records in tf_data won't
        #be used, move the int casts back down to the line below
        tempfile.write('>{0}\n{1}'.format(
            cd_line.id,
            str(cd_line.seq)[tf_val1 - 100: tf_val2 + 100]))

        tempfile.close()
        os.system('makeblastdb -in tempfile.txt -dbtype nucl '
                  '-out tempfile.db -title \'tempfile\'')
        cline = NcbiblastnCommandline(
            query='SRR029153.fasta',
            db="tempfile.db",
            outfmt=7,
            out='{0} {1}'.format(tf_name, cd_line.id))

        #Since stdout and stderr aren't used, don't assign variables
        cline()
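The core speed-up is replacing the quadratic nested loop with a dictionary built in one pass over the small file, then a single linear pass over the big file with O(1) membership tests. Here is a minimal, self-contained sketch of just that idea, using plain tuples with made-up data instead of Biopython `SeqRecord` objects. It also clamps the lower slice bound with `max(0, start - 100)`: in Python, a negative slice index wraps around to the end of the sequence, so `start - 100` would silently return the wrong fragment whenever `start < 100`.

```python
# Sketch of the dict-lookup speed-up; all names and data are hypothetical.

# Small file: (read_id, transcript_id, start, end) records.
tf_records = [
    ('SRR029153.93098', 'ENSMUST00000103567', 999, 1147),
    ('SRR029153.83280', 'ENSMUST00000181483', 151, 425),
    ('SRR029153.108993', 'ENSMUST00000184883', 50, 120),
]

# Build the lookup table once: O(n), instead of re-scanning the file
# for every record of the other file.
tf_data = {tid: (rid, start, end) for rid, tid, start, end in tf_records}

# Large file: (transcript_id, sequence) records; one linear pass.
cdna_records = [
    ('ENSMUST00000181483', 'ACACTGAAGAT' * 100),
    ('ENSMUST00000999999', 'GGGG' * 100),     # no match -> skipped in O(1)
]

for tid, seq in cdna_records:
    if tid in tf_data:                        # O(1) dict membership test
        rid, start, end = tf_data[tid]
        # Clamp the lower bound: a negative slice index would wrap
        # around to the *end* of the sequence, not the beginning.
        fragment = seq[max(0, start - 100):end + 100]
```

For files too large to hold comfortably in memory, Biopython's `SeqIO.index()` gives the same keyed lookup backed by the file on disk instead of a dict.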
Licensed under: CC-BY-SA with attribution