Question

I have to handle a dictionary with 7 million keys (the number of keys may eventually grow to around 50 million). Since I barely have enough RAM to keep it in memory, I decided to store it on disk.

My dictionary looks like this:

dictionary={(int1,int2):int3,...}

First I tried storing it in a sqlite database using sqlite3. The amount of time required to store it is perfectly OK (around 70 seconds). Using timeit:

>>>import sqlite3
>>>conn=sqlite3.connect('test_sqlite.sqlite')
>>>c=conn.cursor()
>>>c.execute('create table test (int1 int, int2 int, int3 int)')
>>>conn.commit()
>>>conn.close()
>>>import timeit
>>>timeit.timeit('c.executemany("insert into test values (?,?,?)",((key[0],key[1],dictionary[key]) for key in dictionary.iterkeys()))',setup='import sqlite3;conn=sqlite3.connect("test_sqlite.sqlite");c=conn.cursor();dictionary={(i,i+1):i+2 for i in xrange(7000000)}',number=1)
70.7033872604
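One thing worth noting: the timed statement never commits, so strictly speaking the rows are only persisted once conn.commit() is called. A minimal sketch of the same bulk insert with an explicit commit (reusing the conn, c and dictionary objects from the setup above):

# same bulk insert as above, plus an explicit commit so the
# rows are actually persisted when the connection is closed
c.executemany("insert into test values (?,?,?)",
              ((key[0], key[1], dictionary[key]) for key in dictionary.iterkeys()))
conn.commit()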

But then I need to use this stored dictionary to retrieve certain values, and each SELECT seems to take approximately 1.5 seconds. Since I need to access around one million values, this is discouraging:

>>>timeit.timeit('c.execute("select int1 from test where int2=={}".format(value)).fetchone()[0]',setup='import sqlite3;conn=sqlite3.connect("test_sqlite.sqlite");c=conn.cursor();value=5555',number=1)
1.5300869941711426
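For reference, the same lookup can be written with a '?' placeholder instead of str.format, which lets sqlite3 bind the value safely (a minimal sketch; this alone doesn't fix the slowness, since without an index the query still scans the whole table):

# parameterized lookup; '?' binds the value instead of
# formatting it into the SQL string
c.execute("select int1 from test where int2=?", (value,))
row = c.fetchone()
result = row[0] if row is not None else None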

Then I tried storing my dictionary in a shelf instead. Now the amount of time needed to get a value from my shelved dictionary is fairly good:

>>> timeit.timeit('a=f[key]',setup='import shelve;f=shelve.open("test_timeit","r");key="(1000,1001)"',number=10000)
0.320019006729126

So even if I make several million requests like this one, the total time should be around a hundred seconds (0.32 s for 10,000 reads is about 32 µs per read, so ~3 million reads would take roughly 100 s).

But a new problem arose: the time required to store my dictionary in a shelf doesn't satisfy me.

>>> timeit.timeit('f.update(dictio)',setup='import shelve;f=shelve.open("test_timeit","c");dictio={"({},{})".format(i,i+1):i+2 for i in xrange(7000000)}',number=1)
504.728841782
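One tweak that might shave some of this off (an untested sketch, not a measured result): opening the shelf with a binary pickle protocol, which is generally faster to serialize than the default ASCII protocol 0:

import shelve
# protocol=2 selects a binary pickle protocol; serialization is
# usually faster than with the default ASCII protocol 0
f = shelve.open("test_timeit", "c", protocol=2)
f.update(dictio)
f.close()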

To this one must add the extra time required to convert the former keys (which are tuples) to strings. Using repr:

>>>timeit.timeit('repr.repr((1,2))',setup='import repr',number=7000000)
61.6035461426

Which makes a total of 566.33 seconds (504.73 + 61.60) to store my dictionary in a shelf...

I don't want to pickle the dictionary directly, since that would mean loading the whole dictionary into memory whenever I want to use it later.

Is there any way I can improve one of these two methods in order to get better access/storage times?

Thanks for your help!

Was it helpful?

Solution

For queries on large tables like this to return quickly, you need to index the relevant columns. In your case, I would add this as the primary key:

create table test (
    int1 integer,
    int2 integer,
    int3 integer,
    primary key (int1, int2)
)
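With that schema, a lookup on an (int1, int2) pair can seek through the primary-key index instead of scanning the table. A minimal sketch of how the indexed table might be used end to end (assuming the same database file and data layout as in the question):

import sqlite3

conn = sqlite3.connect('test_sqlite.sqlite')
c = conn.cursor()
c.execute('create table if not exists test ('
          'int1 integer, int2 integer, int3 integer, '
          'primary key (int1, int2))')

# bulk insert inside a single transaction
dictionary = {(i, i + 1): i + 2 for i in xrange(7000000)}
c.executemany('insert into test values (?,?,?)',
              ((k[0], k[1], v) for k, v in dictionary.iteritems()))
conn.commit()

# point lookup: the composite primary key lets sqlite seek
# directly to the row instead of scanning all 7 million rows
c.execute('select int3 from test where int1=? and int2=?', (5555, 5556))
print c.fetchone()[0]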
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow