Question

I'm having trouble with the speed of a SELECT query on a Postgres database.

I have a table with two integer columns as a key: (int1, int2). This table has around 70 million rows.

I need to make two kinds of simple SELECT queries in this environment:

SELECT * FROM table WHERE int1=X;
SELECT * FROM table WHERE int2=X;

These two selects return around 10,000 rows each out of the 70 million. To make this as fast as possible I thought of using two HASH indexes, one for each column. Unfortunately the results are not that good:

                                                               QUERY PLAN                                                               
----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on lec_sim  (cost=232.21..25054.38 rows=6565 width=36) (actual time=14.759..23339.545 rows=7871 loops=1)
   Recheck Cond: (lec2_id = 11782)
   ->  Bitmap Index Scan on lec_sim_lec2_hash_ind  (cost=0.00..230.56 rows=6565 width=0) (actual time=13.495..13.495 rows=7871 loops=1)
         Index Cond: (lec2_id = 11782)
 Total runtime: 23342.534 ms
(5 rows)

This is an EXPLAIN ANALYZE example of one of these queries. It takes around 23 seconds; my expectation is to get this information in less than a second.
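For reference, the two hash indexes mentioned above would presumably have been created along these lines; the table name lec_sim and the lec2 index name come from the plan above, while the lec1 counterpart is only inferred by symmetry:

CREATE INDEX lec_sim_lec1_hash_ind ON lec_sim USING hash (lec1_id);  -- inferred counterpart for int1
CREATE INDEX lec_sim_lec2_hash_ind ON lec_sim USING hash (lec2_id);  -- name taken from the plan above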

These are some parameters of the Postgres database config:

work_mem = 128MB
shared_buffers = 2GB
maintenance_work_mem = 512MB
fsync = off
synchronous_commit = off
effective_cache_size = 4GB

Any help, comment or thought would be really appreciated.

Thank you in advance.


Solution

Extracting my comments into an answer: the index lookup here was very fast -- all the time was spent retrieving the actual rows. 23 seconds / 7871 rows = 2.9 milliseconds per row, which is reasonable for retrieving data that's scattered across the disk subsystem. Seeks are slow; you can a) fit your dataset in RAM, b) buy SSDs, or c) organize your data ahead of time to minimize seeks.

PostgreSQL 9.2 has a feature called index-only scans that allows it to (usually) answer queries without accessing the table. You can combine this with the btree index property of automatically maintaining order to make this query fast. You mention int1, int2, and two floats:

CREATE INDEX sometable_int1_floats_key ON sometable (int1, float1, float2);
CREATE INDEX sometable_int2_floats_key ON sometable (int2, float1, float2);

SELECT float1,float2 FROM sometable WHERE int1=<value>; -- uses int1 index
SELECT float1,float2 FROM sometable WHERE int2=<value>; -- uses int2 index

Note also that this doesn't magically erase the disk seeks, it just moves them from query time to insert time. It also costs you storage space, since you're duplicating the data. Still, this is probably the trade-off you want.
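A quick way to check whether this is kicking in (my own sketch, not part of the original answer): on 9.2+ the plan should show an "Index Only Scan", and it relies on the visibility map being reasonably current, which VACUUM maintains.

VACUUM sometable;                       -- keeps the visibility map current so index-only scans stay possible
EXPLAIN (ANALYZE, BUFFERS)
  SELECT float1, float2 FROM sometable WHERE int1 = 11782;   -- should report "Index Only Scan"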

Other tips

Thank you willglynn. As you noticed, the problem was the seeking on the hard disk, not the index lookups. You proposed several solutions, like loading the dataset into RAM or buying an SSD. But setting those two aside, since they involve managing things outside the database itself, you proposed two ideas:

  1. Reorganize the data to reduce disk seeking.
  2. Use the PostgreSQL 9.2 feature "index-only scans".

Since I am on a PostgreSQL 9.1 server, I decided to take option 1.

I made a copy of the table, so now I have the same table with the same data twice. I created an index on each one, the first indexed by (int1) and the second by (int2). Then I clustered both of them (CLUSTER table USING ind_intX) on their respective indexes.
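For concreteness, the steps described above look roughly like this; lec_sim, lec_sim_lec2id and lec_sim_lec2id_ind appear in the plans on this page, while the lec1 counterparts are inferred by symmetry:

CREATE TABLE lec_sim_lec1id AS TABLE lec_sim;            -- duplicate the data, one copy per sort order
CREATE TABLE lec_sim_lec2id AS TABLE lec_sim;

CREATE INDEX lec_sim_lec1id_ind ON lec_sim_lec1id (lec1_id);
CREATE INDEX lec_sim_lec2id_ind ON lec_sim_lec2id (lec2_id);

CLUSTER lec_sim_lec1id USING lec_sim_lec1id_ind;         -- physically reorder the rows to match the index
CLUSTER lec_sim_lec2id USING lec_sim_lec2id_ind;

ANALYZE lec_sim_lec1id;                                  -- refresh planner statistics after the rewrite
ANALYZE lec_sim_lec2id;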

I'm posting now an EXPLAIN ANALYZE of the same query, done in one of these clustered tables:

                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using lec_sim_lec2id_ind on lec_sim_lec2id  (cost=0.00..21626.82 rows=6604 width=36) (actual time=0.051..1.500 rows=8119 loops=1)
   Index Cond: (lec2_id = 12300)
 Total runtime: 1.822 ms
(3 rows)

Now the seeking is really fast: I went down from 23 seconds to ~2 milliseconds, which is an impressive improvement. I think this problem is solved for me, and I hope it might also be useful for others experiencing the same problem.

Thank you so much willglynn.

I had a case of super slow queries where simple one-to-many joins (in PG v9.1) were performed between a table with 33 million rows and a child table with 2.4 billion rows. I performed a CLUSTER on the foreign key index for the child table, but found that this didn't solve my problem with query timeouts, even for the simplest of queries. Running ANALYZE also did not solve the issue.

What made a huge difference was performing a manual VACUUM on both the parent and child tables. Even while the parent table's VACUUM was still completing, I went from 10-minute timeouts to results coming back in one second.

What I am taking away from this is that regular VACUUM operations are still critical, even for v9.1. The reason I did this was that I noticed autovacuum hadn't run on either of the tables for at least two weeks, and lots of upserts and inserts had occurred since then. It may be that I need to improve the autovacuum trigger to take care of this issue going forward, but what I can say is that a 640GB table with a couple of billion rows does perform well if everything is cleaned up. I haven't yet had to partition the table to get good performance.
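For what it's worth, the manual cleanup described above amounts to something like the following; the table names are placeholders, and the autovacuum tweak at the end is my own suggestion rather than something from the original post:

VACUUM (ANALYZE, VERBOSE) parent_table;   -- reclaim dead rows and refresh statistics
VACUUM (ANALYZE, VERBOSE) child_table;

-- Make autovacuum trigger earlier on the huge child table (threshold is illustrative)
ALTER TABLE child_table SET (autovacuum_vacuum_scale_factor = 0.01);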

For a very simple and effective one-liner, if you have fast solid-state storage on your Postgres machine, try setting:

random_page_cost=1.0

in your postgresql.conf.

The default is random_page_cost=4.0, which is optimized for storage with high seek times like old spinning disks. Lowering it changes the planner's cost calculation for seeks and makes it rely less on your memory (which could ultimately be going to swap anyway).
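If you want to try this before touching postgresql.conf, one option (my own sketch; the query is just a placeholder taken from earlier on this page) is to override the setting for the current session and compare plans:

SET random_page_cost = 1.0;                                    -- session-level override for testing
EXPLAIN ANALYZE SELECT * FROM lec_sim WHERE lec2_id = 11782;   -- placeholder query
RESET random_page_cost;                                        -- revert to the configured value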

This setting alone improved my filtering query from 8 seconds down to 2 seconds on a long table with a couple million records.

The other major improvement came from creating indexes that include all of the boolean columns on my table. This reduced the 2-second query to about 1 second. Check @willglynn's answer for that.
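The kind of index meant here might look something like the following; every name in this sketch is invented for illustration and not taken from the original post:

CREATE INDEX mytable_flags_ind ON mytable (filter_col, flag_a, flag_b, flag_c);  -- hypothetical multicolumn index including the boolean flag columns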

Hope this helps!
