Question

I'm trying to get the PostgreSQL full-text search facility working.

I have two tables: one I created just for testing, and the actual one I want to be able to search:

Test Table:

webarchive=# \d test_sites
                            Table "public.test_sites"
   Column    |   Type   |                        Modifiers
-------------+----------+---------------------------------------------------------
 id          | integer  | not null default nextval('test_sites_id_seq'::regclass)
 content     | text     |
 tsv_content | tsvector |
Indexes:
    "test_sites_pkey" PRIMARY KEY, btree (id)
    "idx_test_web_pages_content" gin (tsv_content)
Triggers:
    web_pages_testing_content_change_trigger AFTER INSERT OR UPDATE ON test_sites FOR EACH ROW EXECUTE PROCEDURE web_pages_testing_content_update_func()

"Real" Table:

webarchive=# \d web_pages
                                      Table "public.web_pages"
    Column    |            Type             |                       Modifiers
--------------+-----------------------------+--------------------------------------------------------
 id           | integer                     | not null default nextval('web_pages_id_seq'::regclass)
 state        | dlstate_enum                | not null
 errno        | integer                     |
 url          | text                        | not null
 starturl     | text                        | not null
 netloc       | text                        | not null
 file         | integer                     |
 priority     | integer                     | not null
 distance     | integer                     | not null
 is_text      | boolean                     |
 limit_netloc | boolean                     |
 title        | citext                      |
 mimetype     | text                        |
 type         | itemtype_enum               |
 raw_content  | text                        |
 content      | text                        |
 fetchtime    | timestamp without time zone |
 addtime      | timestamp without time zone |
 tsv_content  | tsvector                    |
Indexes:
    "web_pages_pkey" PRIMARY KEY, btree (id)
    "ix_web_pages_url" UNIQUE, btree (url)
    "idx_web_pages_content" gin (tsv_content)
    "idx_web_pages_title" gin (to_tsvector('english'::regconfig, title::text))
    "ix_web_pages_distance" btree (distance)
    "ix_web_pages_distance_filtered" btree (priority) WHERE state = 'new'::dlstate_enum AND distance < 1000000
    "ix_web_pages_priority" btree (priority)
    "ix_web_pages_type" btree (type)
    "ix_web_pages_url_ops" btree (url text_pattern_ops)
Foreign-key constraints:
    "web_pages_file_fkey" FOREIGN KEY (file) REFERENCES web_files(id)
Triggers:
    web_pages_content_change_trigger AFTER INSERT OR UPDATE ON web_pages FOR EACH ROW EXECUTE PROCEDURE web_pages_content_update_func()

Extra columns aside, both tables have a content column and a tsv_content column with a GIN index on it, plus a trigger that updates tsv_content every time content is modified.
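
For reference, the trigger function bodies aren't shown here. The usual pattern for maintaining a tsvector column looks roughly like the sketch below; the actual web_pages_content_update_func() isn't reproduced in the question (and the tables use AFTER triggers), so its body may well differ.

-- Sketch only: the canonical BEFORE-trigger pattern for keeping a tsvector
-- column in sync with a text column; not the actual function from this schema.
CREATE OR REPLACE FUNCTION example_tsv_update_func() RETURNS trigger AS $$
BEGIN
    NEW.tsv_content := to_tsvector('english', coalesce(NEW.content, ''));
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER example_tsv_update_trigger
    BEFORE INSERT OR UPDATE ON web_pages
    FOR EACH ROW EXECUTE PROCEDURE example_tsv_update_func();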

Note that the other GIN index (on title) works fine. I initially had a gin (to_tsvector('english'::regconfig, content::text)) expression index on the content column as well, instead of the separate column, but after waiting for that index to rebuild a few times in testing, I decided to use a separate column to pre-store the tsvector values.
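
In other words, the two approaches being compared were roughly the following (statements reconstructed from the index definitions above, not taken from the original DDL):

-- Expression index: to_tsvector() runs on every insert/update and again
-- for every row whenever the index is rebuilt.
CREATE INDEX idx_web_pages_content_expr
    ON web_pages USING gin (to_tsvector('english', content));

-- Precomputed column: the trigger stores the tsvector once in tsv_content,
-- and the GIN index is built directly over that column.
CREATE INDEX idx_web_pages_content
    ON web_pages USING gin (tsv_content);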

Executing a query against the test table uses the index like I would expect:

webarchive=# EXPLAIN ANALYZE SELECT
    test_sites.id,
    test_sites.content,
    ts_rank_cd(test_sites.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
FROM
    test_sites
WHERE
    test_sites.tsv_content @@ to_tsquery($$testing$$);
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test_sites  (cost=16.45..114.96 rows=25 width=669) (actual time=0.175..3.720 rows=143 loops=1)
   Recheck Cond: (tsv_content @@ to_tsquery('testing'::text))
   Heap Blocks: exact=117
   ->  Bitmap Index Scan on idx_test_web_pages_content  (cost=0.00..16.44 rows=25 width=0) (actual time=0.109..0.109 rows=143 loops=1)
         Index Cond: (tsv_content @@ to_tsquery('testing'::text))
 Planning time: 0.414 ms
 Execution time: 3.800 ms
(7 rows)

However, the exact same query on the real table never seems to result in anything but a plain old sequential scan:

webarchive=# EXPLAIN ANALYZE SELECT
       web_pages.id,
       web_pages.content,
       ts_rank_cd(web_pages.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
   FROM
       web_pages
   WHERE
       web_pages.tsv_content @@ to_tsquery($$testing$$);
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Seq Scan on web_pages  (cost=0.00..4406819.80 rows=19751 width=505) (actual time=0.343..142325.954 rows=134949 loops=1)
   Filter: (tsv_content @@ to_tsquery('testing'::text))
   Rows Removed by Filter: 12764373
 Planning time: 0.436 ms
 Execution time: 142341.489 ms
(5 rows)

I've increased work_mem to 3 GB to see if that was the issue, and it is not.
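
(For the record, that can be done per session as below; the question doesn't show exactly how the setting was changed.)

-- Check the current value, then raise it for this session only.
SHOW work_mem;
SET work_mem = '3GB';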

Additionally, it should be noted that these are fairly large tables - ~150GB of text across 4M rows (with 8M additional rows where content/tsv_content is NULL).

The test_sites table has roughly 1/1000th the rows of web_pages, since experimenting on the full table is somewhat prohibitive when every query takes multiple minutes.


I'm using PostgreSQL 9.5 (yes, I compiled it myself; I wanted ON CONFLICT). There doesn't seem to be a tag for that version yet.

I have read through the open issues with 9.5, and I can't see this being a result of any of them.


Fresh from a complete rebuild of the index, the problem still exists:

webarchive=# ANALYZE web_pages ;
ANALYZE
webarchive=# EXPLAIN ANALYZE SELECT
    web_pages.id,
    web_pages.content,
    ts_rank_cd(web_pages.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
FROM
    web_pages
WHERE
    web_pages.tsv_content @@ to_tsquery($$testing$$);
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on web_pages  (cost=10000000000.00..10005252343.30 rows=25109 width=561) (actual time=7.114..146444.168 rows=134949 loops=1)
   Filter: (tsv_content @@ to_tsquery('testing'::text))
   Rows Removed by Filter: 13137318
 Planning time: 0.521 ms
 Execution time: 146465.188 ms
(5 rows)

Note that I had literally just run ANALYZE, and enable_seqscan is disabled.
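
The cost=10000000000.00 figure in the plan above is the tell-tale sign of that setting:

-- "Disabling" sequential scans only adds a huge cost penalty (1e10) rather
-- than forbidding them; the planner can still fall back to one, which is why
-- the plan above shows cost=10000000000.00 yet still does a Seq Scan.
SET enable_seqscan = off;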

Solution

Well, I spent a bit of time making some extra space on the disk holding the DB, moving some other databases off to another SSD.

I then ran VACUUM ANALYZE across the entire database, and now the planner apparently notices that I have the index.

I had previously both analyzed and vacuumed just this table, but apparently doing it database-wide rather than on the specific table somehow made the difference. Go figure.
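
Concretely, the difference was roughly between these two forms (the exact commands aren't shown above):

-- What had been run before: vacuum/analyze only the one table.
VACUUM ANALYZE web_pages;

-- What finally got the planner to use the index: a database-wide pass,
-- run with no table name.
VACUUM ANALYZE;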

webarchive=# EXPLAIN ANALYZE SELECT
    web_pages.id,
    web_pages.content
FROM
    web_pages
WHERE
    web_pages.tsv_content @@ to_tsquery($$testing$$);
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on web_pages  (cost=1185.79..93687.30 rows=23941 width=189) (actual time=41.448..152.108 rows=134949 loops=1)
   Recheck Cond: (tsv_content @@ to_tsquery('testing'::text))
   Heap Blocks: exact=105166
   ->  Bitmap Index Scan on idx_web_pages_content  (cost=0.00..1179.81 rows=23941 width=0) (actual time=24.940..24.940 rows=134996 loops=1)
         Index Cond: (tsv_content @@ to_tsquery('testing'::text))
 Planning time: 0.452 ms
 Execution time: 154.942 ms
(7 rows)

I also took the opportunity to run a VACUUM FULL now that I have enough space for the processing. I've had a fair bit of row churn in the table as I've been experimenting during development, and I'd like to consolidate any file fragmentation that has resulted from that.
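
Presumably something along these lines (the exact statement isn't shown), which is also why the free disk space mattered:

-- VACUUM FULL rewrites the table (and its indexes) into new files,
-- reclaiming space from dead rows; it takes an exclusive lock and needs
-- roughly as much free space as the new copy of the table while it runs.
VACUUM FULL web_pages;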
