Question

I have a project where I'm trying to use the PostgreSQL ON CONFLICT DO UPDATE clause, and I'm running into huge numbers of deadlocks.

My schema is as follows:

webarchive=# \d web_pages
                                               Table "public.web_pages"
      Column       |            Type             |                              Modifiers
-------------------+-----------------------------+---------------------------------------------------------------------
 id                | integer                     | not null default nextval('web_pages_id_seq'::regclass)
 state             | dlstate_enum                | not null
 errno             | integer                     |
 url               | text                        | not null
 starturl          | text                        | not null
 netloc            | text                        | not null
 file              | integer                     |
 priority          | integer                     | not null
 distance          | integer                     | not null
 is_text           | boolean                     |
 limit_netloc      | boolean                     |
 title             | citext                      |
 mimetype          | text                        |
 type              | itemtype_enum               |
 content           | text                        |
 fetchtime         | timestamp without time zone |
 addtime           | timestamp without time zone |
 tsv_content       | tsvector                    |
 normal_fetch_mode | boolean                     | default true
 ignoreuntiltime   | timestamp without time zone | not null default '1970-01-01 00:00:00'::timestamp without time zone
Indexes:
    "web_pages_pkey" PRIMARY KEY, btree (id)
    "ix_web_pages_url" UNIQUE, btree (url)
    "idx_web_pages_title" gin (to_tsvector('english'::regconfig, title::text))
    "ix_web_pages_distance" btree (distance)
    "ix_web_pages_distance_filtered" btree (priority) WHERE state = 'new'::dlstate_enum AND distance < 1000000 AND normal_fetch_mode = true
    "ix_web_pages_id" btree (id)
    "ix_web_pages_netloc" btree (netloc)
    "ix_web_pages_priority" btree (priority)
    "ix_web_pages_state" btree (state)
    "ix_web_pages_url_ops" btree (url text_pattern_ops)
    "web_pages_state_netloc_idx" btree (state, netloc)
Foreign-key constraints:
    "web_pages_file_fkey" FOREIGN KEY (file) REFERENCES web_files(id)
Triggers:
    update_row_count_trigger BEFORE INSERT OR UPDATE ON web_pages FOR EACH ROW EXECUTE PROCEDURE web_pages_content_update_func()

My update command is the following:

INSERT INTO
    web_pages
    (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)
VALUES
    (:url, :starturl, :netloc, :distance, :is_text, :priority, :type, :fetchtime, :state)
ON CONFLICT (url) DO
    UPDATE
        SET
            state     = EXCLUDED.state,
            starturl  = EXCLUDED.starturl,
            netloc    = EXCLUDED.netloc,
            is_text   = EXCLUDED.is_text,
            distance  = EXCLUDED.distance,
            priority  = EXCLUDED.priority,
            fetchtime = EXCLUDED.fetchtime
        WHERE
            web_pages.fetchtime < :threshtime
        AND
            web_pages.url = EXCLUDED.url
    ;

(Note: parameters are bound via the SQLAlchemy parameterized query style)

I'm seeing dozens of deadlock errors, even under relatively light concurrency (6 workers):

Main.SiteArchiver.Process-5.MainThread - WARNING - SQLAlchemy OperationalError - Retrying.
Traceback (most recent call last):
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
psycopg2.extensions.TransactionRollbackError: deadlock detected
DETAIL:  Process 11391 waits for ShareLock on transaction 40632808; blocked by process 11389.
Process 11389 waits for ShareLock on transaction 40632662; blocked by process 11391.
HINT:  See server log for query details.
CONTEXT:  while inserting index tuple (743427,2) in relation "web_pages"


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/Storage/Scripts/ReadableWebProxy/WebMirror/Engine.py", line 558, in upsertResponseLinks
    self.db_sess.execute(cmd, params=new)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 1034, in execute
    bind, close_with_result=True).execute(clause, params or {})
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 183, in reraise
    raise value.with_traceback(tb)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.extensions.TransactionRollbackError) deadlock detected
DETAIL:  Process 11391 waits for ShareLock on transaction 40632808; blocked by process 11389.
Process 11389 waits for ShareLock on transaction 40632662; blocked by process 11391.
HINT:  See server log for query details.
CONTEXT:  while inserting index tuple (743427,2) in relation "web_pages"
 [SQL: '         INSERT INTO          web_pages          (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)         VALUES          (%(url)s, %(starturl)s, %(netloc)s, %(distance)s, %(is_text)s, %(priority)s, %(type)s, %(fetchtime)s, %(state)s)         ON CONFLICT (url) DO          UPDATE           SET            state     = EXCLUDED.state,            starturl  = EXCLUDED.starturl,            netloc    = EXCLUDED.netloc,            is_text   = EXCLUDED.is_text,            distance  = EXCLUDED.distance,            priority  = EXCLUDED.priority,            fetchtime = EXCLUDED.fetchtime           WHERE            web_pages.fetchtime < %(threshtime)s          ;         '] [parameters: {'url': 'xxxxxx', 'is_text': True, 'netloc': 'xxxxxx', 'distance': 1000000, 'priority': 10000, 'threshtime': datetime.datetime(2016, 4, 24, 0, 38, 10, 778866), 'state': 'new', 'starturl': 'xxxxxxx', 'type': 'unknown', 'fetchtime': datetime.datetime(2016, 4, 24, 0, 38, 10, 778934)}]

My transaction isolation level is REPEATABLE READ, so my understanding of how the DB should work is that I'd see lots of serialization errors, but deadlocks shouldn't occur: if two transactions change the same row, the later transaction should simply fail.

My guess here is that the UPDATE is somehow locking against the INSERT query (or something like that), and that I need to put a synchronization point (?) somewhere, but I don't understand the scoping of the various query components well enough to do any troubleshooting other than just changing things at random and seeing what effect that has. I've done some reading, but the PostgreSQL documentation is extremely abstract, and the ON CONFLICT xxx terminology doesn't seem to be broadly used yet, so there aren't many resources for practical troubleshooting, particularly for non-SQL experts.

How can I resolve this issue? I've also experimented with other isolation levels (READ COMMITTED, SERIALIZABLE), to no avail.

Solution

Deadlocks are not caused by a particular statement; they are caused by concurrency. So you should start by observing how one session of your application interacts with the other sessions working concurrently.

Here are some general guidelines for avoiding deadlocks:

  1. Always maintain primary keys on tables. The primary key should be the means of identifying a particular record in the table; this keeps too many rows from falling within the scope of a lock.
  2. Maintain a consistent ordering across all transactions. For example, if one piece of application logic inserts/updates data in table A and then table B, there should not be another piece of logic that inserts/updates data in table B and then table A. The same idea applies within a single statement: when several workers upsert overlapping batches of rows, sorting each batch by the conflict key makes the row locks get taken in the same order everywhere (see the first sketch after this list).
  3. Monitor and catch the culprits. PostgreSQL provides views such as pg_stat_activity and pg_stat_statements for monitoring sessions and queries, and https://wiki.postgresql.org/wiki/Lock_Monitoring collects sample queries for spotting blocking and deadlocks (a minimal version is sketched after this list). You may also need to adjust the log_lock_waits and deadlock_timeout parameters.
  4. Acquire the most restrictive lock first in the transaction, so the smaller ones won't get in the way.
  5. Last but not least, reduce the size of your transactions and commit more often. Long-running transactions have a greater chance of ending up in a deadlock, and because of the way MVCC is implemented in Postgres, a long transaction also forces old row versions to be kept around, since VACUUM cannot remove tuples that are still visible to it.
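Applying point 2 to this question's single-table upsert: deadlocks like the one in the traceback typically appear when two sessions each upsert several of the same url values in different orders, so that each session ends up waiting on a row lock the other already holds. If the application batches several rows into one statement, feeding them through a subselect with an ORDER BY on the conflict key is one way to make the lock order deterministic. A minimal sketch, assuming a two-row batch (the v alias, the numbered parameter names, and the batch size are illustrative, not from the original code; rows are processed in the order the SELECT emits them, which is not a documented guarantee but holds in practice):

-- Route the batch through a subselect sorted by the conflict key, so
-- concurrent batches take their row locks in the same (url) order.
INSERT INTO
    web_pages
    (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)
SELECT
    url, starturl, netloc, distance, is_text, priority, type, fetchtime, state
FROM (VALUES
        -- One row per batched link; enum/timestamp parameters may need
        -- explicit casts, e.g. :type_1::itemtype_enum.
        (:url_1, :starturl_1, :netloc_1, :distance_1, :is_text_1, :priority_1, :type_1, :fetchtime_1, :state_1),
        (:url_2, :starturl_2, :netloc_2, :distance_2, :is_text_2, :priority_2, :type_2, :fetchtime_2, :state_2)
     ) AS v (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)
ORDER BY url
ON CONFLICT (url) DO
    UPDATE
        SET
            state     = EXCLUDED.state,
            starturl  = EXCLUDED.starturl,
            netloc    = EXCLUDED.netloc,
            is_text   = EXCLUDED.is_text,
            distance  = EXCLUDED.distance,
            priority  = EXCLUDED.priority,
            fetchtime = EXCLUDED.fetchtime
        WHERE
            web_pages.fetchtime < :threshtime
    ;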
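And for point 3, here is a minimal blocking-session query, assuming PostgreSQL 9.6 or later (which added pg_blocking_pids(); on older versions, use the longer pg_locks join from the wiki page above):

-- List every session that is currently blocked, the PIDs of the
-- sessions blocking it, and the statement it is stuck on.
SELECT
    pid,
    pg_blocking_pids(pid) AS blocked_by,
    wait_event_type,
    state,
    query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

Setting log_lock_waits = on also makes the server log a message whenever a session waits on a lock for longer than deadlock_timeout (1 second by default), which is often the quickest way to identify the colliding statements.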

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange