Question

I am protecting critical transactions with application mutexes and SERIALIZABLE.

I presume this makes these operations as ACID as possible, but now I'm concerned that reads might cause those write transactions to fail because of locking.

The reads are in libpqxx prepared-statement transactions. I have read the docs, wiki, and white paper, but I can't find out how to configure a read transaction so that it never risks causing a failure of another critical serializable write transaction.

I cannot determine whether locks can cause serialization failures. Can they?

The reads do not need to be perfectly ACID, because the application compensates for race conditions, stale data, etc. The primary concern is that the critical serializable write transactions do not fail because of other reads.

How should a read-only prepared-statement transaction be configured so that it absolutely cannot cause another serializable write transaction to fail?

Solution

A READ ONLY transaction cannot cause a write transaction (one that does not perform DDL) to fail, unless it explicitly uses LOCK TABLE or advisory locks.

READ ONLY transactions cannot SELECT ... FOR SHARE or SELECT ... FOR UPDATE. As they can't do DML, the strongest lock they can take on a table is ACCESS SHARE, which conflicts only with the ACCESS EXCLUSIVE lock taken by DDL.

Nor can a read-only transaction cause serialization failures when the write transaction is SERIALIZABLE, because a serialization failure requires both transactions to perform writes. A read-only transaction can always be logically serialized either before or after a read/write transaction, since the two cannot be mutually interdependent.

So: it should be fine to use READ COMMITTED or SERIALIZABLE transactions with READ ONLY, so long as you do not explicitly LOCK TABLE.
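In SQL that looks like the sketch below (the `accounts` table is a hypothetical example; in libpqxx you would get the same effect by opening the transaction with the read-only/serializable options your version supports, e.g. `pqxx::read_transaction` in recent releases):

```sql
-- A read-only transaction: the strongest table lock it can take is
-- ACCESS SHARE, which conflicts only with DDL's ACCESS EXCLUSIVE.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY;

SELECT balance FROM accounts WHERE id = 42;  -- plain reads only

-- SELECT ... FOR UPDATE is rejected here:
--   ERROR: cannot execute SELECT FOR UPDATE in a read-only transaction

COMMIT;
```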

You also need to make sure you don't use advisory locks that might interact between the two sets of transactions. Most likely you don't use advisory locks at all, in which case you can forget about this entirely.
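As a hypothetical illustration of how this could bite: if both the reader and the critical writer took the same advisory lock, the reader would delay the writer even though it is READ ONLY.

```sql
-- Held until the end of the transaction; a writer requesting the same
-- lock key (42) would block behind this read-only transaction.
BEGIN READ ONLY;
SELECT pg_advisory_xact_lock(42);
SELECT balance FROM accounts WHERE id = 42;  -- hypothetical table
COMMIT;  -- releases the advisory lock
```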

Separately, though, an application must be prepared to deal with serialization failures and other transaction aborts. Any design that tries to avoid this is broken: transactions can be aborted because of OS- or host-machine-level issues, admin action, etc. Do not rely on transactions that "cannot fail".

If you absolutely must have that guarantee, you need two-phase commit: PREPARE TRANSACTION (at which point it's guaranteed that the transaction cannot fail to commit), do the other work that relies on the transaction committing safely, then COMMIT PREPARED. If something goes wrong with the other work, you can ROLLBACK PREPARED instead. 2PC has significant overheads and is best avoided when you can, but sometimes there's just no choice.
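A minimal sketch of the 2PC sequence (the table and transaction identifier are made up for illustration; note that PREPARE TRANSACTION requires `max_prepared_transactions` to be set above 0 in `postgresql.conf`):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- The transaction state is now persisted on disk; committing it
-- is guaranteed to be possible, even across a server restart.
PREPARE TRANSACTION 'transfer_1';

-- ... do the external work that depends on this commit succeeding ...

COMMIT PREPARED 'transfer_1';
-- or, if the external work failed:
-- ROLLBACK PREPARED 'transfer_1';
```

An unresolved prepared transaction holds its locks and blocks vacuum until it is committed or rolled back, which is a big part of why 2PC should only be used when genuinely necessary.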

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange