Question

I understand the differences between optimistic and pessimistic locking. Now could someone explain to me when I would use either one in general?

And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query?

But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading."


Solution

Optimistic Locking is a strategy where you read a record, take note of a version number (other methods to do this involve dates, timestamps or checksums/hashes) and check that the version hasn't changed before you write the record back. When you write the record back you filter the update on the version to make sure it's atomic (i.e. the record hasn't been updated between when you checked the version and when you write it back to disk) and update the version in one hit.

If the record is dirty (i.e. it has a different version from yours) you abort the transaction and the user can restart it.
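A minimal SQL sketch of that write-back check (the table and column names here are purely illustrative, not from the question):

    -- Read the record and remember its version.
    SELECT id, balance, version
    FROM accounts
    WHERE id = 42;

    -- ... the user edits the data; no connection or lock is held ...

    -- Write it back only if the version is still the one we read.
    UPDATE accounts
    SET balance = 120.00,
        version = version + 1
    WHERE id = 42
      AND version = 7;   -- the version we read earlier

    -- If zero rows were affected, someone else changed the record in the
    -- meantime: abort and let the user restart with fresh data.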

This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next.

Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection.

In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions) work.
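For contrast, a rough sketch of the pessimistic style, assuming a DBMS that supports SELECT ... FOR UPDATE (the table is again hypothetical):

    BEGIN;

    -- Take the lock up front; other writers block until we commit or roll back.
    SELECT id, balance
    FROM accounts
    WHERE id = 42
    FOR UPDATE;

    UPDATE accounts
    SET balance = 120.00
    WHERE id = 42;

    COMMIT;   -- releases the lock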

OTHER TIPS

Optimistic locking is used when you don't expect many collisions. It costs less to do a normal operation, but if a collision DOES occur you pay a higher price to resolve it, as the transaction is aborted.

Pessimistic locking is used when a collision is anticipated. The transactions which would violate synchronization are simply blocked.

To select the proper locking mechanism you have to estimate the number of reads and writes and plan accordingly.

Optimistic assumes that nothing's going to change while you're reading it.

Pessimistic assumes that something will and so locks it.

If it's not essential that the data is perfectly read, use optimistic. You might get the odd 'dirty' read - but it's far less likely to result in deadlocks and the like.

Most web applications are fine with dirty reads - on the rare occasion the data doesn't exactly tally, the next reload does.

For exact data operations (like in many financial transactions) use pessimistic. It's essential that the data is accurately read, with no un-shown changes - the extra locking overhead is worth it.

Oh, and Microsoft SQL Server defaults to page locking - basically the row you're reading and a few either side. Row locking is more accurate but much slower. It's often worth setting your transactions to read-committed or no-lock to avoid deadlocks while reading.
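In T-SQL terms, that tuning might look something like the following sketch (the Orders table is made up for illustration):

    -- READ COMMITTED is the SQL Server default isolation level.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

    -- Or, per query, skip shared locks entirely (dirty reads become possible):
    SELECT OrderId, Status
    FROM Orders WITH (NOLOCK)
    WHERE CustomerId = 42;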

In addition to what's been said already, it should be said that optimistic locking tends to improve concurrency at the expense of predictability. Pessimistic locking tends to reduce concurrency, but is more predictable.

You pays your money, etc

I would think of one more case when pessimistic locking would be a better choice.

For optimistic locking, every participant in data modification must agree to use this kind of locking. But if someone modifies the data without taking the version column into account, it will spoil the whole idea of optimistic locking.
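For example, a writer that skips the version check silently defeats everyone else's optimistic checks (hypothetical table again):

    -- This update neither filters on the version nor bumps it,
    -- so other sessions' version checks cannot detect that the row
    -- changed underneath them.
    UPDATE accounts
    SET balance = 999.00
    WHERE id = 42;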

There are basically two popular answers. The first one says

Optimistic locking suits a three-tier architecture where you do not necessarily maintain a connection to the database for your session, whereas pessimistic locking is when you lock the record for your exclusive use until you have finished with it; it has much better integrity than optimistic locking, but you need either a direct connection to the database.

Another answer is

optimistic (versioning) is faster because of no locking but (pessimistic) locking performs better when contention is high and it is better to prevent the work rather than discard it and start over.

or

Optimistic locking works best when you have rare collisions

As it is put on this page.

I created my answer to explain how "keep connection" is related to "low collisions".

To understand which strategy is best for you, think not about the Transactions Per Second your DB handles but about the duration of a single transaction. Normally, you open a transaction, perform an operation and close the transaction. This is the short, classical transaction ANSI had in mind, and locking is fine for it. But how do you implement a ticket reservation system where many clients reserve the same rooms/seats at the same time?

You browse the offers and fill in the form with lots of available options and current prices. It takes a lot of time, and the options can become obsolete and all the prices invalid between the moment you start filling in the form and the moment you press the "I agree" button, because there was no lock on the data you accessed and somebody else, more agile, has interfered and changed all the prices, so you need to restart with the new prices.

You could lock all the options as you read them, instead. This is the pessimistic scenario. You see why it sucks. Your system can be brought down by a single clown who simply starts a reservation and goes off smoking. Nobody can reserve anything before he finishes. Your cash flow drops to zero. That is why optimistic reservations are used in reality. Those who dawdle too long have to restart their reservation at higher prices.

In this optimistic approach you have to record all the data that you read (as in Repeatable Read) and come to the commit point with your version of the data (I want to buy shares at the price you displayed in this quote, not the current price). At this point, an ANSI transaction is created, which locks the DB, checks that nothing has changed and commits/aborts your operation. IMO, this is an effective emulation of MVCC, which is also associated with Optimistic CC and also assumes that your transaction restarts in case of an abort, that is, you will make a new reservation. A transaction here involves a human user's decisions.
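A rough sketch of that commit-point check, with made-up table and column names:

    -- The long "human" transaction ends in one short DB transaction that
    -- re-validates what the user was shown before committing.
    BEGIN;

    UPDATE quotes
    SET status = 'Executed'
    WHERE quote_id = 555
      AND price = 101.50;   -- the price displayed to the user earlier

    -- 0 rows affected => the price moved; abort and re-quote at current prices.
    COMMIT;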

I am far from understanding how to implement MVCC manually, but I think that long-running transactions with the option of restart are the key to understanding the subject. Correct me if I am wrong anywhere. My answer was motivated by this Alex Kuznecov chapter.

In most cases, optimistic locking is more efficient and offers higher performance. When choosing between pessimistic and optimistic locking, consider the following:

  • Pessimistic locking is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. For example, if each operation can update a large number of records at a time (the bank might add interest earnings to every account at the end of each month), and two applications are running such operations at the same time, they will have conflicts.

  • Pessimistic locking is also more appropriate in applications that contain small tables that are frequently updated. In the case of these so-called hotspots, conflicts are so probable that optimistic locking wastes effort in rolling back conflicting transactions.

  • Optimistic locking is useful if the possibility for conflicts is very low – there are many records but relatively few users, or very few updates and mostly read-type operations.

One use case for optimistic locking is to have your application use the database to allow one of your threads / hosts to 'claim' a task. This is a technique that has come in handy for me on a regular basis.

The best example I can think of is a task queue implemented using a database, with multiple threads claiming tasks concurrently. If a task has status 'Available', 'Claimed' or 'Completed', a db query can say something like "Set status='Claimed' where status='Available'", as sketched below. If multiple threads try to change the status in this way, all but the first thread will fail because of dirty data.
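A sketch of that claim query, assuming a hypothetical tasks table:

    -- Only one worker's UPDATE will match the 'Available' row.
    UPDATE tasks
    SET status = 'Claimed',
        claimed_by = 'worker-7'
    WHERE id = 1001
      AND status = 'Available';

    -- Affected-row count 1: this worker won the claim.
    -- Affected-row count 0: another thread claimed the task first.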

Note that this is a use case involving only optimistic locking. So as an alternative to saying "Optimistic locking is used when you don't expect many collisions", it can also be used where you expect collisions but want exactly one transaction to succeed.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow