Question

My understanding of optimistic locking is that it uses a timestamp on each record in a table to determine the record's "version", so that when the record is accessed by multiple processes at the same time, each has a reference to the version it read.

Then, when an update is performed, the timestamp is updated. Before an update is committed, the process reads the timestamp on the record a second time. If the timestamp (version) it holds is no longer the one on the record (because the record has been updated since the first read), the process must re-read the entire record and apply its update to the new version.
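To make sure I'm picturing the pattern correctly, here is a minimal sketch of what I have in mind, in Python with SQLite (the `account` table, column names, and an integer `version` instead of a timestamp are all just my invention for illustration):

```python
import sqlite3

# Hypothetical schema: one row with an integer version column
# standing in for the timestamp.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO account VALUES (1, 100, 1)")

def read(conn):
    # First read: remember the version along with the data.
    return conn.execute(
        "SELECT balance, version FROM account WHERE id = 1"
    ).fetchone()

def update(conn, new_balance, expected_version):
    # The "second read" is folded into the UPDATE's WHERE clause:
    # the row only changes if the version is still the one we read.
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = 1 AND version = ?",
        (new_balance, expected_version),
    )
    # rowcount == 0 means someone else updated first; the caller
    # would then re-read and retry.
    return cur.rowcount == 1

balance, version = read(conn)
assert update(conn, balance + 50, version)      # succeeds
assert not update(conn, balance + 99, version)  # stale version: rejected
```

If that's roughly right, then the version check isn't a separate read at all, but part of the UPDATE statement itself, which is what prompts my question below.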

So, if anything I have stated is not correct, please begin by making clarifications for me. But, assuming I'm more or less correct here...

How does this actually manifest itself in an RDBMS? Is this second read/verification enforced in the application logic (the SQL itself), or is it a tuning parameter/configuration that the DBA sets?

I guess I'm wondering where the logic comes from to read the timestamp and retry the update if the timestamp is stale. So I ask: does the application developer enforce optimistic locking, or does the DBA? Either way, how? Thanks in advance!

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange