Question

In the application I'm currently working on, multiple users may want to edit the same item at the same time, which means we need to implement some form of locking. However, the item being edited is a scientific protocol that contains records from multiple different tables in the database.

As such, we want to be able to indicate that the entire protocol is locked for editing by a single user, which leads to my question: would the preferred method be to track the lock at the database level (e.g. have a table keyed by the unique ID of the protocol and check whether it has been locked), or would it be acceptable to track the currently locked protocols in memory on the web server itself?

Currently we only anticipate around 100 users (20 or so simultaneous) for the application, but that number may increase in the future, so we are looking for the most scalable option.


Solution

This really hinges on how well architected your codebase is.

If all the calls that modify those records go through a single point of entry, then yes, I recommend keeping all the locking code in your application so you can keep your database as a dumb data store.

If you have multiple points of entry that can modify your tables, you're going to need to implement locking at the database level.
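
In the first case (a single point of entry, locking kept in the application), the lock manager can be as simple as a thread-safe registry keyed by protocol ID. A minimal sketch in Python, assuming a single application instance; the class, method, and timeout names are illustrative rather than taken from the original post:

```python
import threading
import time


class ProtocolLockManager:
    """In-process edit-lock registry keyed by protocol ID.

    Only safe if every edit request goes through this one application instance.
    """

    def __init__(self, timeout_seconds=15 * 60):
        self._guard = threading.Lock()   # protects the registry itself
        self._locks = {}                 # protocol_id -> (user_id, acquired_at)
        self._timeout = timeout_seconds  # expired locks are treated as free

    def try_acquire(self, protocol_id, user_id):
        """Return True if user_id now holds the edit lock for protocol_id."""
        now = time.monotonic()
        with self._guard:
            holder = self._locks.get(protocol_id)
            if (holder is not None
                    and holder[0] != user_id
                    and now - holder[1] < self._timeout):
                return False  # someone else is still editing
            self._locks[protocol_id] = (user_id, now)
            return True

    def release(self, protocol_id, user_id):
        """Free the lock, but only if user_id actually holds it."""
        with self._guard:
            holder = self._locks.get(protocol_id)
            if holder is not None and holder[0] == user_id:
                del self._locks[protocol_id]
```

Note the timeout: without it, a user who closes the browser mid-edit would leave the protocol locked until the server restarts.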

OTHER TIPS

I would implement a table in the database that manages the locking. For instance:

Columns: ProtocolID, EditingBeginDate, EditingEndDate

Then, whenever an edit is attempted in the application, you query this table; if the given protocol has an open editing window (an EditingBeginDate with no EditingEndDate), you know it is still being edited by another user. You can also enforce a maximum length for an editing session so that abandoned sessions don't leave records permanently locked. Just a suggestion :-D
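
A rough sketch of that check in Python with sqlite3, following the columns suggested above; the UserID column, the 30-minute session limit, and the function names are my own assumptions, and on a full RDBMS you would want SELECT ... FOR UPDATE or a unique index on open lock rows to make the check-and-insert race-free:

```python
import sqlite3
from datetime import datetime, timedelta

SCHEMA = """
CREATE TABLE IF NOT EXISTS ProtocolEditLock (
    ProtocolID       INTEGER NOT NULL,
    UserID           INTEGER NOT NULL,
    EditingBeginDate TEXT    NOT NULL,
    EditingEndDate   TEXT
);
"""

SESSION_LIMIT = timedelta(minutes=30)  # assumed cap on one editing session


def try_begin_edit(conn: sqlite3.Connection, protocol_id: int, user_id: int) -> bool:
    """Create a lock row unless the protocol already has an open, unexpired session."""
    now = datetime.utcnow()
    cutoff = (now - SESSION_LIMIT).isoformat()
    with conn:  # single transaction for the check-and-insert
        open_lock = conn.execute(
            "SELECT 1 FROM ProtocolEditLock "
            "WHERE ProtocolID = ? AND EditingEndDate IS NULL AND EditingBeginDate > ?",
            (protocol_id, cutoff),
        ).fetchone()
        if open_lock:
            return False  # someone else is still editing this protocol
        conn.execute(
            "INSERT INTO ProtocolEditLock (ProtocolID, UserID, EditingBeginDate) "
            "VALUES (?, ?, ?)",
            (protocol_id, user_id, now.isoformat()),
        )
        return True


def finish_edit(conn: sqlite3.Connection, protocol_id: int, user_id: int) -> None:
    """Close the editing window so the protocol becomes editable again."""
    with conn:
        conn.execute(
            "UPDATE ProtocolEditLock SET EditingEndDate = ? "
            "WHERE ProtocolID = ? AND UserID = ? AND EditingEndDate IS NULL",
            (datetime.utcnow().isoformat(), protocol_id, user_id),
        )
```

Expired sessions are simply ignored by the SELECT, so a crashed client never locks a protocol forever.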

So, it seems you need coarse-grained locks.

If you want the most scalable solution, you need to keep the locking information either in the database or in a distributed cache (the distributed cache will be faster in this case). The in-memory approach is not scalable at all: it fails as soon as you need more than one server. Also, don't forget to introduce lock time-outs so an abandoned lock can't block a protocol indefinitely.
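
For the distributed-cache option, a common pattern is one cache key per protocol with a time-to-live, e.g. Redis SET with the NX and EX options. A rough sketch using the redis-py client; the key prefix, TTL, and function names are illustrative assumptions:

```python
import uuid
from typing import Optional

import redis  # assumes the redis-py client is installed

r = redis.Redis(host="localhost", port=6379)

LOCK_TTL_SECONDS = 15 * 60  # lock expires on its own if the session is abandoned


def acquire_protocol_lock(protocol_id: int, user_id: int) -> Optional[str]:
    """Try to take the coarse-grained edit lock; return a token on success, None otherwise."""
    token = f"{user_id}:{uuid.uuid4()}"
    # SET ... NX EX is atomic: only one caller can create the key, and it
    # disappears automatically after the TTL, so an abandoned edit session
    # cannot hold the protocol forever.
    if r.set(f"protocol-lock:{protocol_id}", token, nx=True, ex=LOCK_TTL_SECONDS):
        return token
    return None


def release_protocol_lock(protocol_id: int, token: str) -> None:
    """Release the lock only if we still own it (check-and-delete; a small Lua
    script would make this fully atomic)."""
    key = f"protocol-lock:{protocol_id}"
    current = r.get(key)
    if current is not None and current.decode() == token:
        r.delete(key)
```

Because every web server talks to the same cache, this keeps working when you add more application servers, which is exactly where the purely in-memory approach breaks down.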

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow