Question

Scenario: I want to let multiple (2 to 20, probably) server applications use a single database using ADO.NET. I want individual applications to be able to take ownership of sets of records in the database, hold them in memory (for speed) in DataSets, respond to client requests on the data, perform updates, and prevent other applications from updating those records until ownership has been relinquished.

I'm new to ADO.NET, but it seems like this should be possible using transactions with Data Adapters (ADO.NET disconnected layer).

Question part 1: Is that the right way to try and do this?

Question part 2: If that is the right way, can anyone point me at any tutorials or examples of this kind of approach (in C#)?

Question part 3: If I want to be able to take ownership of individual records and release them independently, am I going to need a separate transaction for each record, and by extension a separate DataAdapter and DataSet to hold each record, or is there a better way to do that? Each application will likely hold ownership of thousands of records simultaneously.


Solution

  • How long were you thinking of keeping the transaction open for?
  • How many concurrent users are you going to support?

These are two of the questions you need to ask yourself. If the answer to the former is "a long time" and the answer to the latter is "many", then the approach will probably run into problems.

So, my answer to question one is: no, it's probably not the right approach.

If you take the transactional-lock approach then you are going to limit your scalability and response times. You could also run into database errors. For example, SQL Server (assuming you are using SQL Server) can be very greedy with locks and could lock more resources than you request or expect. The application could request row-level locks on the records it "owns", but SQL Server could escalate those row locks to a table lock. That would block other applications and could result in timeouts or perhaps deadlocks.

I think the best way to meet the requirements as you've stated them is to write a lock manager/record checkout system. Martin Fowler calls this a Pessimistic Offline Lock.
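A minimal sketch of such a checkout system in C#, assuming a dedicated RecordLocks table with a primary key on RecordId (the table, column names, and the error-number check are illustrative assumptions, not part of Fowler's pattern definition):

// Illustrative Pessimistic Offline Lock sketch: ownership of a record is a row
// in a RecordLocks table, enforced by the primary key on RecordId.
// The table, columns, and error number 2627 (PK violation) are assumptions.
using System.Data.SqlClient;

public class LockManager
{
    private readonly string _connectionString;

    public LockManager(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Try to take ownership of a record; fails if another application owns it.
    public bool TryAcquire(int recordId, string ownerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO RecordLocks (RecordId, OwnerId, AcquiredAt) " +
            "VALUES (@RecordId, @OwnerId, GETUTCDATE())", connection))
        {
            command.Parameters.AddWithValue("@RecordId", recordId);
            command.Parameters.AddWithValue("@OwnerId", ownerId);
            connection.Open();
            try
            {
                command.ExecuteNonQuery();
                return true;                                  // lock acquired
            }
            catch (SqlException ex) when (ex.Number == 2627)  // primary key violation
            {
                return false;                                 // someone else owns the record
            }
        }
    }

    // Relinquish ownership; only removes the lock row held by this owner.
    public void Release(int recordId, string ownerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "DELETE FROM RecordLocks WHERE RecordId = @RecordId AND OwnerId = @OwnerId",
            connection))
        {
            command.Parameters.AddWithValue("@RecordId", recordId);
            command.Parameters.AddWithValue("@OwnerId", ownerId);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Because ownership is enforced by the primary key, a competing INSERT simply fails; no long-running database transaction has to be held open while an application "owns" thousands of records.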

UPDATE

If you are using SQL Server 2008, you can set the lock escalation behavior at the table level:

ALTER TABLE T1 SET (LOCK_ESCALATION = DISABLE);

This will disable lock escalation in "most" situations and may help you.

OTHER TIPS

You actually need concurrency control, along with transaction support.

A transaction only comes into the picture when you perform multiple operations on the database. As soon as the connection is released, the transaction no longer applies.
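For example, here is a minimal sketch of an ADO.NET transaction; it only exists between BeginTransaction and Commit on an open connection (the Orders table, its columns, and the method name are placeholders):

using System.Data.SqlClient;

// Sketch only: the transaction groups commands issued on one open connection.
// Once the connection is closed, there is no transaction left to hold anything.
static void ShipOrder(string connectionString, int orderId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            using (var command = connection.CreateCommand())
            {
                command.Transaction = transaction;
                command.CommandText =
                    "UPDATE Orders SET Status = 'Shipped' WHERE OrderId = @Id";
                command.Parameters.AddWithValue("@Id", orderId);
                command.ExecuteNonQuery();
            }
            transaction.Commit();
        }
    } // connection closed here: the transaction (and its locks) cannot outlive this scope
}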

Concurrency control lets you work with multiple updates to the same data. If two or more clients hold the same set of data and one needs to read or write it after another client has updated it, concurrency control lets you decide which set of updates to keep and which to ignore. A full discussion of concurrency is beyond the scope of this answer; check out this article for more information.
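For example, the disconnected layer handles this with optimistic concurrency: SqlCommandBuilder generates UPDATE statements that compare the original values, and SqlDataAdapter.Update throws DBConcurrencyException when another client changed a row first. A rough sketch (the Orders table and its columns are illustrative):

using System.Data;
using System.Data.SqlClient;

// Sketch of optimistic concurrency with the disconnected layer.
static void UpdateOrderStatus(string connectionString)
{
    var adapter = new SqlDataAdapter("SELECT OrderId, Status FROM Orders", connectionString);
    var builder = new SqlCommandBuilder(adapter);   // builds UPDATE commands that check original values

    var table = new DataTable();
    adapter.Fill(table);                            // work on the data while disconnected

    if (table.Rows.Count > 0)
    {
        table.Rows[0]["Status"] = "Shipped";        // local edit
    }

    try
    {
        adapter.Update(table);                      // push changes back to the database
    }
    catch (DBConcurrencyException)
    {
        // Another client updated the row in the meantime: decide here whether
        // to reload the row, merge the changes, or overwrite them.
    }
}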

Licensed under: CC-BY-SA with attribution