Question

I have an EF code-first context which represents a queue of jobs which a processing application can retrieve and run. These processing applications can be running on different machines but pointing at the same database.

The context provides a method that returns a QueueItem if there is any work to do, or null, called CollectQueueItem.

To ensure no two applications can pick up the same job, the collection takes place in a transaction with an ISOLATION LEVEL of REPEATABLE READ. This means that if there are two attempts to pick up the same job at the same time, one will be chosen as the deadlock victim and rolled back. We can handle this by catching the DbUpdateException and returning null.

Here is the code for the CollectQueueItem method:

public QueueItem CollectQueueItem()
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
    {
        try
        {
            var queueItem = this.QueueItems.FirstOrDefault(qi => !qi.IsLocked);

            if (queueItem != null)
            {
                queueItem.DateCollected = DateTime.UtcNow;
                queueItem.IsLocked = true;

                this.SaveChanges();

                transaction.Complete();

                return queueItem;
            }
        }
        catch (DbUpdateException) //we might have been the deadlock victim. No matter.
        { }

        return null;
    }
}

I ran a test in LINQPad to check that this works as expected. Here is the test:

var ids = Enumerable.Range(0, 8).AsParallel().SelectMany(i =>
    Enumerable.Range(0, 100).Select(j => {
        using (var context = new QueueContext())
        {
            var queueItem = context.CollectQueueItem();
            return queueItem == null ? -1 : queueItem.OperationId;
        }
    })
);

var sw = Stopwatch.StartNew();
var results = ids.GroupBy(i => i).ToDictionary(g => g.Key, g => g.Count());
sw.Stop();

Console.WriteLine("Elapsed time: {0}", sw.Elapsed);
Console.WriteLine("Deadlocked: {0}", results.Where(r => r.Key == -1).Select(r => r.Value).SingleOrDefault());
Console.WriteLine("Duplicates: {0}", results.Count(r => r.Key > -1 && r.Value > 1));


//IsolationLevel = IsolationLevel.RepeatableRead:
//Elapsed time: 00:00:26.9198440
//Deadlocked: 634
//Duplicates: 0

//IsolationLevel = IsolationLevel.ReadUncommitted:
//Elapsed time: 00:00:00.8457558
//Deadlocked: 0
//Duplicates: 234

I ran the test a few times. Without the REPEATABLE READ isolation level, the same job is retrieved by different threads (seen in the 234 duplicates). With REPEATABLE READ, jobs are only retrieved once, but performance suffers and there are 634 deadlocked transactions.

My question is: is there a way to get this behaviour in EF without the risk of deadlocks or conflicts? I know in real life there will be less contention, as the processors won't be continually hitting the database, but nonetheless: is there a way to do this safely without having to handle the DbUpdateException? Can I get performance closer to that of the version without the REPEATABLE READ isolation level? Or are deadlocks not that bad in fact, so that I can safely ignore the exception, let the processor retry after a few milliseconds, and accept that performance will be OK as long as not all the transactions happen at the same time?

Thanks in advance!

Solution

I'd recommend a different approach.

a) sp_getapplock. Use a SQL stored procedure (or raw SQL command) that takes an application lock. This gives you exclusive, cross-machine behaviour around whatever activity you need to control, which might involve reading from the DB or anything else, and it lets you keep using EF in the normal way.
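A minimal sketch of option (a), assuming EF6 on SQL Server. The method name, the lock resource name 'CollectQueueItem', and the 5-second timeout are illustrative choices, not part of the original answer; in practice you may also want to open the context's connection explicitly so every command in the scope shares it.

```csharp
// Sketch only: serializes job collection across all processors via a
// SQL Server application lock, so a plain read/update inside is safe.
public QueueItem CollectQueueItemWithAppLock()
{
    using (var transaction = new TransactionScope())
    {
        // sp_getapplock returns >= 0 on success, < 0 on timeout or error.
        // @LockOwner = 'Transaction' releases the lock on commit/rollback.
        var result = this.Database.SqlQuery<int>(
            @"DECLARE @r int;
              EXEC @r = sp_getapplock @Resource = 'CollectQueueItem',
                                      @LockMode = 'Exclusive',
                                      @LockOwner = 'Transaction',
                                      @LockTimeout = 5000;
              SELECT @r;").Single();

        if (result < 0)
            return null; // could not acquire the lock in time

        // Only one processor can reach this point at a time.
        var queueItem = this.QueueItems.FirstOrDefault(qi => !qi.IsLocked);
        if (queueItem != null)
        {
            queueItem.DateCollected = DateTime.UtcNow;
            queueItem.IsLocked = true;
            this.SaveChanges();
        }

        transaction.Complete();
        return queueItem;
    }
}
```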

OR

b) Optimistic concurrency http://msdn.microsoft.com/en-us/data/jj592904

//Object Property:
public byte[] RowVersion { get; set; }
//Object Configuration:
Property(p => p.RowVersion).IsRowVersion().IsConcurrencyToken();

A logical extension to the app lock, or usable just by itself, is a rowversion concurrency field on the DB. Allow the dirty read, BUT when someone goes to update the record as collected, the update fails if someone beat them to it: out-of-the-box EF optimistic locking. You can easily delete the "collected" job records later.

This might be the better approach unless you expect high levels of concurrency.

Other tips

As suggested by Phil, I used optimistic concurrency to ensure the job could not be processed more than once. I realised that rather than having to add a dedicated rowversion column I could use the IsLocked bit column as the ConcurrencyToken. Semantically, if this value has changed since we retrieved the row, the update should fail since only one processor should ever be able to lock it. I used the fluent API as below to configure this, although I could also have used the ConcurrencyCheck data annotation.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<QueueItem>()
        .Property(p => p.IsLocked)
        .IsConcurrencyToken();
}
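The data-annotation alternative mentioned above would look like this on the entity itself (the `Id` property is assumed here for completeness; the attribute comes from System.ComponentModel.DataAnnotations):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

public class QueueItem
{
    public int Id { get; set; }
    public int OperationId { get; set; }
    public DateTime? DateCollected { get; set; }

    [ConcurrencyCheck] // equivalent to .IsConcurrencyToken() in the fluent API
    public bool IsLocked { get; set; }
}
```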

I was then able to simplify the CollectQueueItem method, losing the TransactionScope entirely and catching the more specific DbUpdateConcurrencyException.

public OperationQueueItem CollectQueueItem()
{
    try
    {
        var queueItem = this.QueueItems.FirstOrDefault(qi => !qi.IsLocked);

        if (queueItem != null)
        {
            queueItem.DateCollected = DateTime.UtcNow;
            queueItem.IsLocked = true;

            this.SaveChanges();
            return queueItem;
        }
    }
    catch (DbUpdateConcurrencyException) //someone else grabbed the job.
    { }

    return null;
}
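With this design, a null return can mean either "queue empty" or "another processor won the race", so a caller that wants to retry on contention (as the question considers) could wrap the call in a small polling helper. This helper is hypothetical, not part of the original post, and uses the QueueItem/QueueContext types from the question:

```csharp
using System.Threading;

// Hypothetical caller-side helper: retries a few times with a short delay,
// since null may just mean another processor grabbed the job first.
public static QueueItem TryCollectWithRetry(int attempts = 3, int delayMs = 50)
{
    for (var i = 0; i < attempts; i++)
    {
        using (var context = new QueueContext())
        {
            var item = context.CollectQueueItem();
            if (item != null)
                return item;
        }
        Thread.Sleep(delayMs); // back off briefly before the next attempt
    }
    return null; // queue empty or persistently contended
}
```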

I reran the tests, and you can see it's a great compromise: no duplicates, nearly 100x faster than with REPEATABLE READ, and no deadlocks, so the DBAs won't be on my case. (The 624 results counted under "Deadlocked" below are now just collisions where another processor grabbed the job first.) Awesome!

//Optimistic Concurrency:
//Elapsed time: 00:00:00.5065586
//Deadlocked: 624
//Duplicates: 0
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow