Question

I have a peculiar situation. I have tables that are constantly accessed by different parts of our code and by thousands of clients, so we use transactions when doing simple updates and inserts on those tables. The problem is we keep getting deadlock errors. Does anyone have any idea how I can alleviate this problem?


Solution

Deadlocks can arise for many reasons and combinations thereof:

  • Poor schema design

  • Incorrect indexes for your query workload

  • Poorly written TSQL

  • Aggressive transaction isolation levels and/or long running open transactions

  • Poor application access patterns

  • Low spec or incorrectly configured hardware

All of these are common.
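One of the cheapest access-pattern fixes is to make every transaction touch shared tables in the same order, and to retry when you are chosen as the deadlock victim (error 1205). A minimal sketch, using hypothetical `dbo.Accounts` and `dbo.Orders` tables and parameters:

```sql
-- Hypothetical tables and parameters, for illustration only.
-- Every transaction in the application touches Accounts BEFORE Orders,
-- so two sessions can never wait on each other's locks in opposite order.
DECLARE @Retries int = 3;

WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
            UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @AccountId;
            UPDATE dbo.Orders   SET Status  = 'Paid'            WHERE OrderId   = @OrderId;
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- 1205 = chosen as deadlock victim; the whole transaction rolled
        -- back cleanly, so it is safe to retry. Re-throw anything else.
        IF ERROR_NUMBER() = 1205
            SET @Retries = @Retries - 1;
        ELSE
            THROW;
    END CATCH
END
```

Keeping the transaction this short (no user interaction, no round trips inside the transaction) also shrinks the window in which locks are held.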

I suggest you read up on each of these areas before changing anything.

OTHER TIPS

This problem isn't too peculiar -- it's typical when developers don't know much about how locking works, treat transactions as black boxes, and expect their solutions to scale anyway.

Mitch is right in the comments about paying someone who is an expert -- this is a problem that's too big for any solution on SO. You are going to need to be armed with traces of queries causing deadlocks and you're going to have to analyze everything from your indexes to your table design, to your transaction isolation levels, to your query patterns.
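On the isolation-level front, a common first step on SQL Server is enabling read committed snapshot isolation, so readers see row versions instead of blocking on writers' locks. This removes a whole class of reader/writer deadlocks, at the cost of extra tempdb version-store activity -- test before applying in production. A sketch, with `YourDb` standing in for your database name:

```sql
-- Readers under READ COMMITTED now use row versioning instead of
-- shared locks, so they no longer block (or deadlock with) writers.
-- WITH ROLLBACK IMMEDIATE kicks out open transactions so the
-- ALTER can take the exclusive database lock it needs.
ALTER DATABASE YourDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Verify the setting took effect:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDb';
```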

I suggest starting with SQL Server Profiler and setting up a trace that generates a deadlock graph. That will at least identify your problem queries and the resources that are deadlocking. Set up another trace looking for slow queries (say, longer than 100 ms) and speed those up, too: the longer your queries run, the higher the probability of lock contention.
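If you would rather not run Profiler, recent SQL Server versions already capture deadlock graphs in the built-in `system_health` Extended Events session, and you can pull them out with a query like the following (a standard pattern against the ring buffer target):

```sql
-- Extract xml_deadlock_report events captured by the always-on
-- system_health Extended Events session. Each row is a deadlock
-- graph you can save as .xdl and open in SSMS.
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT XEvent.query('.') AS XEvent
    FROM (
        SELECT CAST(st.target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets AS st
        JOIN sys.dm_xe_sessions AS s
            ON s.[address] = st.event_session_address
        WHERE s.name = 'system_health'
          AND st.target_name = 'ring_buffer'
    ) AS tgt
    CROSS APPLY tgt.TargetData.nodes(
        'RingBufferTarget/event[@name="xml_deadlock_report"]'
    ) AS XEventData(XEvent)
) AS deadlocks;
```

The ring buffer is circular, so older deadlocks age out; capture the graphs soon after they occur.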

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow