Question

I have a scenario where a user action on screen results in new records being created in about 50 different tables, in real time. The use case requires that the newly created records be immediately available for the user to modify, so offline or delayed creation is not an option.

Having said that, the obvious problem is that the insert statements (along with some additional manipulation statements) are wrapped in a transaction, which makes it a really lengthy transaction. It runs for about 30 seconds and often times out or blocks other queries.

The transaction is required for atomicity. Is there a better way to split the transaction while still retaining consistency? Or any other way to improve the current situation?


Solution

"insert queries are waiting on other (mostly select) queries that are running in parallel at that moment"

You should consider using a row-versioning based isolation level (i.e. SNAPSHOT), because under row-versioned isolation levels reads don't block writes and writes don't block reads. I would start by enabling READ_COMMITTED_SNAPSHOT and test with that:

ALTER DATABASE [...] SET READ_COMMITTED_SNAPSHOT ON;

I recommend reading the linked article for an explanation of the implications and trade-offs of row-versioning.
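The `ALTER DATABASE` syntax above suggests SQL Server. As a sketch of the other half of the row-versioning story: besides READ_COMMITTED_SNAPSHOT (which changes the default behavior of READ COMMITTED), you can also allow sessions to opt in to full SNAPSHOT isolation explicitly. The database and table names below are placeholders.

```sql
-- Allow explicit SNAPSHOT transactions
-- (independent of the READ_COMMITTED_SNAPSHOT setting):
ALTER DATABASE [YourDb] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A long-running reader can then opt in per session, reading
-- row versions instead of taking shared locks that would block
-- the insert transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = 42;
COMMIT;
```

Note that row versioning shifts load to tempdb's version store, which is one of the trade-offs to measure before enabling it in production.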

Other tips

Based on the comment exchange, I believe you have to look at both the insert transaction and the concurrent queries at the same time. You want to accommodate their load without losing transactional integrity. The available optimization techniques include:

  1. Adding access indexes whenever you notice slow constructs (for example, nested loops over large data sets) in the execution plans of frequently seen or slowly executing queries.

  2. Adding covering indexes. These indexes carry extra columns beyond the lookup columns, which lets a particular query avoid a trip to the table altogether. This is especially efficient when the table is wide and the covering index narrow, and it can also avoid locking conflicts between UPDATEs and SELECTs that touch different columns of the same rows.

  3. Denormalization. For example, switching some of the queries to access indexed views instead of the physical tables, or secondary tables fed by triggers on updates to the primary tables. These are costly, double-edged techniques and should only be considered for resolving proven top bottlenecks.
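To make item 2 concrete, here is a hypothetical covering index in SQL Server syntax; the table and column names are illustrative only. A query that filters on `CustomerId` and reads only `OrderDate` and `Total` can be satisfied entirely from this index, with no key lookup into the base table:

```sql
-- The INCLUDE columns are stored at the leaf level of the index
-- but are not part of the key, keeping the key itself narrow.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);
```

Because such a SELECT never touches the base table rows, it is also less likely to block, or be blocked by, UPDATEs that modify columns outside the index.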

Make only those changes where the measured speed-up is substantial, as none of these techniques comes for free in terms of performance. Never optimize without taking performance measurements at each step.

The following is trivial, but worth mentioning for completeness: keep your statistics up to date (ANALYZE, UPDATE STATISTICS, etc., depending on your database engine), both while you analyze the execution plans and in production use.
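In SQL Server terms, refreshing statistics can be sketched as follows; `dbo.Orders` is again a placeholder table name:

```sql
-- Refresh statistics on a single table; FULLSCAN is more
-- accurate than the default sampling, but more expensive:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh out-of-date statistics across the whole database:
EXEC sp_updatestats;
```

Stale statistics can make the optimizer choose the very nested-loop plans over large data sets mentioned above, so refresh them before judging an execution plan.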

License: CC-BY-SA with attribution