Question

I just finished reading an excellent article on isolation levels here. Our company will soon start development on a rewrite and expansion of our current product. My desire is to have an OLTP database and a separate, more denormalized, reporting database. Assuming we're somewhat disciplined and most of our ad-hoc and reporting type queries actually go to the reporting database, does it sound appropriate that our OLTP database have a default isolation level of Read Committed (we won't need a more stringent isolation level for OLTP) and our reporting database be Snapshot Isolation (probably RCSI)?

My thinking is that if our OLTP database is truly an OLTP database and not serving double duty as a reporting DB, we won't need snapshot isolation or the overhead it entails. But snapshot isolation would be desirable on the reporting database, so that readers are not blocked by the constant flow of incoming data and reading the last committed version of a row would be acceptable.


Solution

Just to add to the other answer.

SQL Server supports two different flavors of READ COMMITTED: the legacy locking READ COMMITTED and READ COMMITTED SNAPSHOT ISOLATION (RCSI). If you've ever built and supported a high-volume OLTP application on locking READ COMMITTED, you know why RCSI was introduced (in addition to making it easier to port Oracle applications to SQL Server).
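For reference, both row-versioning options are switched on at the database level; a minimal sketch, with AppDb standing in for your database name:

-- Make plain READ COMMITTED use row versions (RCSI); the termination clause
-- kicks out other connections so the change can take effect immediately.
ALTER DATABASE AppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Optionally also allow the full SNAPSHOT isolation level for sessions that request it.
ALTER DATABASE AppDb SET ALLOW_SNAPSHOT_ISOLATION ON;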

Locking READ COMMITTED makes concurrency tricky to get right and is prone to deadlocking, since readers block writers and writers block readers. Deadlocks are a kind of bug in your application, but they're famously hard to find in testing. So here you have a chance to make a choice that reduces the number of hard-to-find-in-testing bugs; that's worth a lot. RCSI also increases the concurrency of your application, making it easier to scale up and use multiple cores.
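A minimal two-session sketch of that blocking behavior, using a hypothetical Orders table:

-- Session 1: writer holds an exclusive row lock until it commits.
BEGIN TRANSACTION;
UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 42;
-- ...transaction deliberately left open...

-- Session 2: reader at the default READ COMMITTED level.
-- Locking READ COMMITTED: this SELECT waits on session 1's lock.
-- RCSI: it returns the last committed version of the row immediately.
SELECT Status FROM Orders WHERE OrderId = 42;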

So RCSI increases the scalability and reliability of your application, at the cost of a little extra bookkeeping for UPDATEs and DELETEs (and for INSERTs if you have a trigger), plus an extra 14 bytes per row.
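If you want to see what that bookkeeping costs on your workload, newer versions of SQL Server (2016 SP2 / 2017 and later, so check availability on your build) expose the tempdb version store through a DMV; something along these lines shows the footprint per database:

-- Approximate version store footprint per database.
SELECT DB_NAME(database_id) AS database_name,
       reserved_space_kb / 1024.0 AS reserved_space_mb
FROM sys.dm_tran_version_store_space_usage
ORDER BY reserved_space_kb DESC;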

RCSI is also simpler to program against overall. The main thing is that you sometimes need to opt in to a locking read with the UPDLOCK table hint when you want to read a row and immediately update it, and need to ensure that no other session does the same thing concurrently.
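A sketch of that read-then-update pattern, using a hypothetical Accounts table; the UPDLOCK hint forces a locking read even under RCSI, so two sessions can't both act on the same stale value:

BEGIN TRANSACTION;

DECLARE @balance money;

-- UPDLOCK makes this a locking read: a concurrent session running the same
-- code waits here instead of reading the same pre-update balance.
SELECT @balance = Balance
FROM Accounts WITH (UPDLOCK)
WHERE AccountId = 42;

UPDATE Accounts SET Balance = @balance - 100 WHERE AccountId = 42;

COMMIT;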

OTHER TIPS

I wouldn't assume that SNAPSHOT or READ COMMITTED SNAPSHOT will necessarily add enough overhead to degrade performance -- the reduced wait times may actually increase throughput.
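One way to gauge how much lock waiting is costing you today is to look at the cumulative lock wait types; a standard DMV query along these lines gives a rough picture:

-- Lock waits since the last restart; large LCK_M_* totals suggest the
-- reader/writer blocking that row versioning would reduce.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC;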

The danger with RCSI and SNAPSHOT has more to do with how the app is coded -- if your programmers assume the blocking semantics of locking READ COMMITTED while the queries actually execute under row versioning, you could be in for trouble. For example:

UPDATE foo SET bar = 1 + (SELECT MAX(bar) FROM foo);

will likely cause problems in high-throughput situations, because multiple concurrent sessions can read the same MAX(bar) value at the same time and write duplicate results.
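If the pattern can't be redesigned (a SEQUENCE or an IDENTITY column is usually the better fix), one workaround is to force the subquery back to a locking, range-protected read so concurrent statements serialize; a sketch against the same foo table from the example above:

-- UPDLOCK + HOLDLOCK make the MAX() read take and hold locks, so two concurrent
-- statements can no longer both see the same maximum under RCSI or SNAPSHOT.
UPDATE foo
SET bar = 1 + (SELECT MAX(bar) FROM foo WITH (UPDLOCK, HOLDLOCK));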

However, you will likely still need some form of non-OLAP operational reports and queries, and snapshot mode can be a great boon for performance there.
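For those operational reports, a session can opt in to versioned reads explicitly; this sketch assumes ALLOW_SNAPSHOT_ISOLATION is ON for the database (as shown earlier) and uses a placeholder Orders table:

-- Every query inside the transaction sees one consistent point-in-time snapshot
-- and neither blocks, nor is blocked by, the incoming writes.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT OrderDate, COUNT(*) AS orders_per_day
FROM Orders
GROUP BY OrderDate;
COMMIT;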

My point is that you can design and develop your OLTP system for RCSI, and you should consider it. After all, that's pretty much how most OLTP systems on Oracle are written.

Licensed under: CC-BY-SA with attribution