Question

I am executing several long-running SQL queries as part of a reporting module. These queries are constructed dynamically at run-time. Depending on the user's input, they may be single- or multi-statement, have one or more parameters, and operate on one or more database tables - in other words, their form cannot be easily anticipated.

Currently, I am just executing these statements on an ordinary SqlConnection, i.e.:

using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();
    // command 1
    // command 2
    // ...
    // command N
}

Because these queries (really query batches) can take a while to execute, I am concerned about locks on tables holding up reads/writes for other users. It is not a problem if the data for these reports changes during the execution of the batch; the report queries should never take precedence over other operations on those tables, nor should they lock them.

For most long-running/multi-statement operations that involve modifying data, I would use transactions. The difference here is that these report queries are not modifying any data. Would I be correct in wrapping these report queries in an SqlTransaction in order to control their isolation level?

i.e.:

using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();

    using (SqlTransaction tr = cn.BeginTransaction(IsolationLevel.ReadUncommitted)) {
        // command 1
        // command 2
        // ...
        // command N

        tr.Commit();
    }
}

Would this achieve my desired outcome? Is it correct to commit a transaction, even though no data has been modified? Is there another approach?

Solution

Another approach might be to issue, against the connection:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

which achieves the same intent, without messing with a transaction. Or you could use the WITH (NOLOCK) hint on the tables in your query, which has the advantage of not changing the connection at all.
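
For example (a minimal sketch; ConnectionString and the numbered commands are placeholders carried over from the question), the statement is simply executed once on the open connection before the batch runs:

using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();

    // This applies to the session/connection, not to any transaction:
    using (SqlCommand cmd = new SqlCommand(
            "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;", cn)) {
        cmd.ExecuteNonQuery();
    }

    // command 1
    // command 2
    // ...
    // command N
}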

Importantly, note (and this is unusual) that however it gets changed (transaction, TransactionScope, explicit SET, etc.), the isolation level is not reset when the underlying connection is returned to the pool and handed out again. This means that if your code changes the isolation level (directly or indirectly), then none of your code knows what the isolation level of a new connection is:

using(var conn = new SqlConnection(connectionString)) {
    conn.Open();
    // isolation level here could be ANYTHING; it could be the default
    // if it is a brand new connection, or could be whatever the last
    // connection was when it finished
}
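
If your code does change the level, one defensive option (a sketch; the pool does none of this for you) is to reset it explicitly every time you open a connection:

using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();

    // Force a known starting point, whatever the pooled connection was
    // last left at (READ COMMITTED is the SQL Server default):
    using (SqlCommand cmd = new SqlCommand(
            "SET TRANSACTION ISOLATION LEVEL READ COMMITTED;", cn)) {
        cmd.ExecuteNonQuery();
    }

    // ... run commands at a known isolation level
}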

All of which makes WITH (NOLOCK) quite tempting.

OTHER TIPS

I agree with Marc, but alternatively you could use the NOLOCK query hint on the affected tables. This would give you the ability to control it on a table-by-table level.
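
For example (a sketch; the table and column names are made up), the hint goes after each table reference it should apply to:

SELECT o.OrderId, SUM(l.Amount) AS Total
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.OrderLines AS l WITH (NOLOCK)
    ON l.OrderId = o.OrderId
GROUP BY o.OrderId;

The hint only affects the table it is attached to; any table referenced without it is still read under the session's normal isolation level.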

The problem with running any queries without taking shared locks is that you leave yourself open to "non-deterministic" results (dirty reads of data that may later be rolled back), and business decisions should not be made on such data.

A better approach may be to investigate either the SNAPSHOT or READ_COMMITTED_SNAPSHOT isolation levels. These give you protection against transactional anomalies without taking shared locks. The trade-off is that they increase I/O against tempdb, where the row versions are kept. SNAPSHOT is requested per session or transaction (once it has been enabled on the database), while READ_COMMITTED_SNAPSHOT is switched on for the database as a whole.
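
For example (a sketch; the database name is made up, and both are one-time database-level settings that require appropriate permissions):

-- Allow sessions to explicitly request SNAPSHOT isolation:
ALTER DATABASE MyReportingDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Or make the default READ COMMITTED level use row versioning
-- instead of shared locks:
ALTER DATABASE MyReportingDb SET READ_COMMITTED_SNAPSHOT ON;

With the first option enabled, the reporting code can request it per transaction, e.g. cn.BeginTransaction(IsolationLevel.Snapshot) in ADO.NET or SET TRANSACTION ISOLATION LEVEL SNAPSHOT in T-SQL; with the second, existing code needs no changes at all.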

Hope this helps
