Question

I have a SQL Server table with an nvarchar(50) column. This column must be unique and can't be the PK, since another column is the PK. So I have created a non-clustered unique index on this column.
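
Roughly, the setup looks like this (table, column, and index names here are just placeholders):

CREATE TABLE [dbo].[my_table]
(
    [id]                 bigint       NOT NULL,
    [my_nvarchar_column] nvarchar(50) NOT NULL,
    CONSTRAINT [PK_my_table] PRIMARY KEY CLUSTERED ([id])
);

CREATE UNIQUE NONCLUSTERED INDEX [UX_my_table_my_nvarchar_column]
    ON [dbo].[my_table] ([my_nvarchar_column]);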

While a large number of insert statements run in a serializable transaction, I want to perform select queries based on this column only, in a different transaction. But these inserts seem to lock the table. If I change the datatype of the unique column to bigint, for example, no locking occurs.

Why doesn't nvarchar work, when bigint does? How can I achieve the same thing while keeping nvarchar(50) as the datatype?


Solution

Mystery solved after all! Rather silly situation, I guess...

The problem was in the select statement. The where clause was missing the quotes, but by a devilish coincidence the existing data were all numeric, so the select wasn't failing; it simply didn't execute until the inserts committed. When the first alphanumeric value was inserted, the select statement began failing with 'Error converting data type nvarchar to numeric'.

E.g., instead of

SELECT [my_nvarchar_column]  
FROM [dbo].[my_table]  
WHERE [my_nvarchar_column] = '12345'

the select statement was

SELECT [my_nvarchar_column]  
FROM [dbo].[my_table]  
WHERE [my_nvarchar_column] = 12345

I guess an implicit cast was performed on the column (because of data type precedence), so the unique index could not be used for a seek; the query scanned the table and was blocked by the inserts. Fixed the statement and everything works as expected now.

Thanks, everyone, for your help, and sorry for the rather silly issue!

OTHER TIPS

First, you can change the PK to be a non-clustered index, and then you could create a clustered index on this field. Of course, that may be a bad idea based on your usage, or simply not help.
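
If you go that route, a rough sketch (constraint and index names are hypothetical, and any existing unique index on the column, plus anything referencing the current key, would have to be dealt with first):

ALTER TABLE [dbo].[my_table] DROP CONSTRAINT [PK_my_table];

ALTER TABLE [dbo].[my_table]
    ADD CONSTRAINT [PK_my_table] PRIMARY KEY NONCLUSTERED ([id]);

CREATE UNIQUE CLUSTERED INDEX [UX_my_table_my_nvarchar_column]
    ON [dbo].[my_table] ([my_nvarchar_column]);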

You might have a use case for a covering index; see the previous question re: covering indexes.

You might be able to make your "other queries" non-blocking by changing their isolation level.
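
For example (a sketch, with [my_database] as a placeholder), SNAPSHOT isolation lets the readers see a consistent version of the data without waiting on the inserts' locks:

ALTER DATABASE [my_database] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- in the session running the read-only query:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

SELECT [my_nvarchar_column]
FROM [dbo].[my_table]
WHERE [my_nvarchar_column] = N'12345';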

It is relatively uncommon to genuinely need to insert a large number of rows in a single transaction. You may be able to simply not use a transaction, or split the work into smaller transactions to avoid locking large sections of the table. E.g., you can insert the records into a pending table (one that is not otherwise used in normal activity) in a transaction, then migrate those records in smaller transactions to the main table, if real-time posting to the main table is not required.
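
A minimal sketch of the first half of that idea, assuming a hypothetical staging table [dbo].[my_table_pending] with the same columns as the main table:

BEGIN TRANSACTION;

INSERT INTO [dbo].[my_table_pending] ([id], [my_nvarchar_column])
VALUES (1, N'ABC123'),
       (2, N'DEF456');   -- ... the bulk of the inserts go here ...

COMMIT TRANSACTION;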

ADDED

Perhaps the most obvious question: are you sure you have to use a serializable transaction to insert a large number of records? Serializable transactions are rarely necessary outside of financial processing, and they impose a high concurrency cost compared to the other isolation levels.

ADDED

Based on your comment about "all or none", you are describing atomicity, not serializability. I.e., you might be able to use a different isolation level for your large insert transaction, and still get atomicity.
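
A sketch of that: the default READ COMMITTED level still gives you all-or-nothing behavior for the batch, and SET XACT_ABORT ON makes any runtime error roll the whole transaction back:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET XACT_ABORT ON;

BEGIN TRANSACTION;

INSERT INTO [dbo].[my_table] ([id], [my_nvarchar_column]) VALUES (1, N'ABC123');
INSERT INTO [dbo].[my_table] ([id], [my_nvarchar_column]) VALUES (2, N'DEF456');
-- ... many more inserts ...

COMMIT TRANSACTION;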

Second, I notice you mention a large number of insert statements. That sounds like you should be able to push these inserts into a pending/staging table, then perform a single insert, or batches of inserts, from the staging table into the production table. Yes, it is more work, but you may simply have a problem that requires the extra effort.
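
One way to do the staging-to-production move in small batches (a sketch; it assumes the hypothetical [dbo].[my_table_pending] staging table, and that the main table has no triggers or foreign keys, which OUTPUT ... INTO requires):

DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (1000) p
    OUTPUT deleted.[id], deleted.[my_nvarchar_column]
        INTO [dbo].[my_table] ([id], [my_nvarchar_column])
    FROM [dbo].[my_table_pending] AS p;

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;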

You may want to add the NOLOCK hint (a.k.a. READUNCOMMITTED) to your query. It will allow you to perform a "dirty read" of the data that has already been inserted.

e.g.

SELECT [my_nvarchar_column] FROM [dbo].[locked_table] WITH (NOLOCK)

Take a look at a better explanation here:

http://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/

And the READUNCOMMITTED section here:

http://technet.microsoft.com/en-us/library/ms187373.aspx

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow