How can I prove that the following T-SQL is bad for performance?
01-11-2019
Question
I have recently inherited a codebase with a large number of Stored Procedures. The system they are supporting is encountering numerous performance problems which I am looking into.
A number of the Stored Procedures have a pattern like this:
- Create a temp table.
- Build up a dynamic SQL query to insert a bunch of records, e.g.

DECLARE @sql VARCHAR(MAX);
-- A table variable (@tempTable) declared outside EXEC() is not visible inside it,
-- so the target here must be a temp table (#tempTable), matching the step above.
SET @sql = 'INSERT INTO #tempTable SELECT SomeColumn, SomeColumn2, SomeColumn3 FROM MyTable WHERE 1 = 1';
IF @someParam = [SomeValue]
    SET @sql = @sql + ' AND SomeColumn = [SomeValue]';
IF @someOtherParam = [SomeOtherValue]
    SET @sql = @sql + ' AND SomeOtherColumn = [SomeOtherValue]';
- Execute the dynamic SQL:
EXEC(@sql);
- Select from the temp table and bring in a bunch of additional information to return to the client:

SELECT ... FROM #tempTable INNER JOIN ...
My immediate thoughts are:
- There is dynamic SQL, so no cached plans, meaning a plan is generated on every execution.
- There is an INSERT ... SELECT pattern, so table locking is more likely to be an issue.
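One way to test the "no cached plans" belief directly, rather than arguing from theory, is to call the procedure a couple of times and then inspect the plan cache. A sketch, assuming you have VIEW SERVER STATE permission; the LIKE fragment is a placeholder for a piece of your actual dynamic statement:

-- Run the stored procedure twice, then check usecounts:
-- if the second call reused the plan, usecounts will be greater than 1.
SELECT cp.usecounts,
       cp.objtype,          -- 'Adhoc' / 'Prepared' / 'Proc'
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%INSERT INTO #tempTable%'       -- fragment of the dynamic SQL
  AND st.text NOT LIKE '%dm_exec_cached_plans%';    -- exclude this query itself

If usecounts climbs on repeated identical calls, the dynamic SQL is being cached and reused after all; if you instead see a new Adhoc entry per distinct string, plan reuse really is the problem.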
I have re-written some of the Stored Procedures in this way instead:
SELECT ...
FROM MyTable
INNER JOIN ...
WHERE (@someParam != SomeValue OR SomeColumn = SomeValue)
  AND (@someOtherParam != SomeOtherValue OR SomeOtherColumn = SomeOtherValue)
From comparing execution plans and client statistics in SQL Server Management Studio, I have not been able to show that the re-written Stored Procedures are any faster, so I am apprehensive about suggesting wholesale re-writes of all Stored Procedures.
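For a side-by-side comparison that goes beyond eyeballing plans, the two versions can be run with I/O and timing statistics switched on. A sketch; the procedure names and parameter value are hypothetical stand-ins for your own, and the DBCC commands clear server-wide caches, so only use them on a test server:

DBCC FREEPROCCACHE;        -- caution: clears the plan cache (test server only)
DBCC DROPCLEANBUFFERS;     -- caution: clears the buffer pool (test server only)

SET STATISTICS IO ON;      -- logical/physical reads per table
SET STATISTICS TIME ON;    -- parse/compile and execution CPU + elapsed time

EXEC dbo.OriginalProcedure  @someParam = 'SomeValue';  -- hypothetical name
EXEC dbo.RewrittenProcedure @someParam = 'SomeValue';  -- hypothetical name

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The Messages tab then gives you concrete logical-read and CPU numbers to compare, which are more repeatable evidence than elapsed time alone.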
I am trying to set up some profiling of a live customer scenario, but as yet have been unable to prove my thoughts.
Can anyone offer any confirmation of the theory behind my thoughts, or any better ways of proving my suspicions?
The problem is I have read that dynamic SQL is not always a closed case - i.e. it depends on how it is used. My understanding of locking also falls down at the fact that nowhere can I find 100% confirmation of how this type of INSERT ... SELECT will lock tables.
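Rather than looking for a blanket rule, the locks taken by this particular INSERT ... SELECT can be observed directly: run it inside an open transaction in one session and query sys.dm_tran_locks from another. A sketch; the session id is a placeholder you read off from the first session (e.g. via @@SPID):

-- Session 1: start the INSERT ... SELECT but hold the transaction open.
BEGIN TRANSACTION;
INSERT INTO #tempTable
SELECT SomeColumn, SomeColumn2, SomeColumn3
FROM MyTable;
-- (do not commit yet)

-- Session 2: inspect the locks the statement is holding.
SELECT resource_type,        -- OBJECT / PAGE / KEY / RID ...
       request_mode,         -- S, IS, IX, X ...
       request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 52;  -- placeholder: @@SPID of session 1

-- Session 1: release the locks when done.
COMMIT TRANSACTION;

This shows whether the source table is held under shared row/page locks or has escalated to a table-level lock, which is the question the theory alone cannot settle.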