Question

I have a query like this on SQL Server 2008:

DECLARE @START_DATE DATETIME
SET @START_DATE = GETDATE()

SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > DATEADD(MINUTE, -1440, @START_DATE)

In the SELECT query above, does SQL Server optimize the query so that the DATEADD result is not calculated over and over again, or is it my responsibility to store the DATEADD result in a variable beforehand?


Solution

Surprisingly, I've found that using GETDATE() inline seems to be more efficient than performing this type of calculation beforehand.

DECLARE @sd1 DATETIME, @sd2 DATETIME;
SET @sd1 = GETDATE();

-- 1: DATEADD applied inline to a variable
SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, @sd1);

-- 2: DATEADD applied inline to GETDATE()
SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, GETDATE());

-- 3: DATEADD calculated beforehand and stored in a variable
SET @sd2 = DATEADD(MINUTE, -1440, @sd1);

SELECT * FROM dbo.table
WHERE datetime_column > @sd2;

If you check the plans on those, the middle query (the one with GETDATE() inline) will always come out with the lowest cost (but not always the lowest elapsed time). Of course it may depend on your indexes and data, and you should not assume, based on one query, that the same pre-emptive optimization will work on another. My instinct would be to avoid performing any calculations inline and instead use the @sd2 variation above... but I've learned that I can't always trust my instinct, and I can't make general assumptions based on behavior I see in particular scenarios.
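If you want to compare them yourself, a minimal sketch (reusing the dbo.table and datetime_column placeholder names from above) is to turn on the time and I/O statistics, run the three variations back to back, and compare the Messages output alongside the estimated plan costs:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

DECLARE @sd1 DATETIME, @sd2 DATETIME;
SET @sd1 = GETDATE();
SET @sd2 = DATEADD(MINUTE, -1440, @sd1);

SELECT * FROM dbo.table WHERE datetime_column > DATEADD(MINUTE, -1440, @sd1);      -- 1
SELECT * FROM dbo.table WHERE datetime_column > DATEADD(MINUTE, -1440, GETDATE()); -- 2
SELECT * FROM dbo.table WHERE datetime_column > @sd2;                              -- 3

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;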

OTHER TIPS

SQL Server functions that are considered runtime constants are evaluated only once. GETDATE() is such a function, and DATEADD(..., constant, GETDATE()) is also a runtime constant. By leaving the actual function call inside the query you let the optimizer see what value will actually be used (as opposed to a variable value sniff) and then it can adjust its cardinality estimations accordingly, possibly coming up with a better plan.
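As a rough sketch of the difference (using the MY_TABLE and TRANSACTION_DATE_TIME names from the question), compare these two forms: with the variable the optimizer has to guess, while with the inline runtime constant it can see the actual cutoff value and estimate from the column statistics:

DECLARE @cutoff DATETIME;
SET @cutoff = DATEADD(MINUTE, -1440, GETDATE());

-- Variable: the value is not visible to the optimizer at compile time,
-- so the row estimate for the inequality is a generic guess.
SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > @cutoff;

-- Runtime constant: the optimizer can see the value that will actually be used
-- and adjust its cardinality estimate accordingly.
SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > DATEADD(MINUTE, -1440, GETDATE());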

Also read this: Troubleshooting Poor Query Performance: Constant Folding and Expression Evaluation During Cardinality Estimation.

@Martin Smith

You can run this query:

set nocount on;
declare @known int;
-- Baseline: the actual row count of sysobjects.
select @known = count(*) from sysobjects;
declare @cnt int = @known;
-- Repeat the filtered count until it stops matching the baseline.
while @cnt = @known
    select @cnt = count(*) from sysobjects where getdate() = getdate();
select @cnt, @known;

In my case, after 22 seconds it hit the boundary case and the loop exited. The important thing is that the loop exited with @cnt equal to zero. One would expect that if GETDATE() were evaluated per row, we would get a @cnt different from the correct @known count, but not 0. The fact that @cnt is zero when the loop exits shows that each GETDATE() was evaluated once per statement, and that same constant comparison was then applied to every row in the WHERE filter (matching none). I am aware that one positive example does not prove a theorem, but I think the case is conclusive enough.

It will be evaluated just once. You can double-check this by inspecting the execution plan: the Compute Scalar operator shows Estimated Number of Executions = 1.
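For example (a minimal sketch, running the query from the question and capturing the actual plan as XML):

SET STATISTICS XML ON;

DECLARE @START_DATE DATETIME;
SET @START_DATE = GETDATE();

SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > DATEADD(MINUTE, -1440, @START_DATE);

SET STATISTICS XML OFF;

In the returned plan, look at the Compute Scalar operator's properties and confirm that Estimated Number of Executions is 1, as described above.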

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow