Question

I have a stored procedure that uses a view to pull 6 averages. The SQL database is SQL Server 2000. When I run it in the Query analyzer, it takes roughly 9 seconds. What can I do to get better performance? Should I return the rows using LINQ and determine an average that way? Will it be faster?

Here's an example of my current sproc:

create procedure [TestAvg]
(
    @CustomerNumber int
)
as

select
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 7 and CustomerNumber = @CustomerNumber) as P12D7,
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 30 and CustomerNumber = @CustomerNumber) as P12D30,
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 90 and CustomerNumber = @CustomerNumber) as P12D90,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 7 and CustomerNumber = @CustomerNumber) as P16D7,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 30 and CustomerNumber = @CustomerNumber) as P16D30,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 90 and CustomerNumber = @CustomerNumber) as P16D90

Also, let me clarify the view mentioned above. Since this is SQL Server 2000, I cannot use an indexed view because it does use a subquery. I suppose this can be rewritten to use joins. However, the last time we took a query and rewrote it to use joins, data was missing (because the subquery can return a null value which would omit the entire row).


Solution

I would recommend pulling the data into a table variable first, perhaps two table variables: one for ProductID 12 and one for ProductID 16. From these table variables, calculate the averages as required, and then return those from the stored procedure.

DECLARE @OrderDetails12 TABLE(
        DateFulfilled DATETIME,
        OrderTime FLOAT
)

INSERT INTO @OrderDetails12
SELECT  DateFulfilled,
        OrderTime
FROM    OrderDetails
WHERE   ProductID = 12
AND     DateDiff(day, DateFulfilled, GetDate()) <= 90
AND     CustomerNumber = @CustomerNumber

DECLARE @OrderDetails16 TABLE(
        DateFulfilled DATETIME,
        OrderTime FLOAT
)

INSERT INTO @OrderDetails16
SELECT  DateFulfilled,
        OrderTime
FROM    OrderDetails
WHERE   ProductID = 16
AND     DateDiff(day, DateFulfilled, GetDate()) <= 90
AND     CustomerNumber = @CustomerNumber
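
The final step is not shown above; a minimal sketch of computing the six averages from the table variables, assuming the same column aliases as the original procedure. Because each table variable is already filtered to the last 90 days for one customer, the remaining date filters run against a handful of in-memory rows rather than the base table:

SELECT
    (SELECT AVG(OrderTime) FROM @OrderDetails12
     WHERE DATEDIFF(day, DateFulfilled, GETDATE()) <= 7)  AS P12D7,
    (SELECT AVG(OrderTime) FROM @OrderDetails12
     WHERE DATEDIFF(day, DateFulfilled, GETDATE()) <= 30) AS P12D30,
    (SELECT AVG(OrderTime) FROM @OrderDetails12)          AS P12D90,
    (SELECT AVG(OrderTime) FROM @OrderDetails16
     WHERE DATEDIFF(day, DateFulfilled, GETDATE()) <= 7)  AS P16D7,
    (SELECT AVG(OrderTime) FROM @OrderDetails16
     WHERE DATEDIFF(day, DateFulfilled, GETDATE()) <= 30) AS P16D30,
    (SELECT AVG(OrderTime) FROM @OrderDetails16)          AS P16D90

The 90-day columns need no WHERE clause at all, since the INSERTs already applied that cutoff.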

Creating the correct indexes on the table will also help a lot.

OTHER TIPS

What would the amount of data leaving the database server be if it were unaggregated, and how long would that operation take? The difference in the size of the data will tell you whether the calculation time on the server is outweighed by the transfer time plus the local calculation.

Also, look at that DATEDIFF usage and rewrite it so the optimizer can use an index (try DateFulfilled >= SomeCalculatedDate1 instead of DATEDIFF). Applying a function to the column on every row prevents an index from being used. Then review your execution plan to ensure it performs an index seek (best) or index scan (good) instead of a table scan.
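
A sketch of that rewrite, computing the cutoff date once and comparing the column directly (the variable name is illustrative):

DECLARE @From90 DATETIME
SET @From90 = DATEADD(day, -90, GETDATE())

SELECT  AVG(OrderTime)
FROM    OrderDetails
WHERE   ProductID = 12
AND     CustomerNumber = @CustomerNumber
AND     DateFulfilled >= @From90   -- sargable: the bare column can use an index

Unlike DATEDIFF(day, DateFulfilled, GETDATE()) <= 90, this predicate leaves DateFulfilled unwrapped, so an index on it can be seeked.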

Also, ensure there is an index on CustomerNumber, ProductID, DateFulfilled.
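
For example (the index name is hypothetical; SQL Server 2000 has no INCLUDE clause, so OrderTime can be appended as a trailing key column if you want the index to cover the query):

CREATE INDEX IX_OrderDetails_Cust_Prod_Date
ON OrderDetails (CustomerNumber, ProductID, DateFulfilled, OrderTime)

With this key order, the two equality predicates (CustomerNumber, ProductID) come first and the date range last, which is what an index seek on this query needs.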

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow