Question

This is a question I asked on another forum which received some decent answers, but I wanted to see if anyone here has more insight.

The problem is that one of the pages in your web application times out when it hits a stored procedure call, so you use SQL Profiler, or your application's trace logs, to find the query and paste it into Management Studio to figure out why it's running slow. But when you run it from there, it just blazes along, returning in less than a second each time.

My particular case was ASP.NET 2.0 and SQL Server 2005, but I think the problem could apply to any RDBMS.

Solution

This is what I've learned so far from my research.

.NET sends in connection settings that are not the same as the ones you get when you log in to Management Studio. Here is what you see if you sniff the connection with SQL Profiler:

-- network protocol: TCP/IP  
set quoted_identifier off  
set arithabort off  
set numeric_roundabort off  
set ansi_warnings on  
set ansi_padding on  
set ansi_nulls off  
set concat_null_yields_null on  
set cursor_close_on_commit off  
set implicit_transactions off  
set language us_english  
set dateformat mdy  
set datefirst 7  
set transaction isolation level read committed  

I am now pasting those settings in above every query that I run when logged in to SQL Server, to make sure the settings are the same.

For this case, I tried each setting individually, after disconnecting and reconnecting, and found that changing arithabort from off to on reduced the problem query from 90 seconds to 1 second.
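To see the effect for yourself, you can toggle the setting around the call in the same SSMS window. This is only a sketch of what I did; the procedure name and parameter values below are hypothetical stand-ins for the real report query:

set arithabort off
exec dbo.uspSlowReport @StartDate = '20080101', @EndDate = '20081231'  -- ~90 seconds for me

set arithabort on
exec dbo.uspSlowReport @StartDate = '20080101', @EndDate = '20081231'  -- ~1 second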

The most probable explanation is related to parameter sniffing, which is a technique SQL Server uses to pick what it thinks is the most effective query plan. When you change one of the connection settings, the query optimizer might choose a different plan, and in this case it apparently chose a bad one.

But I'm not totally convinced of this. I have tried comparing the actual query plans after changing this setting and I have yet to see the diff show any changes.
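Worth noting: SQL Server caches a separate plan for each combination of SET options, so the application and Management Studio can end up with different plans compiled from different sniffed parameter values. The query below (the DMVs are available from SQL Server 2005 on) lists the cached plans for a procedure along with the set_options value each was compiled under; the name in the LIKE filter is just a hypothetical placeholder:

select cp.usecounts, cp.plan_handle, pa.value as set_options, st.text
from sys.dm_exec_cached_plans as cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) as st
cross apply sys.dm_exec_plan_attributes(cp.plan_handle) as pa
where pa.attribute = 'set_options'
and st.text like '%uspSlowReport%'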

Is there something else about the arithabort setting that might cause a query to run slowly in some cases?

The solution seemed simple: just put SET ARITHABORT ON at the top of the stored procedure. But this could lead to the opposite problem: change the query parameters and suddenly it runs faster with 'off' than with 'on'.
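For reference, this is what that looks like; the procedure, table, and column names are made up:

create procedure dbo.uspSlowReport
    @StartDate datetime,
    @EndDate datetime
as
begin
    set arithabort on  -- force the setting for this procedure's statements
    select OrderId, Total
    from dbo.Orders
    where OrderDate between @StartDate and @EndDate
end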

For the time being I am running the procedure WITH RECOMPILE to make sure the plan gets regenerated each time. That's OK for this particular report, since it takes maybe a second to recompile, and this isn't too noticeable on a report that takes 1-10 seconds to return (it's a monster).
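Concretely, that just means calling it like this (again with a hypothetical name):

exec dbo.uspSlowReport @StartDate = '20080101', @EndDate = '20081231' with recompile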

But that's not an option for other queries that run much more frequently and need to return in just a few milliseconds.

OTHER TIPS

I've had similar problems. Try setting the "WITH RECOMPILE" option on the sproc create to force the system to recompute the execution plan each time it is called. Sometimes the query processor gets confused in complex stored procedures with lots of branching or case statements and just pulls a really sub-optimal execution plan. If that seems to "fix" the problem, you will probably need to verify that statistics are up to date and/or break down the sproc.
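If you go that route, the option becomes part of the procedure definition. A minimal sketch, with made-up names:

create procedure dbo.uspSlowReport
    @StartDate datetime,
    @EndDate datetime
with recompile  -- the plan is recomputed on every call instead of being cached
as
begin
    select OrderId, Total
    from dbo.Orders
    where OrderDate between @StartDate and @EndDate
end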

You can also confirm this by profiling the sproc. When you execute it from SQL Management Studio, how does the IO compare to when you profile it from the ASP.NET application? If they vary a lot, it just reinforces that it's pulling a bad execution plan.
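One low-friction way to get the Management Studio side of that comparison is SET STATISTICS IO (the Reads column in Profiler gives you the application side); the call below is a hypothetical placeholder:

set statistics io on
exec dbo.uspSlowReport @StartDate = '20080101', @EndDate = '20081231'
set statistics io off
-- compare the 'logical reads' reported here with the Reads column Profiler shows for the app's call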

Have you turned on ASP.NET tracing yet? I've had an instance where the problem wasn't the SQL stored procedure itself; it was the fact that the procedure returned 5000 rows and the app was attempting to create databound ListItems from those 5000 items.

You might look into the execution times between the web app functions as well through the trace to help track things down.

Test this out on a staging box first. This changes the default at the server level for SQL Server:

declare @option int
set @option = @@options | 64
exec sp_configure 'user options', @option
RECONFIGURE
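Bit 64 in the user options bitmask corresponds to ARITHABORT. After reconfiguring, you can verify that a new connection picks it up with something like:

select case when (@@options & 64) = 64 then 'arithabort on' else 'arithabort off' end
-- or list every SET option in effect for the current session:
dbcc useroptions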

I had the same problem with SQL Reporting Services. Check the types of your variables: I was sending the wrong type of variable to SQL, such as a varchar where it should have been an integer, or something like that. After I synchronized the variable types between Reporting Services and the stored procedure, the problem was solved.
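A hypothetical illustration of that kind of mismatch: if the column is varchar but the parameter arrives as nvarchar (or as a different type generally), SQL Server has to convert one side of the comparison, which can prevent an index seek. The table and column names here are made up:

declare @AccountNumber nvarchar(20)
set @AccountNumber = N'12345'
select AccountId from dbo.Accounts
where AccountNumber = @AccountNumber   -- column is varchar: implicit conversion, likely a scan

declare @AccountNumber2 varchar(20)
set @AccountNumber2 = '12345'
select AccountId from dbo.Accounts
where AccountNumber = @AccountNumber2  -- types match, an index seek is possible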

You could try using the sp_who2 command to see what the process in question is doing. This will show you whether it's blocked by another process, or using up an excessive amount of CPU and/or IO time.
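For example (sp_who2 takes an optional SPID or 'active' filter):

exec sp_who2            -- check the BlkBy, CPUTime and DiskIO columns for the offending SPID
exec sp_who2 'active'   -- only sessions currently doing work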

We had the same issue and here's what we found out.

Our database log size was left at the default (814 MB) with 10% auto-growth. On the server, maximum server memory was also left at the default setting (2147483647 MB).

When our log got full and needed to grow, it used all of the server's memory and there was nothing left for code to run, so it timed out. What we ended up doing was setting the database log file's initial size to 1 MB and maximum server memory to 2048 MB. This instantly fixed our problem. Of course, you can change these two properties to fit your needs, but this is an idea for anyone running into the timeout issue when executing a stored procedure via code even though it runs super fast in SSMS and the solutions above don't help.
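One way to script those two changes (the database and log file names are hypothetical, and the sizes are just the values we used; we may well have made the same changes through the SSMS UI):

-- cap SQL Server's memory at 2048 MB
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'max server memory (MB)', 2048
reconfigure

-- shrink the transaction log file back down to about 1 MB
use MyAppDb
dbcc shrinkfile (MyAppDb_log, 1)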

Try changing the SelectCommand timeout value:

DataAdapter.SelectCommand.CommandTimeout = 120; // CommandTimeout is in seconds