Question

I cannot find documentation anywhere on what triggers this recompilation reason. We are investigating a sudden query performance drop, and the only explanation I can come up with is that the plan for the parameterized query was recompiled while executing against a small dataset, which skewed the row estimates. We noticed that once this process started taking hours instead of seconds, it was hitting tempdb pretty hard. Statistics on the consumed tables had not changed, and the only other entry in the list of recompilation reasons that makes sense is #12, "Parameterized plan flushed".

The process in question was calling a view and filtering on a single INT column, through Entity Framework. There is only one Entity Key on the class, and it is the PK of the main table in the view. All records are unique.

I am curious if anyone can point me to any documentation out there that explains why a plan might be recompiled due to "Parameterized plan flushed".
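
In case it helps, here is what I plan to use to catch the cause if it happens again: the sql_statement_recompile Extended Event reports a recompile_cause field that maps to this same list of reasons, including "Parameterized plan flushed". A minimal sketch; the session and file names are my own:

    -- Capture each statement recompile together with its recompile_cause.
    CREATE EVENT SESSION [CaptureRecompiles] ON SERVER
    ADD EVENT sqlserver.sql_statement_recompile
    (
        ACTION (sqlserver.sql_text, sqlserver.database_name)
    )
    ADD TARGET package0.event_file (SET filename = N'CaptureRecompiles');
    GO

    ALTER EVENT SESSION [CaptureRecompiles] ON SERVER STATE = START;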

Solution

Query plans are flushed from the cache for several reasons: being aged out, memory pressure, user action (DBCC FREEPROCCACHE, etc.), an instance restart, or explicit recompilation (OPTION (RECOMPILE) or sp_recompile).
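
For reference, the explicit actions are easy to spot in code or deployment scripts; they look like this (a minimal sketch; the view, column, and parameter names are placeholders):

    -- Flush the entire plan cache for the instance (a plan_handle can be
    -- passed as an argument to flush a single plan instead).
    DBCC FREEPROCCACHE;

    -- Mark all plans referencing one object for recompilation on next use.
    EXEC sp_recompile N'dbo.SomeView';   -- placeholder object name

    -- Compile a fresh plan for this single execution, bypassing plan reuse.
    DECLARE @id INT = 42;                -- placeholder parameter
    SELECT *
    FROM dbo.SomeView                    -- placeholder view name
    WHERE SomeIntColumn = @id
    OPTION (RECOMPILE);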

If you can see no evidence of forced recompilation or a manual flush, then most likely the plan was aged out or flushed due to memory pressure. From the Docs, Plan Cache Internals:

Evicting plans from the cache is based on their cost. For ad hoc plans, the cost is considered to be zero, but it is increased by one every time the plan is reused. For other types of plans, the cost is a measure of the resources required to produce the plan; when one of these plans is reused, the cost is reset to the original value. For non-ad-hoc queries, the cost is measured in units called ticks, with a maximum of 31. The cost is based on three factors, each with its own maximum within the 31-tick total:

  • I/O: each I/O costs 1 tick, with a maximum of 19.
  • Context switches: 1 tick each with a maximum of 8.
  • Memory: 1 tick per 16 pages, with a maximum of 4.

When not under memory pressure, costs are not decreased until the total size of all cached plans reaches 50 percent of the buffer pool size; at that point, the next plan access decrements the cost of all plans by one tick. Once memory pressure is encountered, SQL Server starts a dedicated resource monitor thread to decrement the cost of either the plan objects in one particular cache (for local pressure) or all plan cache objects (for global pressure).
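
You can observe these tick costs directly: each cached plan's entry in sys.dm_os_memory_cache_entries exposes original_cost and current_cost. A minimal sketch, assuming VIEW SERVER STATE permission (the pages_kb column exists from SQL Server 2012 onward):

    -- Plans with the lowest current_cost are the first eviction candidates.
    SELECT TOP (20)
           cp.objtype,
           cp.usecounts,
           ce.original_cost,
           ce.current_cost,
           ce.disk_ios_count,
           ce.context_switches_count,
           ce.pages_kb,
           st.[text] AS sql_text
    FROM sys.dm_exec_cached_plans AS cp
    JOIN sys.dm_os_memory_cache_entries AS ce
        ON cp.memory_object_address = ce.memory_object_address
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE ce.[type] IN (N'CACHESTORE_SQLCP', N'CACHESTORE_OBJCP')
    ORDER BY ce.current_cost ASC;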

Take a look at this StackExchange answer, which provides heaps of great links and information on plan reuse and caching. You'll have to put monitoring in place to catch the exact cause if this recurs, but you should also look at leveraging Query Store, which will let you force a known good plan for this query and track regressions across the database, since other queries may be affected as well.
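
As a starting point, enabling Query Store and forcing the last known good plan looks like this (a minimal sketch, assuming SQL Server 2016 or later; YourDb, the LIKE filter, and the query/plan IDs are placeholders):

    -- Enable Query Store for the database.
    ALTER DATABASE YourDb SET QUERY_STORE = ON;
    GO
    USE YourDb;
    GO

    -- Locate the regressed query and its candidate plans.
    SELECT q.query_id, p.plan_id, p.is_forced_plan, qt.query_sql_text
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt
        ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    WHERE qt.query_sql_text LIKE N'%SomeView%';   -- placeholder filter

    -- Pin the known good plan (IDs are illustrative).
    EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;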

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange