Question

The web server has 8 GB of memory and hosts multiple sites. They all use the same framework and should each take 200-350 MB of memory. After a small T-SQL update, one site's memory use rose to 1.9 GB.

At first I thought multiple crawlers like Googlebot had hit the site and a lot of the content was getting cached, so I created a page for viewing the cache. After that I found that by default IIS limits a site's cache to 50% of available memory and evicts items as usage approaches that limit, so the cache could not be the problem!
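
As a side note, a cache-viewing page like that can be sketched with the effective limits that ASP.NET itself exposes; the handler name below is hypothetical:

    using System.Web;

    // Hypothetical diagnostic handler that reports the effective
    // ASP.NET cache limits and the current number of cached items.
    public class CacheInfoHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write(
                "Private bytes limit: " + HttpRuntime.Cache.EffectivePrivateBytesLimit + " bytes\n" +
                "Physical memory limit: " + HttpRuntime.Cache.EffectivePercentagePhysicalMemoryLimit + " %\n" +
                "Items in cache: " + HttpRuntime.Cache.Count);
        }
    }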

I took a full memory dump of the web application and looked at what was happening:

[Screenshot: memory profiler view of the dump, listing object types with instance counts and sizes]

It seems that I have 48,012 DataTable instances (instance size 512 bytes). The data held by them is 917 MB.
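
The same numbers can be pulled out of a dump programmatically. Below is a minimal sketch using the Microsoft.Diagnostics.Runtime (ClrMD) NuGet package; the dump file name is a placeholder:

    using System;
    using Microsoft.Diagnostics.Runtime; // ClrMD NuGet package

    class DumpStats
    {
        static void Main()
        {
            // "w3wp.dmp" is a placeholder for the full memory dump path.
            using (DataTarget target = DataTarget.LoadDump("w3wp.dmp"))
            {
                ClrRuntime runtime = target.ClrVersions[0].CreateRuntime();

                // Count DataTable instances and sum their own (shallow) sizes.
                long count = 0, bytes = 0;
                foreach (ClrObject obj in runtime.Heap.EnumerateObjects())
                {
                    if (obj.Type?.Name == "System.Data.DataTable")
                    {
                        count++;
                        bytes += (long)obj.Size;
                    }
                }
                Console.WriteLine(count + " DataTable instances, " + (bytes / (1024 * 1024)) + " MB");
            }
        }
    }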

Before you answer, do read "Should I Dispose() DataSet and DataTable?" and its top answer.

Can someone explain what is happening?

Edit 1:

A large chunk of the memory used sits in Generation 2.

[Screenshot: heap breakdown by GC generation]

Generation 0. This is the youngest generation and contains short-lived objects. An example of a short-lived object is a temporary variable. Garbage collection occurs most frequently in this generation. Newly allocated objects form a new generation of objects and are implicitly generation 0 collections, unless they are large objects, in which case they go on the large object heap in a generation 2 collection. Most objects are reclaimed for garbage collection in generation 0 and do not survive to the next generation.

Generation 1. This generation contains short-lived objects and serves as a buffer between short-lived objects and long-lived objects.

Generation 2. This generation contains long-lived objects. An example of a long-lived object is an object in a server application that contains static data that is live for the duration of the process.


Reference: Fundamentals of Garbage Collection (Microsoft Docs)
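
To see that promotion in action, here is a minimal console sketch: an object that stays referenced survives successive collections and climbs from generation 0 to generation 2:

    using System;
    using System.Data;

    class GcGenerations
    {
        static void Main()
        {
            DataTable table = new DataTable();
            Console.WriteLine(GC.GetGeneration(table)); // 0: freshly allocated

            GC.Collect(); // the table survives, it is still referenced below
            Console.WriteLine(GC.GetGeneration(table)); // 1

            GC.Collect(); // survives again
            Console.WriteLine(GC.GetGeneration(table)); // 2: now "long-lived"
        }
    }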

Edit 2:

After digging into the memory dump, I found that most of the data is trace data. Then I found out that someone had turned on tracing in web.config with the verbosity set to Verbose:

<tracing>
    <traceFailedRequests>
        <add path="*">
            <traceAreas>
                <add provider="ASP" verbosity="Verbose" />
                <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
                <add provider="ISAPI Extension" verbosity="Verbose" />
                <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,FastCGI,Rewrite" verbosity="Verbose" />
            </traceAreas>
            <failureDefinitions statusCodes="404" />
        </add>
    </traceFailedRequests>
</tracing>

Usually trace data can be viewed on the "*/trace.axd" page and holds only about 20 request entries, but in my case the limit was set to 4000.
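
For context, with page tracing enabled, every request's trace output is buffered in the worker process until that request limit is reached. A hypothetical page shows where such entries come from:

    using System;
    using System.Web.UI;

    // Hypothetical page: with <trace enabled="true"> in web.config, each
    // Trace.Write goes into the in-memory log that trace.axd displays.
    // With requestLimit="4000", trace data for up to 4000 requests is
    // kept alive in the worker process.
    public class ProductPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Trace.Write("ProductPage", "Loading product list");
        }
    }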

Solution

There was a link on the "*/trace.axd" page with which I could clear the current trace. After clearing it, memory use dropped to 600 MB.

  <trace 
      enabled="true" 
      pageOutput="false" 
      requestLimit="4000" 
      localOnly="false" />

It seems clear that the problem was trace info landing on the generation 2 heap: the server keeps the trace data of up to requestLimit (here 4000) requests alive for the life of the application, so those objects survive collections and get promoted. Lowering requestLimit, or setting enabled="false", keeps this from happening again.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow