IIS Memory management & thresholds with potential leaks

We're running some x64 Windows Server 2012 web servers (so IIS 8).

We're noticing that free memory on the boxes is constantly sitting at 5-10%. We run quite a lot of applications on these boxes (13 sites, 80 apps over 13 app pools). Most of the code is duplicated across sites: each site corresponds to a different database and physical location, but the application itself is the same.

We're pretty confident that we have a memory leak in the application, as the memory just keeps growing, so we're looking at that straight off the bat. But something I'm confused about is how IIS allocates and manages memory, and whether it's any different for IIS 8 or x64 servers (we only moved to x64 recently).

So basically each of our web servers had 6 GB of memory and would sit at 5-10% free. The top application, which we're sure is leaking, was using a whopping 1.2 GB of memory. The next was about 800 MB, and the rest averaged about 400-500 MB (all of these values are private memory, as seen in Task Manager). As I said, the code is duplicated, so if there is a leak in one site it will be in all of them; it's just that different physical locations can have some features switched on or off, which explains the big discrepancy.

While we work out the problem, we decided to just up the memory so we don't run into issues. So last night I brought down each server and doubled the memory to 12 GB. This morning the three servers are sitting at 77%, 80% and 82% used memory, and all of the processes have increased their memory usage by 1.5-2 times.

So now I'm confused. Is it really a memory leak? Or is there some sort of memory pre-allocation? Or does it never release memory unless another process requests it a la SQL Server or what?

What was keeping the memory levels in check at 6 GB if they suddenly grow so much when the memory doubles? Are there thresholds that are set? Does IIS/ASP.NET simply not garbage collect until memory is low, or what?

Any answers are appreciated.


Solution 1:

Don't worry! You might just be over-caching!

The default Output Caching configuration in IIS enables both kernel-mode and user-mode caching.

Kernel-mode caching is handled by the native HTTP driver (http.sys) and is lightning fast, but it can only serve content that is "public", since it needs to be able to respond to a cache hit before the request ever reaches the web application.
Unfortunately, this means that a number of request types cannot be cached in kernel mode, including authenticated requests (such as visits to websites requiring a login).

The type of setup you describe sounds like some sort of multi-tenancy client service, leading me to believe that kernel-mode caching is out of the picture.
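If you want to confirm what (if anything) http.sys is actually holding, you can dump its response cache from an elevated command prompt; note that anything cached there lives in kernel memory rather than in the w3wp.exe private bytes you're measuring:

    netsh http show cachestate

An empty listing confirms that kernel-mode caching isn't a factor in the growth you're seeing.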

User-mode caching, on the other hand, is managed at the application level, and cached objects are stored in the serving worker process's memory. The total cache size is governed by an attribute called maxCacheSize on the system.webServer/caching configuration element.

By default, the maxCacheSize attribute is set to 0, which roughly translates to "let IIS allocate as much memory as is currently permissible".

If you take a lot of small (<256 KB) hits in quick succession but have a low URI cache hit ratio, IIS can happily eat all the memory you give it.

You can easily test whether this is the case by either lowering the maxCacheSize value or disabling output caching on the server entirely.
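As a rough sketch (the 256 MB figure is purely illustrative, not a recommendation), capping the user-mode output cache in applicationHost.config, or in a site-level web.config, looks something like this:

    <!-- system.webServer/caching: maxCacheSize is in MB; the default of 0
         lets IIS size the cache based on available memory. Responses larger
         than maxResponseSize (default 262144 bytes, i.e. 256 KB) are never
         cached in user mode. -->
    <system.webServer>
      <caching enabled="true" enableKernelCache="true" maxCacheSize="256" />
    </system.webServer>

To rule caching out entirely for a test, set enabled="false" (and enableKernelCache="false" if you also want to bypass http.sys), then watch whether the worker processes' private bytes stop climbing.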


If you're still convinced that you have a memory problem in your application, fire up Performance Monitor and take a look at the ".NET CLR Memory" performance counter object.
Select the "# Gen 0/1/2 Collections" counters and see how garbage collection is distributed across the generations.

Gen 0 collections should account for almost all of the collections, whereas a large number of Gen 1 and especially Gen 2 collections can indicate longer-than-necessary object lifetimes - a common symptom of a managed memory leak.
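A quick way to capture those numbers without the Perfmon UI is typeperf against the ".NET CLR Memory" counters (shown here for the _Global_ instance as a sketch; in practice you'd pick the individual w3wp instances):

    typeperf "\.NET CLR Memory(_Global_)\# Gen 0 Collections" ^
             "\.NET CLR Memory(_Global_)\# Gen 1 Collections" ^
             "\.NET CLR Memory(_Global_)\# Gen 2 Collections" ^
             "\.NET CLR Memory(_Global_)\# Bytes in all Heaps" -si 15

If "# Bytes in all Heaps" keeps climbing across many Gen 2 collections, objects are surviving collection and you most likely do have a managed leak; if it rises and then flattens out once memory pressure appears, you're probably just watching the CLR and the cache use the extra headroom you gave them.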