I am monitoring the Memory object in Windows 2k8 and tracking the Page Faults/sec counter. Is there a threshold for determining what counts as an excessive number of page faults, or should I be more concerned with a sustained, high rate of page faults?

Is there a better way to look at page faults?


This is a good question, because getting a clear read on memory issues through performance monitoring is difficult.

First off, when looking at Page Faults/sec, keep in mind that this counter includes soft faults, hard faults and file cache faults. For the most part, you can ignore soft faults (pages resolved from elsewhere in memory, with no disk access) and cache faults (reading files into memory), as they have limited performance impact in most situations.

The real counter to watch for memory shortages is hard faults, which you can track via Memory: Page Reads/sec. A hard fault means process execution is interrupted so that a page can be read from disk (usually it means hitting the page file). I would consider any sustained level of hard faults to be indicative of a memory shortage.
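If you want to keep an eye on this without staring at Perfmon, here is a rough sketch that shells out to typeperf (built into Windows) from Python and averages Page Reads/sec over a short window. The 30-second window and the 200 reads/sec threshold are illustrative assumptions, not official guidance; compare against a baseline taken when the server is healthy.

    # Rough sketch: shell out to typeperf and average the hard-fault counter
    # over a short window. The window and threshold below are assumptions for
    # illustration, not official guidance.
    import csv
    import io
    import subprocess

    COUNTER = r"\Memory\Page Reads/sec"
    SAMPLES = 30         # one sample per second for 30 seconds (assumed window)
    THRESHOLD = 200.0    # assumed "worth investigating" level, not a hard rule

    out = subprocess.run(
        ["typeperf", COUNTER, "-si", "1", "-sc", str(SAMPLES)],
        capture_output=True, text=True, check=True,
    ).stdout

    readings = []
    for row in csv.reader(io.StringIO(out)):
        if len(row) < 2:
            continue                    # status lines printed by typeperf
        try:
            readings.append(float(row[1]))
        except ValueError:
            continue                    # the CSV header row

    if not readings:
        raise SystemExit("typeperf returned no samples")

    avg = sum(readings) / len(readings)
    print(f"Avg {COUNTER}: {avg:.1f}/sec, peak {max(readings):.1f}/sec")
    if avg > THRESHOLD:
        print("Sustained hard faults - this looks like a memory shortage.")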

As you go further down the rabbit hole, you can also compare disk queue lengths to hard faults to see whether those disk reads are hurting disk performance as well. To get a picture here, look at Physical Disk: Avg. Disk Queue Length. If this number is consistently greater than the number of spindles in your array, you have a disk problem. However, if it only spikes while hard page faults are occurring, the problem is memory capacity, not disk performance.
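To see whether the two line up, you can sample both counters in the same run. The sketch below assumes the volume sits on 4 spindles and treats 100 page reads/sec as "elevated"; both numbers are placeholders to replace with your own hardware and baseline.

    # Sample both counters in one typeperf run and check whether queue-length
    # spikes coincide with hard faults. SPINDLES and HARD_FAULT_LEVEL are
    # placeholder assumptions.
    import csv
    import io
    import subprocess

    COUNTERS = [r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
                r"\Memory\Page Reads/sec"]
    SPINDLES = 4              # assumed number of disks behind the volume
    HARD_FAULT_LEVEL = 100.0  # assumed "elevated" page-read rate
    SAMPLES = 60

    out = subprocess.run(
        ["typeperf", *COUNTERS, "-si", "1", "-sc", str(SAMPLES)],
        capture_output=True, text=True, check=True,
    ).stdout

    pairs = []                            # (queue length, page reads/sec)
    for row in csv.reader(io.StringIO(out)):
        if len(row) < 3:
            continue
        try:
            pairs.append((float(row[1]), float(row[2])))
        except ValueError:
            continue                      # header row / status messages

    spikes = [(q, r) for q, r in pairs if q > SPINDLES]
    if not spikes:
        print("Queue length stayed at or below the spindle count - disk looks fine.")
    elif all(r > HARD_FAULT_LEVEL for _, r in spikes):
        print("Queue only exceeds the spindle count while page reads are high: "
              "suspect memory capacity rather than the disks.")
    else:
        print("Queue exceeds the spindle count even without heavy page reads: "
              "look at disk performance itself.")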


Page Faults/sec is a relative counter, so you need to compare it to memory utilization and disk I/O, among other things. Even a sustained, high number of page faults might not be indicative of a performance problem in and of itself, as it simply means that the requested page wasn't in memory. Take a look at this overview of the PAL tool for basic Windows performance analysis.
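If you want to capture those counters together for later analysis with something like PAL, one option is to script a counter log with logman (also built into Windows). The counter list, 15-second interval, collector name and output path below are arbitrary choices for illustration.

    # Sketch of scripting a perfmon counter log with logman; the resulting CSV
    # can then be loaded into an analysis tool such as PAL. Counter list,
    # interval, collector name and output path are illustrative only.
    import subprocess

    counters = [
        r"\Memory\Page Faults/sec",
        r"\Memory\Page Reads/sec",
        r"\Memory\Available MBytes",
        r"\PhysicalDisk(*)\Avg. Disk Queue Length",
    ]

    # Create a counter collector that samples every 15 seconds into a CSV log.
    subprocess.run(
        ["logman", "create", "counter", "MemBaseline",
         "-c", *counters,
         "-si", "15", "-f", "csv", "-o", r"C:\PerfLogs\MemBaseline"],
        check=True,
    )
    subprocess.run(["logman", "start", "MemBaseline"], check=True)

    # Let it run across a representative workload, then stop it:
    # subprocess.run(["logman", "stop", "MemBaseline"], check=True)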