I have quite a few servers deployed around the world. They are running Windows Server 2003 x64 with SQL Server 2005 x64 and 6 GB of RAM. The boxes do not have the best (or even an acceptable) configuration, because the guy who ordered them years ago didn't really know what he was doing.

The boxes are fairly consistently running out of memory, ending up using the paging file, and everything slows down. Typically the commit charge is 5.8 GB, and when someone needs to do something intensive (e.g. run a report), that number goes through the roof.

I've been trying to get the powers that be to order more memory, but I am getting massive opposition (e.g. make the software more performant, it costs too much for all these servers, prove that the box does not have enough memory, etc.).

Are there guidelines (or a formula) for how much RAM a box needs that I can present to non-techies, so that we can finally order more memory?


Solution 1:

There's not really any easy way to tell, because it depends entirely on your usage and the application. You're maxing out a database server... how big is the database? What are your transaction stats?
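If you want hard numbers to answer those questions with, you can pull them from the standard SQL Server DMVs rather than guessing. Here's a rough sketch, assuming Python with the pyodbc package is available somewhere that can reach the instance; the connection string is a placeholder you'd adjust for your environment:

```python
# Sketch: pull basic sizing/load numbers from SQL Server DMVs (SQL Server 2005+).
# Assumes pyodbc is installed; the connection string below is hypothetical.
import time
import pyodbc

CONN_STR = "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes"  # placeholder

def snapshot(cursor):
    # Total data + log file size per database, in MB (size is stored in 8 KB pages).
    cursor.execute("""
        SELECT DB_NAME(database_id) AS db, SUM(size) * 8 / 1024 AS size_mb
        FROM sys.master_files
        GROUP BY database_id
        ORDER BY size_mb DESC
    """)
    sizes = cursor.fetchall()

    # Batch Requests/sec is a cumulative counter; sample it twice to get a rate.
    cursor.execute("""
        SELECT cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Batch Requests/sec'
    """)
    batches = cursor.fetchone()[0]
    return sizes, batches

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    sizes, b1 = snapshot(cur)
    time.sleep(10)
    _, b2 = snapshot(cur)

    for db, size_mb in sizes:
        print(f"{db}: {size_mb} MB")
    print(f"Batch requests/sec over the last 10s: {(b2 - b1) / 10:.1f}")
```

Database size versus installed RAM, plus the request rate, gives the non-techies something concrete to compare against.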

The real-world limitations are obvious in your scenario. You run for a while on 6 GB without problems, then it's swapping and thrashing. Thus, 6 GB isn't enough.

If performance is bad enough that it impacts the business, then your higher-ups should be hearing enough complaints to make adding memory the prudent choice. Figure out what your time costs, then figure out how much it will cost to "tune" the server and troubleshoot that tuning, when adding memory may very well solve the issue for the cost of the memory and less than half an hour of downtime.

You won't know the exact amount of memory you need until you actually deploy under your real-life usage and work from there.

That said, you might want to verify that memory is truly the bottleneck. Run the Windows Performance Monitor to see your disk I/O statistics and network throughput. Check your fragmentation level as well (Google is a good friend here). You could also try auditing the code for obvious issues where a query is massively inefficient (Google again).
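If Perfmon feels too manual for a quick before/after comparison, a small script can sample the same OS-level counters. This is just a sketch, assuming Python with the psutil package is installed on the box (the 30-second window is an arbitrary choice); it reports disk and network throughput rather than anything SQL-specific:

```python
# Sketch: sample OS-level disk and network throughput over a short window.
# Assumes the psutil package is installed; counters are cumulative, so we
# take two snapshots and compute the difference.
import time
import psutil

INTERVAL = 30  # seconds; hypothetical sampling window

d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()
time.sleep(INTERVAL)
d2, n2 = psutil.disk_io_counters(), psutil.net_io_counters()

print(f"Disk read:  {(d2.read_bytes  - d1.read_bytes)  / INTERVAL / 1024:.0f} KB/s")
print(f"Disk write: {(d2.write_bytes - d1.write_bytes) / INTERVAL / 1024:.0f} KB/s")
print(f"Net sent:   {(n2.bytes_sent  - n1.bytes_sent)  / INTERVAL / 1024:.0f} KB/s")
print(f"Net recv:   {(n2.bytes_recv  - n1.bytes_recv)  / INTERVAL / 1024:.0f} KB/s")
```

Heavy sustained disk reads while reports are running is exactly the signature of a buffer pool that's too small.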

But again, it all depends on how badly this is impacting the business. Is it worth more to invest in the tuning, or is it bad enough to throw hardware at it first and then try tuning it?

Solution 2:

An easy way to see whether you need more RAM is to chart the Page Life Expectancy perfmon counter. This counter tells you how long SQL Server expects to keep data in the buffer pool before it needs to make room for other data. You want this number as high as possible. With 6 GB of RAM installed (you should probably have SQL Server's max server memory capped at around 4 GB), you'll likely only be keeping data in memory for a few minutes at most, and when someone runs a large report you'll see this number tank down to a few seconds. The more RAM you have, the longer data can be kept in memory, and the less reading from disk needs to be done.
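You can also read that same counter straight from SQL Server instead of Perfmon, via the sys.dm_os_performance_counters DMV. A minimal sketch, assuming Python with pyodbc and a placeholder connection string for your instance:

```python
# Sketch: read Page Life Expectancy (seconds) from SQL Server's own counter DMV.
# Assumes pyodbc is installed; the connection string below is a placeholder.
import pyodbc

CONN_STR = "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes"

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    cur.execute("""
        SELECT cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Page life expectancy'
          AND object_name LIKE '%Buffer Manager%'
    """)
    ple_seconds = cur.fetchone()[0]
    print(f"Page Life Expectancy: {ple_seconds} seconds")
```

Logging that value every few minutes and graphing it over a business day makes a very persuasive chart for non-techies: every dip is a point where the server had to dump cached data and go back to disk.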

For example, the systems I'm working with at the moment have 256 GB of RAM, and we keep data in memory for about 12,000 seconds or so.

Please don't ask for a target number to hit; you just want the number as high as possible. Without knowing a LOT more about your systems, I can't give you a good number to shoot for.