Heavy Apache memory usage

Recently I've noticed that the httpd processes have started to consume massive amounts of memory; after a while they eat up almost all of the server's 2GB of RAM, leaving no memory for anything else. Here's what top tells me:

  PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
26409 apache   15  0  276m 152m  28m S    0  7.4  0:59.12 httpd
26408 apache   15  0  278m 151m  28m S    0  7.4  1:03.80 httpd
26410 apache   15  0  277m 149m  26m S    0  7.3  0:57.22 httpd
26405 apache   15  0  276m 148m  25m S    0  7.3  0:59.20 httpd
26411 apache   16  0  276m 146m  23m S    0  7.2  1:09.18 httpd
17549 apache   15  0  276m 144m  23m S    0  7.0  0:36.34 httpd
22095 apache   15  0  276m 136m  14m S    0  6.6  0:30.56 httpd

It seems to me that each httpd process does not free its memory after handling a request, so they all sit at ~270MB, which is BAD. Is there a way for me to find out where all the memory goes and why it stays allocated? I haven't done any server tweaking lately, so I'm sure it wasn't me who messed something up (I haven't had this problem before).

The server is used to serve PHP apps.

EDIT: Apache is configured with the prefork MPM, and MaxRequestsPerChild is set to 4000.


Solution 1:

The quick solution is to set MaxRequestsPerChild to some reasonable number (for example, 10000) so that Apache recycles each worker process after it has served that many requests. Whatever memory a worker has accumulated is released when it is restarted.
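As a sketch, a prefork section along these lines bounds worker lifetimes (the values are illustrative, not tuned recommendations; note that it's MaxClients which caps total memory, since worst-case usage is roughly MaxClients times the per-process footprint):

<IfModule prefork.c>
    # Illustrative values only - size MaxClients so that
    # MaxClients x (memory per process) fits in your RAM
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           20
    MaxRequestsPerChild 10000
</IfModule>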

The 276m number isn't how much memory each process is actually using, though. An explanation of the values shown in 'top' helps here:

VIRT: Virtual Image (kb) The total amount of virtual memory used by the task. It includes all code, data and shared libraries, plus pages that have been swapped out. (If you are using APC, the memory it uses is also included in this value.)

RES: Resident size (kb) The non-swapped physical memory a task has used.

SHR: Shared Mem size (kb) The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.

In 'top' you can also add a DATA column (press 'f' to choose which fields are shown). Data: Data+Stack size (kb) The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.

That 'Data' value more closely matches the memory unique to that particular process, which is probably not all that much. Adding up the 276m figures and getting a number near 2GB means you're double-counting a lot of things: the shared pages only exist once in RAM.
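If you want a per-process figure that accounts for sharing, one approach - a sketch, assuming a kernel that exposes Pss in /proc/<pid>/smaps and root privileges to read it - is to sum each process's proportional set size, which divides each shared page evenly among the processes mapping it:

# Sum Pss (proportional set size) for each httpd process; unlike
# RES, Pss values can be added together without double-counting.
for pid in $(pgrep httpd); do
    awk -v pid="$pid" '/^Pss:/ {sum += $2}
        END {printf "%s: %d kB\n", pid, sum}' "/proc/$pid/smaps"
done

Summing those figures gives a far more realistic total than summing the RES column.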

Solution 2:

Please post a full screenshot from top rather than just the httpd processes (filter it by the apache user if you want).

The Mem and Swap lines at the top of the output show a lot of useful information here. For example, the following is from one of my systems:

Mem:  16415160k total, 16360604k used,    54556k free,   173948k buffers
Swap: 16779768k total,    28700k used, 16751068k free,  5006768k cached

Looks like all the memory's in use, doesn't it - all 16GB!?!

Actually, that's a good thing, because as you can see the system is using practically no swap space, and 5GB of memory is in use as cache.

What happens on Linux is that if there's any free memory available, the kernel will allocate it to the filesystem cache and I/O buffering. This lets the system serve filesystem data from memory rather than reading from disk each time. If a process needs memory, the cache shrinks a little and that memory is allocated to the process instead.
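So when judging how much memory is genuinely available, count buffers and cache as reclaimable. A minimal sketch, assuming a /proc/meminfo without the newer MemAvailable field (if your kernel has it, just read that instead):

# Rough "really free" figure: free memory + buffers + page cache,
# all of which the kernel can give back to processes on demand
awk '/^MemFree:|^Buffers:|^Cached:/ {sum += $2}
     END {printf "roughly %d kB available\n", sum}' /proc/meminfo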