Apache reaching MaxClients and locking the server
HA! I finally found the problem myself. It's more related to programming than to server administration, but I decided to put the answer here anyway, because a Google search showed I'm not the only one with this kind of problem (and since Apache hangs, the first guess is that something is wrong with the server).
The issue is not with Apache, but with my WordPress install. More specifically, with my theme. I'm using a theme called Lightworld, and it supports adding an image to the blog header. To allow that, it checks the image size with PHP's getimagesize() function. Since this function was opening another HTTP connection to the same server to fetch the image, each request from ab was creating a second request internally from PHP. As I was already using all of the server's available slots, these internal PHP requests were put in the queue, but Apache could never get to them, because all its processes were tied up by the original requests, each waiting for a free slot so its internal PHP request could complete.
Basically, PHP was putting my server into a deadlock state, and Apache would only start working normally after these connections timed out waiting for their "child" request.
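For illustration, the pattern was roughly the following (the URL and path here are made up, not the actual theme code). Passing a URL to getimagesize() makes the web server request the image from itself, while a local filesystem path avoids the extra HTTP connection entirely:

    <?php
    // Problematic: getimagesize() on a URL opens a second HTTP request back
    // to this same server, tying up another Apache slot for every page view.
    $size = getimagesize('http://www.example.com/wp-content/uploads/header.jpg');

    // Better: read the image straight from disk, no extra connection needed.
    // (ABSPATH is WordPress's base directory constant.)
    $size = getimagesize(ABSPATH . 'wp-content/uploads/header.jpg');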
After removing this function from my theme, I can now run ab against my server with as many concurrent connections as I want, and Apache queues them as expected.
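For reference, the kind of load test I'm talking about is just something like this (the URL and the numbers are only an example):

    # 500 requests, 30 at a time
    ab -n 500 -c 30 http://www.example.com/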
What is happening here is that you have 25 threads able to accept connections and you are sending 26 concurrent requests. That last request sits in the socket's listen queue, and how long it can wait there depends on the size of your backlog.
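To make that concrete, here is an illustrative prefork setup with 25 workers (the values are examples, not a recommendation). Anything beyond MaxClients waits in the listen queue, whose depth is controlled by ListenBacklog:

    # prefork MPM: at most 25 requests are served in parallel
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           25
        MaxRequestsPerChild   0
    </IfModule>

    # excess connections sit in the kernel's listen queue
    ListenBacklog 511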
The second problem is that whatever you're running that takes 2-3 seconds is slow enough that the 25 concurrent connections pile up on it. A plain sleep(1) might behave fine, but with something that does file locking, or table locking in MySQL, each parallel request may be waiting for the prior one to complete until they hit the 45-second timeout.
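A minimal sketch of that serialisation effect (a hypothetical test page, not your code): every request grabs the same exclusive lock, so with 25+ concurrent clients the last ones in line can wait long enough to trip the timeout:

    <?php
    // Each request takes an exclusive lock on the same file before "working",
    // so concurrent requests are effectively served one at a time.
    $fp = fopen('/tmp/lock-demo', 'c');
    flock($fp, LOCK_EX);   // blocks until the previous request releases it
    sleep(1);              // simulate the slow part of the page
    flock($fp, LOCK_UN);
    fclose($fp);
    echo "done\n";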
23 MB sounds small for an Apache process with mod_php and any modules loaded, so I suspect you might be seeing those Apache processes take a bit more RAM once your application is actually running. You can't really do the MaxClients-times-memory math like that... it will be somewhat close, but you never know.
www-data 1495 0.1 0.9 56288 19996 ? S 15:48 0:01 /usr/sbin/apache2 -k start
www-data 1500 0.0 0.5 49684 12436 ? D 15:48 0:00 /usr/sbin/apache2 -k start
That's one machine, with 56 MB and 49 MB processes (virtual size).
Another machine:
www-data 7767 0.1 0.1 213732 14840 ? S 14:55 0:08 /usr/sbin/apache2 -k start
www-data 8020 0.2 0.1 212424 13660 ? S 14:57 0:08 /usr/sbin/apache2 -k start
Another machine:
www-data 28509 0.8 0.1 161720 10068 ? S 14:39 0:43 /usr/sbin/apache2 -k start
www-data 28511 0.8 0.1 161932 10344 ? S 14:39 0:43 /usr/sbin/apache2 -k start
So, memory use is very dependent on the task, which modules are loaded, and so on. On the last two, I believe we've disabled pdo and pdo_mysql, since that application doesn't use them.
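To put rough numbers on it anyway: if you had, say, 1 GB to spare for Apache and each child sat around 20 MB resident (about what the first machine above shows), the naive figure would be 1024 / 20 ≈ 50 for MaxClients, but children share pages and grow as requests are handled, so treat that only as a starting point.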
The real question is: what are you doing that takes 3 seconds? In today's world, that is an eternity and makes for a 'blocking' application. Apache won't normally die, but it will leave those requests in the backlog queue until it can service them or the waiting requests time out. I believe your application is probably what's causing Apache to time out. Try it against a page containing just phpinfo(); and see if the results are the same.
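Something like this is all that test page needs (the file name is arbitrary); if it stays fast under the same ab run, the delay is in your application rather than in Apache or PHP:

    <?php
    // info.php - bare-bones page for load testing the stack itself
    phpinfo();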