Apache performance degrades dramatically above ~256 simultaneous requests
What I would do in this situation is run

    strace -f -p <PID> -tt -T -s 500 -o trace.txt

on one of your Apache processes during the `ab` test until you capture one of the slow responses. Then have a look through `trace.txt`.
The `-tt` and `-T` options give you the start timestamp and the duration of each system call, to help identify the slow ones.
You might find a single slow system call such as `open()` or `stat()`, or you might find a quick call with (possibly multiple) `poll()` calls directly after it. If you find one that's operating on a file or network connection (quite likely), look backwards through the trace until you find that file or connection handle. The earlier calls on that same handle should give you an idea of what the `poll()` was waiting for.
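Since `-T` appends the duration in angle brackets at the end of each line, you can pull the slow calls out mechanically rather than scanning by eye. A sketch (the 0.1s threshold is an arbitrary starting point):

    # Print every system call that took longer than 0.1 seconds.
    # Splitting on < and > makes the duration the second-to-last field;
    # the NF check skips lines with no <duration> suffix (signals, exits).
    awk -F'[<>]' 'NF > 2 && $(NF-1) + 0 > 0.1' trace.txt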
Good idea looking at the `-c` option. Did you ensure that the Apache child you were tracing served at least one of the slow requests during that time? (I'm not even sure how you would do this apart from running `strace` simultaneously on all children.)
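For what it's worth, a brute-force version of that would look something like the following (assuming your children show up under the process name `apache2`; on Red Hat-style systems it's usually `httpd`):

    # Attach one strace to every Apache child, writing one trace file per PID.
    for pid in $(pgrep apache2); do
        strace -f -p "$pid" -tt -T -s 500 -o "trace.$pid.txt" &
    done
    wait    # Ctrl-C once the ab run has finished

Start it just before the `ab` run and stop it afterwards; you can then search the per-child trace files for the slow requests.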
Unfortunately, `strace` doesn't give us the complete picture of what a running program is doing. It only tracks system calls. A lot can happen inside a program that doesn't require asking the kernel for anything. To figure out if this is happening, you can look at the timestamps of the start of each system call. If you see significant gaps, that's where the time is going. This isn't easily greppable, and there are always small gaps between the system calls anyway.
Since you said the CPU usage stays low, it's probably not excessive work happening between system calls, but it's worth checking.
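If you'd rather not eyeball the timestamps, a small awk script can flag the gaps for you. A sketch, assuming the `-f -o` trace format above where each line starts with a PID and then an `HH:MM:SS.microseconds` timestamp:

    # Report gaps larger than 0.1s between consecutive system call start times.
    # Note: the gap includes the duration of the previous call itself, so
    # compare the reported gap against the <...> duration on the line before.
    awk '{
        split($2, t, /[:.]/)
        now = t[1]*3600 + t[2]*60 + t[3] + t[4]/1000000
        if (prev && now - prev > 0.1)
            printf "%.3fs gap before: %s\n", now - prev, $0
        prev = now
    }' trace.txt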
Looking more closely at the output from `ab`:

The sudden jump in the response times (it looks like there are no response times anywhere between 150ms and 3000ms) suggests that there is a specific timeout happening somewhere that gets triggered above around 256 simultaneous connections. A smoother degradation would be expected if you were running out of RAM, CPU cycles, or ordinary I/O capacity.
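One cheap server-side check that fits this hard-cliff pattern: see whether the kernel is dropping connections off a full listen queue. On Linux, `netstat -s` keeps counters for this (the exact wording varies between kernel versions):

    # Non-zero "listen queue of a socket overflowed" or "SYNs to LISTEN
    # sockets dropped" counters would point at the accept backlog.
    netstat -s | grep -i listen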
Secondly, the slow `ab` responses show that the 3000ms were spent in the `connect` phase. Nearly all of them took around 30ms, but 5% took 3000ms. This suggests that the network is the problem. (A SYN packet that gets dropped and only retransmitted after the classic 3-second initial TCP timeout would produce exactly this all-or-nothing pattern.)
Where are you running `ab` from? Can you try it from the same network as the Apache machine?
For more data, try running `tcpdump` at both ends of the connection (preferably with `ntp` running at both ends so you can sync the two captures up) and look for any TCP retransmissions. Wireshark is particularly good for analysing the dumps because it highlights TCP retransmissions in a different colour, making them easy to find.
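Something like this on each machine would do (the interface name and port are assumptions; adjust them to your setup):

    # Capture full packets (-s 0) on port 80 and save them for Wireshark.
    tcpdump -i eth0 -s 0 -w capture.pcap port 80

In Wireshark, the display filter `tcp.analysis.retransmission` narrows the capture down to just the retransmitted segments.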
It might also be worth looking at the logs of any network devices you have access to. I recently ran into a problem with one of our firewalls where it could handle the bandwidth in terms of kb/s but it couldn't handle the number of packets per second it was receiving. It topped out at 140,000 packets per second. Some quick maths on your `ab` run leads me to believe you would have been seeing around 13,000 packets per second (ignoring the 5% of slow requests). Maybe this is the bottleneck you have reached. The fact that this happens around 256 might be purely a coincidence.
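For reference, the kind of back-of-envelope calculation I mean (the per-request figures here are assumptions; substitute the request rate from your own `ab` output):

    # ~1,300 requests/sec * ~10 packets per request/response exchange
    # (3-packet handshake, request + ACK, response + ACK, 4-packet teardown)
    echo $((1300 * 10))    # => 13000 packets/sec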