Which php5-fpm settings for a high number of concurrent connections + nginx

Solution 1:

I think you are probably running far too many concurrent PHP processes, though it's hard to know without more information on where your resource bottlenecks are. Most likely you are constrained by disk I/O and/or CPU, and all your parallel PHP processes are competing for those and slowing each other down. At some point the overhead of context switching becomes significant, and you get less throughput, not more, from running lots of processes. You may also be running out of RAM and swapping, or be at risk of it, which is very bad. Trust nginx to queue requests up: you'll get higher overall throughput from fewer, faster requests running at once.

I'd generally go for anything from 5 to 50 PHP processes, with both ends of that range being a little exceptional; 10-15 is more typical. With a very high-performance disk system and more than the usual 16 or so cores it might make sense to run more processes, but that's usually a false economy compared to running a larger number of cheaper servers. In my experience, unless you have a lot of really badly written code, there's little benefit to having more than about 15 PHP processes in parallel on a single server, and where there is a benefit it's likely to be stability rather than throughput: extra headroom so that pathologically long-running requests can't pile up and leave no spare processes available.
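As a concrete starting point, a pool capped along those lines looks something like this. This is a minimal sketch, assuming the standard php5-fpm pool syntax; the socket path and numbers are illustrative, not a prescription for your workload:

    ; /etc/php5/fpm/pool.d/www.conf
    [www]
    listen = /var/run/php5-fpm.sock
    user = www-data
    group = www-data

    ; hard cap on concurrent PHP processes
    pm = dynamic
    pm.max_children = 15
    pm.start_servers = 5
    pm.min_spare_servers = 3
    pm.max_spare_servers = 8

With pm = dynamic, pm.max_children is the ceiling; excess requests sit in the listen backlog instead of spawning more competing processes, which is exactly the behaviour you want.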

If you have multiple code bases with separate process pools, you might want a larger total number of processes, but you probably don't want more than 3 to 5 processes per pool.
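For example (pool names hypothetical, values illustrative), two applications might each get their own small pool:

    ; /etc/php5/fpm/pool.d/app-one.conf
    [app-one]
    listen = /var/run/php5-fpm-app-one.sock
    ; a fixed 5 processes for this pool
    pm = static
    pm.max_children = 5

    ; /etc/php5/fpm/pool.d/app-two.conf
    [app-two]
    listen = /var/run/php5-fpm-app-two.sock
    pm = static
    pm.max_children = 5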

You do want plenty of nginx worker connections for handling static files. There's unlikely to be any improvement beyond 4096, and only in unusual circumstances would you see a difference between 1000 and 4000. (Unless you are primarily serving static files, which is quite a different scenario; but since you're talking about PHP processes on this box I don't imagine that's the case here.)
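In nginx terms that's the worker_connections directive. A sketch, with illustrative values:

    # /etc/nginx/nginx.conf
    worker_processes auto;

    events {
        # per-worker cap; beyond about 4096 there's rarely any measurable gain
        worker_connections 4096;
    }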

I suspect your timeouts are too long. If there's nothing going on, drop the connection and get on to the next one.
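Where to cut them depends on where the connections are idling, but these are the usual knobs on both sides; a sketch with illustrative values, not recommendations:

    # nginx (http {} or server {} block): give up on idle clients quickly
    client_header_timeout 10s;
    client_body_timeout   10s;
    send_timeout          10s;
    keepalive_timeout     15s;
    # how long nginx waits on php-fpm before giving up
    fastcgi_read_timeout  30s;

    ; php-fpm pool config: kill requests that run away
    request_terminate_timeout = 30s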

Solution 2:

1) Memory - The first thing I'd look at is why your scripts need 50MB of memory if all they're doing is a simple search. I'm assuming you're not actually returning multiple megabytes of data per user if you're serving hundreds of requests a second.
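If you're not sure where that memory goes, php-fpm can log the peak memory of every request. Assuming your php5-fpm build supports access.format (a sketch; the log path is illustrative):

    ; in the pool config
    access.log = /var/log/php5-fpm.access.log
    ; log URI, status, duration in ms and peak memory in MB per request
    access.format = "%R %m %r %s %{mili}dms %{mega}MMB"

That makes it easy to see whether one endpoint is allocating the 50MB or all of them are.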

There is a known issue in the old libmysqlclient-based MySQL driver that makes PHP allocate the maximum possible size for any TEXT or BLOB column, rather than just the memory actually needed. This can be fixed by switching to the mysqlnd (MySQL Native Driver) library, with no code change required.
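You can check which driver you're on, and on Debian/Ubuntu the switch is usually just a package swap; a sketch, assuming the php5-mysqlnd package is available in your repositories:

    # is mysqlnd already compiled in?
    php -i | grep -i mysqlnd

    # if not: replace the libmysqlclient-based driver (Debian/Ubuntu)
    apt-get install php5-mysqlnd
    service php5-fpm restart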

2) Your setting of pm.max_requests = 10000 is probably not a great choice. If each request takes 2 seconds, you're telling the process manager to restart each process after 20,000 seconds, or almost 6 hours. That is a very long time, and plenty of time for even a slow memory leak to become a problem. Putting it back to 500 would still mean a restart only roughly every 17 minutes, which would have no measurable effect on performance but is likely to be more stable.
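In the pool config that's one line; the comment shows the arithmetic:

    ; 500 requests x ~2s each = a recycle roughly every 17 minutes,
    ; cheap insurance against slow memory leaks
    pm.max_requests = 500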

3) As Michael said, even if you can allow as many processes as you have users connecting, you still need to figure out where the bottleneck actually is. Even with several hundred PHP processes running at once, if they're all just waiting for the SQL server to become available they'll simply queue up and eventually start timing out.
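Standard tools will usually tell you which resource is saturated within a few minutes; a sketch of the usual first checks:

    vmstat 1             # swapping? high CPU wait?
    iostat -x 1          # disk utilisation and queue depth (sysstat package)
    top                  # CPU vs memory pressure, per process

    # and on the MySQL side, what the queries are actually waiting on:
    mysql -e 'SHOW FULL PROCESSLIST'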

Unless you can remove the bottleneck, you'll need to either implement a rate-limiting mechanism that only allows as many queries as your setup can handle, or graceful degradation that rejects the requests your server currently can't serve.
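nginx's limit_req module is one way to put that kind of back-pressure in front of PHP; a minimal sketch, assuming the search endpoint lives under /search (hypothetical path, illustrative rate):

    # http {} block: track clients by IP, allow 10 requests/second each
    limit_req_zone $binary_remote_addr zone=search:10m rate=10r/s;

    # server {} block
    location /search {
        # absorb short bursts, reject the rest (503 by default)
        limit_req zone=search burst=20;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
    }

Requests rejected this way fail fast with an error the client can retry, instead of tying up a PHP process while they wait.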