Unix socket connection limit

This may look like a question that has already been discussed and answered, but I am specifically looking for information that I could not find clearly answered anywhere.

I have an Nginx + php-fpm setup in which Nginx talks to the backend php-fpm FastCGI processes over unix sockets. Recently I heard that unix-socket-based connections are not as scalable as TCP-based connections. I am not sure what the limiting factor is here, especially since I run everything on the same host.
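For context, the relevant wiring looks roughly like this (socket path, pool name and user are just placeholders):

```
# nginx server block
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass   unix:/run/php-fpm/example.sock;
}

; matching php-fpm pool
[example]
user = nginx
group = nginx
listen = /run/php-fpm/example.sock
listen.owner = nginx
listen.group = nginx
```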

I can increase the maximum number of file descriptors system-wide or per user (nginx), and I can also raise the limit per nginx worker process. Is the file descriptor limit the limiting factor? These are the knobs I have been raising so far, as shown below.
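The values here are just examples:

```
# system-wide ceiling on open files
sysctl -w fs.file-max=2097152

# per-user limit for the nginx user, e.g. in /etc/security/limits.conf
nginx  soft  nofile  65535
nginx  hard  nofile  65535

# per-worker limits in nginx.conf
worker_rlimit_nofile 65535;
events {
    worker_connections 8192;
}
```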

I have very few websites configured in this setup, so the number of sockets (one per website) is small; I use fewer than 50. Is there a limit on concurrent connections per socket when multiple nginx workers are talking to multiple php-fpm processes in the backend under high load? Or what can actually prevent a socket from accepting these connections when concurrency is very high?
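The only per-socket setting I have come across so far is the listen backlog, which (if I understand it correctly) only limits connections queued for accept(), not the ones already being served. The value below is just an example:

```
; php-fpm pool config: queue length for connections not yet accepted
listen.backlog = 511

# kernel-side cap that any listen() backlog is clamped to
sysctl net.core.somaxconn
```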

Are there any other factors that can affect performance, such as locking or disk I/O?


Recently I heard that unix-socket-based connections are not as scalable as TCP-based connections

It's the other way around: unix sockets are more scalable than TCP connections, because with TCP you have to go through the whole network stack. Even on the same machine, each packet has to be encapsulated and decapsulated. With unix sockets you're doing direct process-to-process communication.
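To make the comparison concrete, the TCP variant of the same wiring would look roughly like this (address and port are only examples):

```
# nginx
fastcgi_pass 127.0.0.1:9000;

; php-fpm pool
listen = 127.0.0.1:9000
```

Every request then goes through the loopback interface (TCP handshake, packet encapsulation, an ephemeral client port per connection), whereas the unix: variant stays entirely within the kernel's local IPC path.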

However, in many cases avoiding the TCP overhead is just a micro-optimization; the real bottleneck is usually elsewhere.
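In an nginx + php-fpm setup, the limits that usually bite first are on the php-fpm side rather than the socket family. As a sketch (directive names are standard php-fpm pool options, values are only examples):

```
; php-fpm pool config
pm = dynamic
pm.max_children = 50     ; hard cap on concurrently served PHP requests
listen.backlog = 511     ; connections allowed to queue on the socket
```

Once all children are busy, new connections queue in the backlog (capped by net.core.somaxconn), and when that overflows nginx starts reporting connect() errors, regardless of whether the socket is unix or TCP.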