How to handle 1M websocket connections (Nginx/HAProxy/Amazon/Google) [duplicate]

What nginx or haproxy setup is suggested for a target of 100K concurrent websocket connections? My assumption is that a single nginx instance will not be able to handle that much traffic or that many concurrent connections. How should traffic to nginx/haproxy be split (at the DNS level, or via any Amazon/Google option)? How many concurrent websockets can a single nginx instance handle?

I have tried gathering relevant information from Google searches and SO posts.
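For context, here is a minimal nginx sketch of the kind of setup I have in mind (the backend address 10.0.0.10:8080, the /ws/ path, and the limit values are placeholders):

    worker_processes auto;
    worker_rlimit_nofile 200000;          # raise the per-worker fd limit

    events {
        worker_connections 100000;        # upper bound on connections per worker
    }

    http {
        # pass the websocket Upgrade handshake through to the backend
        map $http_upgrade $connection_upgrade {
            default upgrade;
            ''      close;
        }

        server {
            listen 80;

            location /ws/ {
                proxy_pass http://10.0.0.10:8080;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_read_timeout 1h;    # keep idle websockets open
            }
        }
    }

Note that each proxied websocket uses two connections (client side plus upstream side), so worker_connections has to cover both.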


Solution 1:

There are people running chat servers behind haproxy load balancers at even higher loads. The highest load reported to me in a private e-mail (with a copy of the stats page) was around 300k connections per process (hence 600k sockets). Note that under Linux a process is limited to 1M file descriptors by default (hence 500k end-to-end connections), but that can be tweaked in /proc.
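A sketch of what that /proc tweak might look like, assuming you want headroom for roughly 3M file descriptors (the exact numbers are placeholders; check your distribution's defaults first):

    # per-process ceiling (defaults to 1048576, i.e. ~1M fds)
    cat /proc/sys/fs/nr_open
    sysctl -w fs.nr_open=3000000

    # system-wide ceiling across all processes
    sysctl -w fs.file-max=3000000

    # the process itself still needs its rlimit raised, e.g. in
    # /etc/security/limits.conf or via the service manager:
    #   haproxy  soft  nofile  3000000
    #   haproxy  hard  nofile  3000000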

The most important thing to consider at such loads is the amount of RAM you need. The kernel-side socket buffers will always require at least 4kB per direction per side, hence 16kB minimum per end-to-end connection. HAProxy 1.5 and lower will have two buffers per connection (eg: 4kB buffers are enough for websocket). 1.6 can run without those buffers and only keep them allocated for the rare connections with data. So at least that's 16 GB of RAM per million of connection, or around 24 GB with older versions. It may be worth spreading this over multiple processes on SMP machines to reduce the latency. Keep in mind that in order to simply establish 1M connections, it can take 10 seconds at 100k conns/s. All these connections induce some work for a few bytes each, and dealing with 1M active connections will definitely induce an important work and a high load on the system.