Scaling beyond 65k open files (TCP connections)
I have a requirement for my Ubuntu 14.04.3 LTS server to have >65k clients connected to it via TCP. Although I have memory and CPU to spare, I'm unable to get more than 65k clients to connect. I suspected an open-file limit issue, and I have followed the many existing solutions on Stack Overflow for raising the limit on the number of open files, but I'm still hitting the limit. I made the following changes:
/etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
root soft nofile 500000
root hard nofile 500000
/etc/pam.d/common-session
session required pam_limits.so
/etc/pam.d/common-session-noninteractive
session required pam_limits.so
/etc/sysctl.conf
fs.file-max = 500000
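For reference, edits to /etc/sysctl.conf don't take effect until they are reloaded; a quick way to apply and confirm the file-max change without a reboot:

:~$ sudo sysctl -p
fs.file-max = 500000
:~$ cat /proc/sys/fs/file-max
500000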
When I check ulimit it looks to be correctly updated, as you can see below...
:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30038
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 500000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 30038
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
:~$ cat /proc/1739/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size unlimited unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 30038 30038 processes
Max open files 500000 500000 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 30038 30038 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
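As an aside, a process started before the limits were raised keeps its old limits until it is restarted. On systems with util-linux 2.21 or newer (14.04 may ship an older version), the prlimit tool can raise them in place, something like:

:~$ sudo prlimit --pid 1739 --nofile=500000:500000
:~$ grep 'open files' /proc/1739/limits
Max open files            500000               500000               files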
Unfortunately there still seems to be a limit somewhere preventing additional clients from connecting: the server hits 65,589 open files and refuses to open additional files (TCP connections).
:~$ sudo ls /proc/1739/fd | wc -l
65589
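It's worth ruling out the system-wide handle table as the ceiling too; /proc/sys/fs/file-nr reports allocated, unused, and maximum handles (the first two values below are illustrative):

:~$ cat /proc/sys/fs/file-nr
66432   0       500000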
Is there some other setting in Ubuntu/Linux that needs to be changed?
Update
Setting vm.max_map_count seems to have done the trick: I ran sudo sysctl vm.max_map_count=16777216 and added a vm.max_map_count entry to /etc/sysctl.conf so it survives reboots.
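For completeness, the persistent entry and a check that the live value took:

/etc/sysctl.conf
vm.max_map_count = 16777216

:~$ sysctl vm.max_map_count
vm.max_map_count = 16777216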
As you can see...
:~$ sudo ls /proc/2391/fd | wc -l
73609
:~$ netstat -an | grep ESTABLISHED | wc -l
73561
I'll have to be careful, of course, to set the open-file limit to a value that corresponds to the desired amount of memory utilization. A page linked from the answer @sysadmin1138 provided recommends a rough guide of 16K of memory per map (open TCP socket), which seems like a good place to start. I am seeing a different symptom now, though: the number of open files/sockets fluctuates when the server attempts to publish a message to the connected clients. That will require some further investigation.
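As a rough sanity check of that 16K-per-socket guide (the linked page's estimate, not something I've measured), shell arithmetic gives the expected footprint:

:~$ echo "$(( 73609 * 16 / 1024 )) MiB"   # at the current connection count
1150 MiB
:~$ echo "$(( 500000 * 16 / 1024 )) MiB"  # at the configured nofile ceiling
7812 MiB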
Per "Max number of socket on Linux", the sysctl variable vm.max_map_count may be of use here.
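The default is quite low (65530 on most kernels, suspiciously close to the ~65.5k wall described above), so it's easy to compare against what the process is actually using:

:~$ sysctl vm.max_map_count
vm.max_map_count = 65530
:~$ sudo wc -l < /proc/1739/maps   # one line per current mapping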
You may be bumping into the 16-bit port space (65,536 ports). Note, though, that this only caps ephemeral ports on the connecting side, per source IP; a listening server can accept far more connections than that, since each TCP connection is identified by its full 4-tuple.
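If port exhaustion were the culprit it would show up on the connecting side, where the ephemeral port range caps connections per destination; it can be inspected and widened like so:

:~$ cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
:~$ sudo sysctl net.ipv4.ip_local_port_range="1024 65535"
net.ipv4.ip_local_port_range = 1024 65535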