Sane value for net.ipv4.tcp_max_syn_backlog in sysctl.conf
I'm tuning sysctl.conf.
According to an optimization guide on Linode's website, the following is a sane value to set in sysctl.conf:
net.ipv4.tcp_max_syn_backlog = 3240000
However, the value for the same setting in an Arch Linux optimization guide is:
net.ipv4.tcp_max_syn_backlog = 65536
Lastly, on another optimization blog (old, but still ranking high in Google results), the recommended value is:
net.ipv4.tcp_max_syn_backlog = 4096
All these ballparks are wildly different. What's the reasoning behind setting this value to a high number (vs. a low number)? Which one is the actual 'sane' value to start with?
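For reference, this is how I'd test a candidate before persisting it in sysctl.conf (standard sysctl usage; 4096 is just the lowest of the values above):
sysctl net.ipv4.tcp_max_syn_backlog          # show the current value
sysctl -w net.ipv4.tcp_max_syn_backlog=4096  # try a candidate at runtime
sysctl -p                                    # reload /etc/sysctl.conf after editing it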
Solution 1:
It mainly depends on how much traffic you're running through your server(s). There are several important questions:
- How many concurrent connections do you expect to handle?
- What is the average response time? (If generating a response takes e.g. 10-50 seconds, you might easily run out of resources, even without a DDoS attack.)
- Which server do you use (nginx, haproxy, varnish, ...)? Each of them requests its own listen() backlog; see the sketch just after this list.
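Each of those servers asks the kernel for its own accept backlog (nginx, for example, exposes it as the backlog= parameter of its listen directive), and whatever the application requests is silently capped at net.core.somaxconn. One way to see what a listening socket actually ended up with (using ss from iproute2):
# For sockets in the LISTEN state, ss reports the accept queue:
#   Recv-Q = connections currently waiting to be accept()ed
#   Send-Q = the backlog the application requested, capped at net.core.somaxconn
ss -ltn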
You should be monitoring:
netstat -s | grep "SYNs to LISTEN"
A rising count here is a symptom that your server is dropping connection attempts (because e.g. the backlog queue is full).
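To get a quick feel for whether the counter is still growing, a rough sketch along these lines should work (assuming the usual "... SYNs to LISTEN sockets dropped" output line, where the count is the first field; if the line is absent, the counter is zero):
before=$(netstat -s | awk '/SYNs to LISTEN/ {print $1; exit}')
sleep 60
after=$(netstat -s | awk '/SYNs to LISTEN/ {print $1; exit}')
echo "SYN drops in the last 60s: $(( ${after:-0} - ${before:-0} ))"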
Netstat statistics are exported to /proc/net/netstat, where this stat is called ListenDrops. That file might be easier to parse with a script, or you can use something like:
cat /proc/net/netstat | awk '(f==0) { i=1; while ( i<=NF) {n[i] = $i; i++ }; f=1; next} \
(f==1){ i=2; while ( i<=NF){ printf "%s = %d\n", n[i], $i; i++}; f=0}'
to get human-readable names for the stats. You should be able to collect this data using e.g. telegraf, collectd or Prometheus.
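If you only care about ListenDrops itself, a smaller sketch in the same spirit (assuming the usual header-line/value-line layout of the TcpExt rows in /proc/net/netstat):
awk '$1 == "TcpExt:" {
  # the first TcpExt: line carries the counter names, the second the values
  if (!have_names) { split($0, names); have_names = 1; next }
  for (i = 2; i <= NF; i++) if (names[i] == "ListenDrops") print names[i] " = " $i
}' /proc/net/netstat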
Kernel tuning
net.ipv4.tcp_max_syn_backlog
- The maximum number of half-open connections (the server has sent a SYN-ACK and is still waiting for the client's ACK) that can be kept in the queue (source).
net.core.somaxconn
- The maximum number of fully established connections that can be queued waiting for the application to accept() them.
net.core.netdev_max_backlog
- The maximum number of packets in the receive queue that have passed through the network interface and are waiting to be processed by the kernel.
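Put together, a hedged /etc/sysctl.conf sketch could look like the following; the numbers are purely illustrative placeholders, not recommendations, and should be sized from the monitoring described above:
# /etc/sysctl.conf - illustrative placeholders only, derive real values from your own measurements
# half-open (SYN_RECV) connections the kernel will remember
net.ipv4.tcp_max_syn_backlog = 4096
# completed connections waiting for the application to accept() them
net.core.somaxconn = 1024
# packets queued between the network interface and the protocol stack
net.core.netdev_max_backlog = 2000
Apply the file with sysctl -p and re-check the ListenDrops counter afterwards.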
These settings are tightly connected with the number of open files (in Linux each new connection consumes file handles; a proxied connection, for instance, needs one descriptor for the client side and one for the upstream side). You can check your limits using:
cat /proc/sys/fs/file-nr
8160 0 3270712
which means that the server currently has 8160 file handles allocated (the second field is the number of allocated-but-unused handles) out of a maximum of 3270712.
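If the first number starts approaching the last one, the limits can be raised; a hedged sketch (the 65536 and the www-data user are placeholders, not recommendations):
# system-wide maximum number of file handles
sysctl -w fs.file-max=3270712
# per-process limit for the service user, e.g. in /etc/security/limits.conf:
# www-data  soft  nofile  65536
# www-data  hard  nofile  65536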