Linux iptables / conntrack performance issue

I have a test-setup in the lab with 4 machines:

  • 2 old P4 machines (t1, t2)
  • 1 Xeon 5420 DP 2.5 GHz 8 GB RAM (t3) Intel e1000
  • 1 Xeon 5420 DP 2.5 GHz 8 GB RAM (t4) Intel e1000

to test Linux firewall performance, since we were bitten by a number of SYN-flood attacks in the last months. All machines run Ubuntu 12.04 64-bit. t1, t2 and t3 are interconnected through a 1 Gbit/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers, generating a packet storm with (192.168.4.199 is t4):

    hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

t4 drops all incoming packets to avoid confusion with gateways, performance issues on t4 itself, etc. I watch the packet statistics in iptraf. I have configured the firewall (t3) as follows:
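For completeness, the drop on t4 is a single iptables rule (a sketch; the interface name `eth0` is an assumption, adjust to the interface facing t3):

    # On t4, as root: silently discard everything arriving on the test interface
    iptables -A INPUT -i eth0 -j DROP
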

  • stock 3.2.0-31-generic #50-Ubuntu SMP kernel
  • rhash_entries=33554432 as kernel parameter
  • sysctl as follows:

    net.ipv4.ip_forward = 1
    net.ipv4.route.gc_elasticity = 2
    net.ipv4.route.gc_timeout = 1
    net.ipv4.route.gc_interval = 5
    net.ipv4.route.gc_min_interval_ms = 500
    net.ipv4.route.gc_thresh = 2000000
    net.ipv4.route.max_size = 20000000
    

(I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible).
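For reproducibility, the values above can be set at runtime with `sysctl -w` (a sketch; run as root on t3, and put the same lines without `sysctl -w` into /etc/sysctl.conf to persist them across reboots):

    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv4.route.gc_elasticity=2
    sysctl -w net.ipv4.route.gc_timeout=1
    sysctl -w net.ipv4.route.gc_interval=5
    sysctl -w net.ipv4.route.gc_min_interval_ms=500
    sysctl -w net.ipv4.route.gc_thresh=2000000
    sysctl -w net.ipv4.route.max_size=20000000
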

The results of these efforts are somewhat odd:

  • t1 and t2 each manage to send about 200k packets/s. In the best case t4 sees around 200k packets/s in total, so half of the packets are lost.
  • t3 is nearly unusable on the console, even though packets are flowing through it (high numbers of soft IRQs)
  • the route cache garbage collector is nowhere near predictable, and with the default settings it is overwhelmed by very few packets/s (<50k packets/s)
  • activating stateful iptables rules makes the packet rate arriving at t4 drop to around 100k packets/s, effectively losing more than 75% of the packets
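To see where the pressure builds up on t3 during the flood, the conntrack and route-cache counters can be watched (a sketch; the /proc paths are those of a 3.2 kernel with the nf_conntrack module loaded):

    # Current vs. maximum tracked connections (conntrack table pressure)
    cat /proc/sys/net/netfilter/nf_conntrack_count
    cat /proc/sys/net/netfilter/nf_conntrack_max

    # Per-CPU route-cache statistics, including GC activity
    cat /proc/net/stat/rt_cache
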

And all of this happens with just two old P4 machines sending as many packets as they can, which is my main concern: nearly anyone on the net should be capable of this.

So here is my question: did I overlook some important point in the configuration or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?


Solution 1:

I would migrate to a kernel >= 3.6, which no longer has a routing cache. That should solve part of your problems.
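On Ubuntu 12.04 a newer kernel can be installed from Canonical's mainline kernel builds (an assumption about your setup; mainline kernels are not officially supported). After rebooting, confirm what is actually running:

    # Should report 3.6 or later once the new kernel is booted
    uname -r

On kernels >= 3.6 the per-destination route cache is gone entirely, so its garbage collector (and the `net.ipv4.route.gc_*` sysctls you tuned) no longer factor into forwarding performance.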