Traffic shaping on Linux with HTB: weird results

I think I kinda sorta fixed the issue: I needed to tie the qdiscs/classes to an IMQ device rather than an ETH device. Once I did that, the shaper started working.
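For anyone hitting the same wall, the IMQ approach boils down to redirecting ingress traffic onto an imq device and hanging the shaper off that instead of the physical interface. Roughly like this (a sketch, not my exact config; it assumes a kernel and iptables patched for IMQ, and that eth1 is the interface the traffic comes in on):

# modprobe imq numdevs=1                                      # IMQ is an out-of-tree patch
# ip link set imq0 up
# iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0   # push ingress traffic through imq0
# tc qdisc add dev imq0 root handle 1: htb default 10         # the shaper now lives on imq0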

However!

While I could get the shaper to limit traffic incoming to a machine, I couldn't get it to split traffic fairly (even though I've attached an SFQ to my HTB).

What happened is this: I started a download and it got limited to 75Kbyte/s. But when I started a second download, instead of splitting the traffic evenly between the two sessions (35Kbyte/s + 35Kbyte/s), the shaper barely dropped the speed of session one and gave session two a meager 500b/s. After a couple of minutes the split settled at something like 65Kbyte/s + 10Kbyte/s. *indignantly* That's not fair! :)
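For reference, my shaper at that point boiled down to roughly the following (device name and rates are illustrative, not copied verbatim from my config): a single HTB class capping everything at about 600kbit (≈75Kbyte/s), with an SFQ attached for per-flow fairness.

# tc qdisc add dev imq0 root handle 1: htb default 1
# tc class add dev imq0 parent 1: classid 1:1 htb rate 600kbit    # ~75Kbyte/s overall cap
# tc qdisc add dev imq0 parent 1:1 handle 10: sfq perturb 10      # SFQ should split flows fairly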

So I dismantled my config and set up ClearOS 5.2 (a Linux distro with a pre-built firewall system), which has a traffic shaper module. The module uses an HTB + SFQ setup very similar to what I had configured by hand.

Same fairness issue! The overall limit is enforced well, but there is no fairness: two downloads end up sharing in the same weird 65/15 proportion rather than 35/35.

Any ideas, guys?


Try using this example instead:

# tc qdisc add dev eth1 root handle 1: htb default 10

# tc class add dev eth1 parent 1: classid 1:1 htb rate 75Kbit
# tc class add dev eth1 parent 1:1 classid 1:10 htb rate 1Kbit ceil 35Kbit
# tc class add dev eth1 parent 1:1 classid 1:20 htb rate 35Kbit ceil 75Kbit

# tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10
# tc qdisc add dev eth1 parent 1:20 handle 20: sfq perturb 10

# tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 \
    match ip dst 10.41.240.240 flowid 1:20

This creates an HTB bucket with a rate limit of 75Kbit/s, then attaches two SFQs (fair queuing qdiscs) underneath it.

By default, everyone will be in the first queue (class 1:10), with a guaranteed rate of 1Kbit and a max rate of 35Kbit. Your IP of 10.41.240.240 will be guaranteed 35Kbit and can take as much as 75Kbit if the default class isn't using the bandwidth. Two connections to .240 should average out to roughly the same speed per connection, and a connection to .240 running alongside a connection to any other host will split bandwidth at roughly a 35:1 ratio between the two classes.
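If you want to confirm where packets are actually landing and at what rate, the standard tc statistics are enough (same device as in the example above):

# tc -s class show dev eth1      # per-class byte/packet counters and observed rate
# tc -s qdisc show dev eth1      # check that both sfq instances are seeing traffic

You should see the 1:20 counters move only for traffic to 10.41.240.240, and everything else accumulate under 1:10.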

I see this has been dead since Apr... so hopefully this info is still of value to you.


This may be related to your problem:

From: http://www.shorewall.net/traffic_shaping.htm

A Warning to Xen Users

If you are running traffic shaping in your dom0 and traffic shaping doesn't seem to be limiting outgoing traffic properly, it may be due to "checksum offloading" in your domU(s). Check the output of "shorewall show tc". Here's an excerpt from the output of that command:

class htb 1:130 parent 1:1 leaf 130: prio 3 quantum 1500 rate 76000bit ceil 230000bit burst 1537b/8 mpu 0b overhead 0b cburst 1614b/8 mpu 0b overhead 0b level 0 
 Sent 559018700 bytes 75324 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 299288bit 3pps backlog 0b 0p requeues 0 
 lended: 53963 borrowed: 21361 giants: 90174
 tokens: -26688 ctokens: -14783

There are two obvious problems in the above output:

  • The rate (299288) is considerably larger than the ceiling (230000).
  • There are a large number (90174) of giants reported.

This problem is corrected by disabling "checksum offloading" in your domU(s) using the ethtool utility. See one of the Xen articles for instructions.
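For reference, turning checksum offloading off inside a domU typically looks something like this (run it in the domU; eth0 here is a placeholder for its virtual interface, and the change does not persist across reboots unless you script it):

# ethtool -K eth0 tx off                 # disable TX checksum offloading
# ethtool -k eth0 | grep -i checksum     # verify the current offload settings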