Linux qos: tc DRR qdisc does not work
There is a tc queueing discipline called DRR (Deficit Round Robin).
It has the same capabilities as HTB, but instead of using buckets filled with tokens, it simply assigns every queue a deficit counter (DC). When a packet is sent, the DC is decreased by the packet size. If the DC is smaller than the packet size, the DC is increased by the queue's quantum and the next queue is processed. So it can divide outgoing traffic in a given ratio without knowing the channel width (which HTB requires). See http://www.unix.com/man-page/linux/8/tc-drr/
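The deficit-counter logic described above can be sketched in a few lines of Python (a toy model for illustration only, not the kernel implementation; class and method names are my own):

```python
from collections import deque

class DRRScheduler:
    """Toy Deficit Round Robin: one FIFO and one deficit counter per class."""

    def __init__(self, quanta):
        # quanta: mapping of class name -> quantum in bytes
        self.quanta = quanta
        self.queues = {name: deque() for name in quanta}
        self.deficit = {name: 0 for name in quanta}

    def enqueue(self, name, pkt_size):
        self.queues[name].append(pkt_size)

    def dequeue(self):
        """Visit queues round-robin; each visit adds the quantum to the
        deficit counter, and the queue sends while the counter covers
        the head packet."""
        while any(self.queues.values()):
            for name, q in self.queues.items():
                if not q:
                    self.deficit[name] = 0  # an empty queue forfeits its deficit
                    continue
                self.deficit[name] += self.quanta[name]
                while q and q[0] <= self.deficit[name]:
                    pkt = q.popleft()
                    self.deficit[name] -= pkt
                    yield name, pkt
```

With two backlogged classes using quanta 600 and 1400, the served bytes converge to roughly a 600:1400 = 3:7 split, regardless of the link speed.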
Setup: two hosts, 172.16.1.1 and 172.16.1.2.
On the first host we listen for traffic:
nc -l 8111
nc -l 8112
On the second host we check the speed:
pv /dev/zero | nc 172.16.1.1 8111
pv /dev/zero | nc 172.16.1.1 8112
At this point the speeds are equal (pv is a utility that measures the speed of data transferred through a pipeline). Now add DRR on the second host (HTB at the top is used to emulate a real channel speed limit):
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
tc qdisc add dev eth0 parent 1:1 handle 2: drr
tc class add dev eth0 parent 2: classid 2:1 drr quantum 600
tc class add dev eth0 parent 2: classid 2:2 drr quantum 1400
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8111 0xffff classid 2:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8112 0xffff classid 2:2
The speeds remain equal :( What am I doing wrong?
Solution 1:
Answer: DRR does not drop packets itself. To get the desired behavior, add a child qdisc such as pfifo limit 50 to each DRR class,
so that the child qdisc drops packets instead of queueing them effectively indefinitely. The solution was found here: linux.org.ru thread
Reference: man tc-drr, NOTES section: "This implementation does not drop packets from the longest queue on overrun, as limits are handled by the individual child qdiscs."
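Applied to the original setup, the fix amounts to two more commands (a sketch; the handles 10: and 20: are arbitrary choices of mine, and limit 50 is the value suggested above):

```
tc qdisc add dev eth0 parent 2:1 handle 10: pfifo limit 50
tc qdisc add dev eth0 parent 2:2 handle 20: pfifo limit 50
```

Once both flows actually build a backlog in their bounded FIFOs, DRR's quanta take effect and the 600:1400 ratio becomes visible.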
Solution 2:
drr is a scheduler; you still have to allocate different bandwidth to classes with HTB. I guess you thought that specifying quanta of 600 and 1400 would give something close to a 1:2 ratio. In fact it will not. You may get close to that ratio with your config only under congestion, e.g. if you create multiple UDP streams and then measure the two in question, but even then it is still not the behavior you are expecting.
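If a guaranteed 1:2 bandwidth split is the actual goal, one conventional alternative is to give each flow its own HTB class (a sketch only, untested; the 33/66 mbit rates are example values I chose to approximate 1:2 within the 100 mbit ceiling):

```
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 33mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 66mbit ceil 100mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8111 0xffff classid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8112 0xffff classid 1:20
```

With ceil set to the full link rate, either flow can borrow unused bandwidth, but under contention the guaranteed rates enforce the 1:2 split.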