How to do traffic shaping (rate limiting) with TC per OpenVPN client
Here is a solution for traffic shaping, i.e. limiting the data rate of individual clients, with tc (traffic control) using a script called by OpenVPN.
The traffic control settings are handled by a script, `tc.sh`, with the following features:
- Called by OpenVPN via the `up`, `down`, `client-connect` and `client-disconnect` directives
- All settings are passed via environment variables
- Theoretically supports up to a `/16` subnet (up to 65534 clients)
- Filtering uses u32 hashing filters for very fast matching of large numbers of clients
- Filters and classes are set only for clients that are currently connected, and are individually added and removed without affecting other `tc` settings, using unique identifiers (hash tables, handles, classids) generated from the last 16 bits of the client's remote VPN IP (see the sketch after this list)
- Individual limiting/throttling of clients based on the CN (client certificate common name)
- Client settings are stored in files containing their "subscription class" (`bronze`, `silver` or `gold`); to use other classes, simply edit the script and modify it as needed
- The "subscription class" and the corresponding data rate ("bandwidth") can be modified on the fly by external applications while a client is connected
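To illustrate the identifier scheme, here is a standalone sketch of the same arithmetic the script's `create_identifiers()` uses, for a hypothetical client IP 10.8.1.123:

```
ip=10.8.1.123                                        # example client VPN IP
ip_byte3=$(echo "$ip" | cut -d. -f3)                 # 1
ip_byte4=$(echo "$ip" | cut -d. -f4)                 # 123
handle=$(printf "%x" "$ip_byte3")                    # 1   -> u32 filter handle
hash=$(printf "%x" "$ip_byte4")                      # 7b  -> hash bucket
classid=$(printf "%x" $((256*ip_byte3+ip_byte4)))    # 17b -> HTB classid 1:17b
```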
Configuration
OpenVPN server configuration `/etc/openvpn/tc/conf`:
port 1194
proto udp
dev tun
sndbuf 0
rcvbuf 0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
tls-auth ta.key 0
topology subnet
server 10.8.0.0 255.255.0.0
keepalive 10 60
comp-lzo
persist-key
persist-tun
status /var/log/openvpn-tc-status.log
log /var/log/openvpn-tc.log
verb 3
script-security 2
down-pre
up /etc/openvpn/tc/tc.sh
down /etc/openvpn/tc/tc.sh
client-connect /etc/openvpn/tc/tc.sh
client-disconnect /etc/openvpn/tc/tc.sh
push "redirect-gateway def1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
Replace the DNS servers in the last two lines with the IP addresses you actually want to push to the clients.
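For completeness, a minimal client configuration matching this server might look as follows; `<server-address>` and the `client1` certificate/key names are placeholders for your own values. Note that `tls-auth` uses direction 1 on the client side, matching the server's 0, and `comp-lzo` must match as well:

```
client
dev tun
proto udp
remote <server-address> 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client1.crt
key client1.key
tls-auth ta.key 1
comp-lzo
verb 3
```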
Traffic control script `/etc/openvpn/tc/tc.sh`:
#!/bin/bash
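# tc.sh - per-client traffic shaping (rate limiting) for OpenVPN.
# Invoked by OpenVPN via the up/down/client-connect/client-disconnect
# directives (dispatch on $script_type at the bottom), or manually as
# "tc.sh update <client-CN>".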
ipdir=/etc/openvpn/tc/ip
dbdir=/etc/openvpn/tc/db
ip="$ifconfig_pool_remote_ip"
cn="$common_name"
ip_local="$ifconfig_local"
debug=0
log=/tmp/tc.log
if [[ "$debug" > 0 ]]; then
exec >>"$log" 2>&1
chmod 666 "$log" 2>/dev/null
if [[ "$debug" > 1 ]]; then
date
id
echo "PATH=$PATH"
[[ "$debug" > 2 ]] && printenv
fi
echo
echo "script_type=$script_type"
echo "dev=$dev"
echo "ip=$ip"
echo "user=$cn"
echo "\$1=$1"
echo "\$2=$2"
echo "\$3=$3"
fi
cut_ip_local() {
    if [ -n "$ip_local" ]; then
        ip_local_byte1=$(echo "$ip_local" | cut -d. -f1)
        ip_local_byte2=$(echo "$ip_local" | cut -d. -f2)
    fi
    [[ "$debug" -gt 0 ]] && echo "ip_local_byte1=$ip_local_byte1"
    [[ "$debug" -gt 0 ]] && echo "ip_local_byte2=$ip_local_byte2"
}
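# Derive unique tc identifiers (u32 handle, hash bucket, HTB classid)
# from the 3rd and 4th octet of the client's remote VPN IP.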
create_identifiers() {
    if [ -n "$ip" ]; then
        ip_byte3=$(echo "$ip" | cut -d. -f3)
        handle=$(printf "%x" "$ip_byte3")
        ip_byte4=$(echo "$ip" | cut -d. -f4)
        hash=$(printf "%x" "$ip_byte4")
        classid=$(printf "%x" $((256*ip_byte3+ip_byte4)))
    fi
    [[ "$debug" -gt 0 ]] && echo "ip_byte3=$ip_byte3"
    [[ "$debug" -gt 0 ]] && echo "ip_byte4=$ip_byte4"
    [[ "$debug" -gt 0 ]] && echo "handle=$handle"
    [[ "$debug" -gt 0 ]] && echo "hash=$hash"
}
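# Called on OpenVPN "up": create the root HTB qdisc, the ingress qdisc and
# two 256-bucket u32 hash tables keyed on the last octet of the dst/src IP.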
start_tc() {
    [[ "$debug" -gt 1 ]] && echo "start_tc()"
    cut_ip_local
    echo "$dev" > "$ipdir"/dev
    tc qdisc add dev "$dev" root handle 1: htb
    tc qdisc add dev "$dev" handle ffff: ingress
    tc filter add dev "$dev" parent 1:0 prio 1 protocol ip u32
    tc filter add dev "$dev" parent 1:0 prio 1 handle 2: protocol ip u32 divisor 256
    tc filter add dev "$dev" parent 1:0 prio 1 protocol ip u32 ht 800:: \
        match ip dst "${ip_local_byte1}"."${ip_local_byte2}".0.0/16 \
        hashkey mask 0x000000ff at 16 link 2:
    tc filter add dev "$dev" parent ffff:0 prio 1 protocol ip u32
    tc filter add dev "$dev" parent ffff:0 prio 1 handle 3: protocol ip u32 divisor 256
    tc filter add dev "$dev" parent ffff:0 prio 1 protocol ip u32 ht 800:: \
        match ip src "${ip_local_byte1}"."${ip_local_byte2}".0.0/16 \
        hashkey mask 0x000000ff at 12 link 3:
}
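# Called on OpenVPN "down": remove all shaping state from the device.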
stop_tc() {
    [[ "$debug" -gt 1 ]] && echo "stop_tc()"
    tc qdisc del dev "$dev" root
    tc qdisc del dev "$dev" handle ffff: ingress
    [ -e "$ipdir"/dev ] && rm "$ipdir"/dev
}
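# Called on client-connect: look up the client's subscription class and
# install its per-client HTB class, egress filter and ingress policer.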
bwlimit-enable() {
    [[ "$debug" -gt 1 ]] && echo "bwlimit-enable()"
    create_identifiers
    echo "$ip" > "$ipdir"/"$cn".ip
    # Find this user's bandwidth limit; an unknown or missing
    # subscription falls through to the default rate below
    [[ "$debug" -gt 0 ]] && echo "userdbfile=${dbdir}/${cn}"
    user=$(cat "${dbdir}/${cn}" 2>/dev/null)
    [[ "$debug" -gt 0 ]] && echo "subscription=$user"
    if [ "$user" == "gold" ]; then
        downrate=100mbit
        uprate=100mbit
    elif [ "$user" == "silver" ]; then
        downrate=10mbit
        uprate=10mbit
    elif [ "$user" == "bronze" ]; then
        downrate=1mbit
        uprate=1mbit
    else
        downrate=10kbit
        uprate=10kbit
    fi
    # Limit traffic from VPN server to client
    tc class add dev "$dev" parent 1: classid 1:"$classid" htb rate "$downrate"
    tc filter add dev "$dev" parent 1:0 protocol ip prio 1 \
        handle 2:"${hash}":"${handle}" \
        u32 ht 2:"${hash}": match ip dst "$ip"/32 flowid 1:"$classid"
    # Limit traffic from client to VPN server
    # Maybe better use ifb for ingress? See: https://serverfault.com/a/386791/209089
    tc filter add dev "$dev" parent ffff:0 protocol ip prio 1 \
        handle 3:"${hash}":"${handle}" \
        u32 ht 3:"${hash}": match ip src "$ip"/32 \
        police rate "$uprate" burst 80k drop flowid :"$classid"
}
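# Called on client-disconnect (and by "update"): remove the client's
# filters and class without touching other clients.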
bwlimit-disable() {
    [[ "$debug" -gt 1 ]] && echo "bwlimit-disable()"
    create_identifiers
    tc filter del dev "$dev" parent 1:0 protocol ip prio 1 \
        handle 2:"${hash}":"${handle}" u32 ht 2:"${hash}":
    tc class del dev "$dev" classid 1:"$classid"
    tc filter del dev "$dev" parent ffff:0 protocol ip prio 1 \
        handle 3:"${hash}":"${handle}" u32 ht 3:"${hash}":
    # Remove the CN <-> IP mapping file
    [ -e "$ipdir"/"$cn".ip ] && rm "$ipdir"/"$cn".ip
}
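# Dispatch: OpenVPN sets $script_type; when run manually, $1 selects
# the operation ("update" re-applies the limits for one client).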
case "$script_type" in
up)
start_tc
;;
down)
stop_tc
;;
client-connect)
bwlimit-enable
;;
client-disconnect)
bwlimit-disable
;;
*)
case "$1" in
update)
[ -z "$2" ] && echo "$0 $1: missing argument [client-CN]" >&2 && exit 1
[ ! -e "$ipdir"/"$2".ip ] && \
echo "$0 $1 $2: file $ipdir/$2.ip not found" >&2 && exit 1
[ ! -e "$ipdir"/dev ] && \
echo "$0 $1: file $ipdir/dev not found" >&2 && exit 1
ip=`cat "$ipdir/$2.ip"`
dev=`cat "$ipdir/dev"`
cn="$2"
bwlimit-disable
bwlimit-enable
;;
*)
echo "$0: unknown operation [$1]" >&2
exit 1
;;
esac
;;
esac
exit 0
Make it executable:
chmod +x /etc/openvpn/tc/tc.sh
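Since all parameters are passed via environment variables (`script_type`, `dev`, `common_name`, `ifconfig_local`, `ifconfig_pool_remote_ip`), the connect and disconnect paths can also be exercised by hand. A hypothetical dry run, assuming the `up` part has already run for `tun0` and a db entry exists for `client1`:

```
export script_type=client-connect dev=tun0 common_name=client1
export ifconfig_local=10.8.0.1 ifconfig_pool_remote_ip=10.8.0.2
/etc/openvpn/tc/tc.sh
```

Run it again with `script_type=client-disconnect` to remove the limits.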
Subscription database directory `/etc/openvpn/tc/db/`:
This directory contains one file per client, named after the client's CN, which holds the "subscription class" string. Configure it as follows:
mkdir -p /etc/openvpn/tc/db
echo bronze > /etc/openvpn/tc/db/client1
echo silver > /etc/openvpn/tc/db/client2
echo gold > /etc/openvpn/tc/db/client3
IP database directory `/etc/openvpn/tc/ip/`:
At run time this directory contains the CN-name <-> IP-address mapping and the tun interface name; an external application needs these files to update the `tc` settings while clients are connected.
mkdir -p /etc/openvpn/tc/ip
It will look as follows:
root@ubuntu:/etc/openvpn/tc/ip# ls -l
-rw-r--r-- 1 root root 9 Jun 1 08:31 client1.ip
-rw-r--r-- 1 root root 9 Jun 1 08:30 client2.ip
-rw-r--r-- 1 root root 9 Jun 1 08:30 client3.ip
-rw-r--r-- 1 root root 5 Jun 1 08:25 dev
root@ubuntu:/etc/openvpn/tc/ip# cat *
10.8.0.2
10.8.1.0
10.8.2.123
tun0
Enable IP forwarding:
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
Configuring NAT (network address translation):
If you have a static external IP address, use `SNAT`:
iptables -t nat -A POSTROUTING -s 10.8.0.0/16 -o <if> -j SNAT --to <ip>
Or, if you have a dynamically assigned IP address, use `MASQUERADE` (slower):
iptables -t nat -A POSTROUTING -s 10.8.0.0/16 -o <if> -j MASQUERADE
where:
- `<if>` is the name of the external interface (e.g. `eth0`)
- `<ip>` is the IP address of the external interface
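For example, assuming the external interface is `eth0` with the static address 203.0.113.10 (substitute your own values):

```
iptables -t nat -A POSTROUTING -s 10.8.0.0/16 -o eth0 -j SNAT --to 203.0.113.10
```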
Script usage and showing tc configuration
Updating "subscription class" and tc
settings from external application:
While the OpenVPN server is up and the client connected issue the following commands (example to upgrade client1
to "gold"
subscription):
echo gold > /etc/openvpn/tc/db/client1
/etc/openvpn/tc/tc.sh update client1
`tc` commands to show the settings:
tc -s qdisc show dev tun0
tc class show dev tun0
tc filter show dev tun0
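To check a single client, you can reproduce the script's classid arithmetic and grep for it; for example for the hypothetical client IP 10.8.0.2 (classid 1:2):

```
ip=10.8.0.2
classid=$(printf "%x" $(( 256 * $(echo "$ip" | cut -d. -f3) + $(echo "$ip" | cut -d. -f4) )))
tc class show dev tun0 | grep "1:$classid "
```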
Additional information
Notes and possible optimizations:
- The script and `tc` settings were only tested with a small number of clients
- Large-scale testing with massive simultaneous client traffic still has to be done, and the `tc` settings may have to be optimized
- I do not completely understand how the ingress settings work; they should probably be optimized with the use of an `ifb` interface as explained in this answer (a rough sketch follows below)
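For reference, here is a rough, untested sketch of the `ifb` approach for a single client; `tun0`, `ifb0`, the client IP 10.8.0.2 and the 10mbit rate are assumptions. Ingress traffic is redirected to `ifb0`, where it can be shaped with an HTB class instead of being policed:

```
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev ifb0 root handle 1: htb
tc class add dev ifb0 parent 1: classid 1:2 htb rate 10mbit
tc qdisc add dev tun0 handle ffff: ingress
tc filter add dev tun0 parent ffff: protocol ip u32 match ip src 10.8.0.2/32 \
    action mirred egress redirect dev ifb0
tc filter add dev ifb0 parent 1: protocol ip u32 match ip src 10.8.0.2/32 flowid 1:2
```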
Related documentation for a deeper understanding:
- Traffic Control HOWTO
- Linux Advanced Routing & Traffic Control HOWTO (especially chapters 9-12)
- HTB Linux queuing discipline manual - user guide (very good explanation of the `htb` qdisc)
- TC manpage
- Identifying tc filters for `add` and `del` operations
- OpenVPN 2.3 manpage