Bandwidth limit not working for outgoing traffic

pfoo

Active Member
Jan 21, 2012
It seems like the bandwidth limit is not working for outgoing traffic on the latest Proxmox VE 4 from the repository, running the latest kernel.

Relevant VM config (the rate option is in MB/s, so rate=20 is roughly 168 Mbit/s):
net0: virtio=42:01:22:B8:D3:BD,bridge=vmbr1,rate=20

The bridge is of type openvswitch.

iperf3 traffic to the limited VM is capped:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 194 MBytes 163 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 194 MBytes 163 Mbits/sec receiver

iperf3 traffic from the limited VM is NOT capped:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 43.8 GBytes 37.7 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 43.8 GBytes 37.7 Gbits/sec receiver

tc rules:
root@hv:~# tc qdisc ls dev tap102i0
qdisc htb 1: root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
root@hv:~# tc class ls dev tap102i0
class htb 1:1 root prio 0 rate 167772Kbit ceil 167772Kbit burst 1Mb cburst 1572b
root@hv:~# tc filter ls dev tap102i0
(empty)

According to /usr/share/perl5/PVE/Network.pm, we are missing at least:
- one qdisc rule
- one filter rule
- I don't know whether this script is still used, or if it has been replaced by /usr/share/perl5/PVE/API2/Network.pm

Based on this part of the Perl script:
run_command("/sbin/tc qdisc add dev $iface handle ffff: ingress");
run_command("/sbin/tc filter add dev $iface parent ffff: " .
"prio 50 basic " .
"police rate ${rate}bps burst ${burst}b mtu 64kb " .
"drop flowid :1");

I tried this:
tc qdisc add dev tap102i0 handle ffff: ingress
tc filter add dev tap102i0 parent ffff: prio 50 basic police rate 160Mbit burst 1Mb mtu 64kb drop flowid :1

This capped the outgoing traffic as well, but the resulting rate seems wrong:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 47.5 MBytes 39.9 Mbits/sec 2445 sender
[ 4] 0.00-10.00 sec 47.1 MBytes 39.5 Mbits/sec receiver
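
I will also retry with the byte-based units the Perl code uses (20971520 bytes/s and a 1MB burst), in case my "160Mbit" value is not interpreted the same way. Something like this (untested, just my reading of the script):

# remove my manual ingress rules, then re-add them with byte-based units
tc qdisc del dev tap102i0 handle ffff: ingress
tc qdisc add dev tap102i0 handle ffff: ingress
tc filter add dev tap102i0 parent ffff: prio 50 basic police rate 20971520bps burst 1048576b mtu 64kb drop flowid :1
# verify the filter is actually attached
tc filter ls dev tap102i0 parent ffff: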

pveversion:
proxmox-ve: 4.2-58 (running kernel: 4.4.13-2-pve)
pve-manager: 4.2-17 (running version: 4.2-17/e1400248)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.4.10-1-pve: 4.4.10-54
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-43
qemu-server: 4.0-85
pve-firmware: 1.1-8
libpve-common-perl: 4.0-71
libpve-access-control: 4.0-18
libpve-storage-perl: 4.0-56
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6-1
pve-container: 1.0-71
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
openvswitch-switch: 2.5.0-1

Is this a bug?
Does anyone have a working bandwidth limit for both incoming and outgoing traffic?
 
