The network rate limit is not working

powersupport

Hi,

We have a VM whose outgoing traffic is around 140 Mbps, which is much higher than our limit. We have set many values for the network rate limit, such as 12.5, 5, and 0.5, but none of them seems to work. Can anyone advise on this? We tried different NIC models (VirtIO, Intel E1000), all with the same result.



Code:
qm config 119
agent: 1
boot: c
bootdisk: scsi0
cipassword: **********
ciuser: root
cores: 1
ide0: local-lvm:vm-119-cloudinit,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: ip
memory: 1024
name: test.com
net0: e1000=BA:D5:26:19:31:2E,bridge=vmbr0,rate=0.5
numa: 1
ostype: l26
parent: AsmCldo-new
scsi0: local-lvm:vm-119-disk-0,discard=on,size=20G
scsihw: virtio-scsi-pci
smbios1: uuid=7e077122-cb73-4d4d-9759-744cd8f6ff7c
sockets: 1
vmgenid: aae877db-03f9-46e7-a3a3-b1e8ed3487dc
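
(For reference, the net0 rate can also be set from the CLI, equivalent to the GUI field; a sketch reusing this VM's values - the rate unit is MB/s:)

Code:
# limit net0 to 0.5 MB/s (megabytes per second)
qm set 119 --net0 e1000=BA:D5:26:19:31:2E,bridge=vmbr0,rate=0.5
# remove the limit again by setting net0 without the rate option
qm set 119 --net0 e1000=BA:D5:26:19:31:2E,bridge=vmbr0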

Thank you
 

It works as expected here. Note that the value in the config/UI is MB/s (bytes), whereas traffic is usually shown in Mb/s (bits) - but not in the RRD graphs, which use bytes as well. ;)

You can verify that the limit is in place with "tc" (replace the VMID with the one you are interested in):

Code:
$ tc qdisc | grep tap121212
qdisc htb 1: dev tap121212i0 root refcnt 2 r2q 10 default 0x1 direct_packets_stat 0 direct_qlen 1000
qdisc ingress ffff: dev tap121212i0 parent ffff:fff1 ----------------
$ tc class show dev tap121212i0
class htb 1:1 root prio 0 rate 4194Kbit ceil 4194Kbit burst 1Mb cburst 1599b

The rate of 4194Kbit is roughly 0.5 MB/s, and it should change if you modify the rate in the VM config via the UI.
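
The conversion can be sanity-checked with shell arithmetic (assuming the limit is interpreted as MiB, i.e. 0.5 MB/s = 512 KiB/s):

Code:
# 512 KiB/s * 8 bits/byte = 4194304 bit/s ~= 4194 Kbit/s, matching the tc output
$ echo $((512 * 1024 * 8 / 1000))
4194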

I also did a test with iperf in both directions; it is properly throttled at around 4 Mb (0.5 MB) per second in each direction, and the graph in the UI reflects that (maxing out at ~500 kB/s), even when using multiple connections.
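
For reference, the test was along these lines (a sketch using iperf3; 10.0.0.5 stands in for the VM's address):

Code:
# on the VM: run the server
iperf3 -s
# on another host: send towards the VM, then reverse direction with -R;
# -P 4 uses four parallel connections
iperf3 -c 10.0.0.5
iperf3 -c 10.0.0.5 -R
iperf3 -c 10.0.0.5 -P 4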
 
Hi,

Could it be a flaw in PVE version 8? Despite setting lower values, the usage consistently reaches around 140 MB/s, regardless of the values we configure.

@fabian
 
Please do what I wrote above and check that the "tc" values are set as expected (and please post the config values and the tc output!).
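
Something like the following, substituting your own VMID (shown here with VMID 119):

Code:
# network lines from the VM config
qm config 119 | grep ^net
# qdisc and class currently applied to the VM's tap interface
tc qdisc show dev tap119i0
tc class show dev tap119i0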
 
Hello, I am experiencing a similar issue, and since the OP has not posted any data, I can help with that.

After setting a 100 MB/s limit, throttling appears to take effect, but the actual average throughput is around 30 MB/s (270 Mbps), which does not seem to match the output below.

Code:
root@pve:~# tc class show dev tap106i0
class htb 1:1 root prio 0 rate 838860Kbit ceil 838860Kbit burst 1Mb cburst 1468b

After setting a 150 MB/s limit, throttling appears to no longer take effect, and throughput peaks at 980 Mbps while transferring a file.

Code:
root@pve:~# tc class show dev tap106i0
class htb 1:1 root prio 0 rate 1090Mbit ceil 1090Mbit burst 1048397b cburst 1363b

An additional consideration in my case is that I am transferring a large file between a physical Windows 10 PC and a Windows 10 VM. Perhaps sending data between VMs behaves differently.
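
One back-of-the-envelope check for the 150 MB/s case (my own arithmetic, not verified on this setup): the configured ceil exceeds what a gigabit link can carry, so on 1 Gbit/s hardware the wire itself would saturate at roughly 980 Mbps before the limit is ever reached:

Code:
# 150 MB/s expressed in Mbit/s (decimal MB; using MiB it is ~1258 Mbit/s)
$ echo $((150 * 8))
1200
# both 1200 Mbit/s and the 1090 Mbit ceil from tc exceed a 1 Gbit/s link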

System info:

Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-7

VM network adapter: virtio
 
