Slow performance between PVE and VM/CT

ubersk

New Member
Dec 16, 2022
A few months after installation I am seeing degraded bandwidth between the PVE host and both a Windows VM and a CT:
1. After installing and configuring PVE and a CT (Linux, Ubuntu) I tested bandwidth with iperf3 and got about 95 Gbit/s.
After a few months I only get 60-70 Gbit/s:
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 7.92 GBytes 68.0 Gbits/sec 0 689 KBytes
[ 5] 1.00-2.00 sec 8.24 GBytes 70.8 Gbits/sec 0 689 KBytes
[ 5] 2.00-3.00 sec 7.65 GBytes 65.7 Gbits/sec 0 724 KBytes
[ 5] 3.00-4.00 sec 7.53 GBytes 64.7 Gbits/sec 0 724 KBytes
[ 5] 4.00-5.00 sec 7.98 GBytes 68.6 Gbits/sec 0 724 KBytes
[ 5] 5.00-6.00 sec 7.29 GBytes 62.6 Gbits/sec 0 724 KBytes
[ 5] 6.00-7.00 sec 7.82 GBytes 67.2 Gbits/sec 0 724 KBytes
[ 5] 7.00-8.00 sec 8.32 GBytes 71.5 Gbits/sec 0 724 KBytes
[ 5] 8.00-9.00 sec 7.37 GBytes 63.3 Gbits/sec 0 807 KBytes
[ 5] 9.00-10.00 sec 7.29 GBytes 62.6 Gbits/sec 0 807 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 77.4 GBytes 66.5 Gbits/sec 0 sender
[ 5] 0.00-10.04 sec 77.4 GBytes 66.2 Gbits/sec receiver

iperf Done.
root@pve2:~#
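The test setup was simply a default single-stream iperf3 run, something like this (the exact invocation and the CT's address are placeholders, only the output above is from the real run):

# inside the CT: start the iperf3 server
iperf3 -s
# on the PVE host: default 10-second, single-stream TCP test against the CT
iperf3 -c <CT-IP>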
What happened? And what is the maximum bandwidth in the first place?
2. On a Windows 11 VM, after installing it and using a VirtIO NIC, I got about 95 Gbit/s in iperf3 tests in all directions.
After a few months it is down to about 19 Gbit/s:
Accepted connection from 10.10.19.2, port 45550
[ 5] local 10.10.19.5 port 5201 connected to 10.10.19.2 port 45552
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 2.11 GBytes 18.2 Gbits/sec
[ 5] 1.00-2.00 sec 1.67 GBytes 14.3 Gbits/sec
[ 5] 2.00-3.00 sec 2.27 GBytes 19.5 Gbits/sec
[ 5] 3.00-4.00 sec 2.28 GBytes 19.6 Gbits/sec
[ 5] 4.00-5.00 sec 2.23 GBytes 19.1 Gbits/sec
[ 5] 5.00-6.00 sec 1.68 GBytes 14.4 Gbits/sec
[ 5] 6.00-7.00 sec 2.32 GBytes 20.0 Gbits/sec
[ 5] 7.00-8.00 sec 2.27 GBytes 19.5 Gbits/sec
[ 5] 8.00-9.00 sec 2.29 GBytes 19.7 Gbits/sec
[ 5] 9.00-10.00 sec 2.30 GBytes 19.7 Gbits/sec
[ 5] 10.00-10.04 sec 91.2 MBytes 19.9 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.04 sec 21.5 GBytes 18.4 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201


What happened and how can I fix it? I also can't find any information about
what the maximum throughput inside a Proxmox system is, between the PVE host and a VM and between VMs on the same PVE host.
 
Did the load on the server increase? Passing network packets on the host itself is basically just limited by the memory and CPU speed. So if the server is doing a lot more, the performance might go down as the CPU & memory resources are used to run more guests for example.
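One quick way to check is to watch the host CPU while a test is running; these are just standard Linux tools, shown as a sketch:

# on the PVE host, in one shell: report CPU and memory usage every second
vmstat 1
# in a second shell: rerun the test while vmstat is running (guest IP is a placeholder)
iperf3 -c <guest-IP> -t 30
# if the "id" (idle) column drops close to 0 during the test, the host CPU is the bottleneck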
 
Yes, of course, I agree that the load has grown; there are already 5 guest machines. But is the bandwidth of the virtual interfaces dynamic, with no limit? I suppose there is a limit, something like 95 Gbit/s for the entire system as a whole?
 
But then I don't understand why the throughput to the LXC container fell by only 20-30%, while to a regular VM it fell by more than 4 times.
 
Maybe there is a matrix or specification where the limit is clearly documented, e.g. that it is halved every time you start a new VM?
 
My hardware:
CPU: 12th Gen Intel(R) Core(TM) i7-12700K (current speed 3600 MHz, max speed 8500 MHz, external clock 100 MHz, 1.1 V)
Motherboard: Gigabyte Technology Co., Ltd. Z690 AORUS MASTER (Intel Z690 chipset)
RAM: 128 GB DDR5 4600 MHz
PCIe: 1 x PCIe 5.0/4.0/3.0 x16 slot (used by the 10GbE NIC), 1 x PCIe 3.0 x16 slot (supports x4 mode, empty), 1 x PCIe 3.0 x4 slot (empty), 2 x PCIe 3.0 x1 slots (empty)

There's a lot of bandwidth here, and I don't think 5 VMs and 2 CTs can reduce it that much.
 

Attachment: pve.png (228.4 KB)
Would you mind sharing your VM config? qm config $VMID
Maybe there's something we can tweak to improve performance
 
root@pve2:~# qm config 212
audio0: device=ich9-intel-hda,driver=spice
boot: order=sata0;ide2
cores: 2
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=7.1.0,ctime=1671176369
net0: virtio=72:F3:9B:F0:06:xx,bridge=vmbr1,firewall=1
net1: virtio=52:25:A6:F2:00:xx,bridge=vmbr14,firewall=1
net2: virtio=3A:2B:BE:23:09:xx,bridge=vmbr16,firewall=1
net3: virtio=5E:13:F7:E2:19:xx,bridge=vmbr19,firewall=1
numa: 1
ostype: l26
sata0: truenas2:212/vm-212-disk-0.raw,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=4a3dbee1-8387-4614-a286-3fbb69ab2d66
sockets: 1
vmgenid: ee389a9f-8bf5-44ea-94eb-ed4255ff187e
root@pve2:~#
 
Personally, I have never seen a VM reach 90 Gbit/s with only 1 queue.

Multi-queue should be enabled on the NIC so that packets are dispatched across the VM's cores, and multiple connections should be used:
with iperf2 via parallel connections, or with iperf3 by launching multiple iperf3 processes (as iperf3 is not multithreaded).
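For example, with the VM from the config above (VMID 212), adding queues to net0 and running a parallel test could look roughly like this; the queue count, guest interface name and test IP/ports are assumptions:

# on the PVE host: re-add net0 with virtio multiqueue (same MAC/bridge/firewall as before)
# rule of thumb: one queue per vCPU, and this VM has cores: 2
qm set 212 --net0 virtio=72:F3:9B:F0:06:xx,bridge=vmbr1,firewall=1,queues=2

# inside a Linux guest: make the driver actually use the extra queues
ethtool -L eth0 combined 2

# run several iperf3 server processes on different ports ...
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# ... and several clients in parallel, then add up the per-port results
iperf3 -c 10.10.19.5 -p 5201 -t 10 &
iperf3 -c 10.10.19.5 -p 5202 -t 10 &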
 
It could also be that a kernel or microcode update slowed down your CPU in the meantime by introducing some new mitigation ("speed" on a local bridge, where packets never actually leave the system, is artificial and limited by the memory/cache/CPU speed of your node). That would also explain why the VM is hit more: VMs usually take a bigger hit because of the additional isolation/context switches and the potential for a mismatch between host and guest kernel causing additional overhead.
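To check which mitigations are currently active on the host (and compare against the state when the system was still fast, or after a kernel change), something like this can be used; these are generic Linux commands, nothing Proxmox-specific:

# list the mitigation status for every known CPU vulnerability
grep . /sys/devices/system/cpu/vulnerabilities/*

# show the running kernel version and boot command line (look for any mitigations=... options)
uname -r
cat /proc/cmdline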
 
