Network speed in a Virtual Machine

zebr · New Member · Apr 18, 2023
Hello! I have installed Proxmox VE.
The server has two network cards, each with 2x10 Gb/s ports.
One link was taken from each card and the two were bonded together.
When testing with iperf3 on the host, the bond reaches 17-18 Gb/s. But from a VM installed on this server, iperf3 shows at most 2x5 Gb/s. How can I get the channel fully loaded?
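For context, a Proxmox bond-plus-bridge setup in /etc/network/interfaces would look roughly like the sketch below. This is for illustration only: the thread never shows the host network config, so the slave names eno1/eno2, the 802.3ad mode, and the layer3+4 hash policy are all assumptions.

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        # hash on IP+port so parallel flows can spread across both slaves
        bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # VLAN-aware, since the VM NICs below use tag=246
        bridge-vlan-aware yes
        bridge-vids 2-4094

Note that even with layer3+4 hashing, each individual TCP stream maps to exactly one slave, so a single iperf3 stream can never exceed one 10 Gb/s link; only parallel streams (-P) can use both.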
 
Hi,

Can you share the results of the iperf3 tests, the config of the VM you tested (qm config <VMID>), and the output of pveversion -v as well?
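For reference, such a test between the two VMs could be run like this (a sketch; the stream count is arbitrary, and 10.14.246.101 is the first VM's address from the configs below):

# on the receiving VM
iperf3 -s
# on the sending VM, with several parallel streams
iperf3 -c 10.14.246.101 -P 8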
 
root@pnode15:~# qm config 552
acpi: 1
agent: 1
args:
autostart: 0
balloon: 0
bios: seabios
boot: cnd
bootdisk: scsi0
ciuser: devops
cores: 1
cpu: kvm64
cpuunits: 1000
description: %0A %D0%9E%D1%82%D0%B2%D0%B5%D1%82%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9%3A Vahromeev %0A%0A %D0%9B%D0%BE%D0%B3%D0%B8%D0%BD/%D0%9F%D0%B0%D1%80%D0%BE%D0%BB%D1%8C%3A %0A antana/samara123 %0A%0A %D0%9E%D0%BF%D0%B8%D1%81%D0%B0%D0%BD%D0%B8%D0%B5%3A
ide2: CEPH-SSD2:vm-552-cloudinit,media=cdrom,size=4M
ipconfig0: ip=10.14.246.101/24,gw=10.14.246.1
kvm: 1
machine: q35
memory: 2048
name: forvahromeev1.10.14.246.101
nameserver: 172.21.22.10 172.21.22.11 8.8.8.8
net0: virtio=B2:9C:0D:EC:16:64,bridge=vmbr1,tag=246
onboot: 1
ostype: l26
parent: afcr
scsi0: CEPH-SSD2:vm-552-disk-0,aio=threads,cache=writeback,discard=on,iothread=1,size=55G,ssd=1
scsihw: virtio-scsi-single
searchdomain: cism-ms.ru cism-ms.develop cism-ms.stage cism-ms.s3
serial0: socket
smbios1: uuid=6a9d8f4a-1be0-48d6-b5e7-3b85cd464716
sockets: 2
sshkeys: ssh-rsa%20AAAAB3NzaC1yc2EAAAADAQABAAABgQDBjzRt%2Fb5Xe%2FtgQS2rvOBXOSBq1hychcnbz6G4m9Ps6hQXCxLA0hcrzPIRGazeWEslqsBynSm4fVJC6zAnExEd7KsNxS5gsMxmcHsghuU6%2FIA62tP8w8tXKEWaCGMQyfcUO%2FMIrdEjAg8txl3FIxdlcYwBTLW9nJggOmUn9w1YOA6ECNBDUbTwZC62yomJhQoAK0W%2BuVkKSLTqRIvd0oZJEF%2B0dtzBrhhe7cjR6fuoLpkB1%2FQ9bQImVfAxEiiExhFWFMxcyf4SGxpmsbKI4rJ3eBvsMmhrX76p1bYX4fKGiBaqNyXqThYWYybXfDfaITQR87SIrVt4U4NzS79ZFfQ142VPs%2BYISiy%2F%2B%2FVKZ1NjHo1fRZJSqBtsWJCsvtuM6C2%2BdRZ0JqwwMlHKLNhmerYMLJMQxkxdB5jRxafC%2B3T0aFNIsFIa7MdC8i3WQBk5z5huY5pslkPWnmKTfCi3gLjWNhfW9xEgKAww6hGrZR%2FzlQXZQrmQ2LGspzXFngd9tmk%3D%20%20linux%40key%0Assh-rsa%20AAAAB3NzaC1yc2EAAAABJQAAAQEAgBBDmFt%2FseAl7LqaJLETe%2BmGKJkwmWGe1jw1i2EwL3tq1dsHYayEUwHH9%2BBUH4hn3Vp%2FntMtAsibc9EbbihCkE4erDf7gJUWgXD6QPboqpcPxFDSPA7pDsT44y5KLYJ90tbWwg76p%2Ftouh4kjC0azNHr7bW96q6S1HYHTb0NMd%2BIvWYHPBwSqeeo1Iw1juCUsLE9PODyIqBrOsRNO4p%2Bapu11yuyohnZCdOM7chN4YtzLrSoAxWRxZpd5rlcHXboj3N763efRFrhD7Z8XkWp%2B3C3%2F5eKD%2BJkGyZF9Qrr718M8cYCrOyDUCBqGzTLqiIxYItpIy3xbMMLlmBCAVZRqQ%3D%3D%20putty%40key
tablet: 0
template: 0
vga: std
vmgenid: 7a88bf0a-00cd-49b6-b5ad-098c3341d4c7
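One detail worth flagging in this config: net0 has no queues= option, and with cores: 1 / sockets: 2 the VM has only two vCPUs, so a single queue (and thus a single vCPU) services all virtio-net traffic. As a sketch, multiqueue matching the two vCPUs could be enabled as below (the queue count of 2 is an assumption):

qm set 552 --net0 virtio=B2:9C:0D:EC:16:64,bridge=vmbr1,tag=246,queues=2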



root@pnode15:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-helper: 7.3-2
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph: 15.2.17-pve1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.4
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

root@pnode06:~# qm config 684
acpi: 1
agent: 1
args:
autostart: 0
balloon: 0
bios: seabios
boot: cnd
bootdisk: scsi0
ciuser: devops
cores: 1
cpu: kvm64
cpuunits: 1000
description: %0A %D0%9E%D1%82%D0%B2%D0%B5%D1%82%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9%3A Vahromeev %0A%0A %D0%9B%D0%BE%D0%B3%D0%B8%D0%BD/%D0%9F%D0%B0%D1%80%D0%BE%D0%BB%D1%8C%3A %0A antana/samara123 %0A%0A %D0%9E%D0%BF%D0%B8%D1%81%D0%B0%D0%BD%D0%B8%D0%B5%3A
ide2: CEPH-SSD2:vm-684-cloudinit,media=cdrom,size=4M
ipconfig0: ip=10.14.246.102/24,gw=10.14.246.1
kvm: 1
machine: q35
memory: 2048
name: forvahromeev2.10.14.246.102
nameserver: 172.21.22.10 172.21.22.11 8.8.8.8
net0: virtio=D2:01:DA:37:D4:62,bridge=vmbr1,tag=246
onboot: 1
ostype: l26
parent: afcr
scsi0: CEPH-SSD2:vm-684-disk-0,aio=threads,cache=writeback,discard=on,iothread=1,size=55G,ssd=1
scsihw: virtio-scsi-single
searchdomain: cism-ms.ru cism-ms.develop cism-ms.stage cism-ms.s3
serial0: socket
smbios1: uuid=d0492794-752e-4c6d-bcc9-e9530071ffba
sockets: 2
sshkeys: ssh-rsa%20AAAAB3NzaC1yc2EAAAADAQABAAABgQDBjzRt%2Fb5Xe%2FtgQS2rvOBXOSBq1hychcnbz6G4m9Ps6hQXCxLA0hcrzPIRGazeWEslqsBynSm4fVJC6zAnExEd7KsNxS5gsMxmcHsghuU6%2FIA62tP8w8tXKEWaCGMQyfcUO%2FMIrdEjAg8txl3FIxdlcYwBTLW9nJggOmUn9w1YOA6ECNBDUbTwZC62yomJhQoAK0W%2BuVkKSLTqRIvd0oZJEF%2B0dtzBrhhe7cjR6fuoLpkB1%2FQ9bQImVfAxEiiExhFWFMxcyf4SGxpmsbKI4rJ3eBvsMmhrX76p1bYX4fKGiBaqNyXqThYWYybXfDfaITQR87SIrVt4U4NzS79ZFfQ142VPs%2BYISiy%2F%2B%2FVKZ1NjHo1fRZJSqBtsWJCsvtuM6C2%2BdRZ0JqwwMlHKLNhmerYMLJMQxkxdB5jRxafC%2B3T0aFNIsFIa7MdC8i3WQBk5z5huY5pslkPWnmKTfCi3gLjWNhfW9xEgKAww6hGrZR%2FzlQXZQrmQ2LGspzXFngd9tmk%3D%20%20linux%40key%0Assh-rsa%20AAAAB3NzaC1yc2EAAAABJQAAAQEAgBBDmFt%2FseAl7LqaJLETe%2BmGKJkwmWGe1jw1i2EwL3tq1dsHYayEUwHH9%2BBUH4hn3Vp%2FntMtAsibc9EbbihCkE4erDf7gJUWgXD6QPboqpcPxFDSPA7pDsT44y5KLYJ90tbWwg76p%2Ftouh4kjC0azNHr7bW96q6S1HYHTb0NMd%2BIvWYHPBwSqeeo1Iw1juCUsLE9PODyIqBrOsRNO4p%2Bapu11yuyohnZCdOM7chN4YtzLrSoAxWRxZpd5rlcHXboj3N763efRFrhD7Z8XkWp%2B3C3%2F5eKD%2BJkGyZF9Qrr718M8cYCrOyDUCBqGzTLqiIxYItpIy3xbMMLlmBCAVZRqQ%3D%3D%20putty%40key
tablet: 0
template: 0
vga: std
vmgenid: 76cfc5ae-eac8-4340-ad0a-e97588d62958


root@pnode06:~# pveversion -v
(output identical to the pnode15 output above)
 
iperf3 -c 10.14.246.101 -P 50
[SUM] 0.00-10.00 sec 12.0 GBytes 10.3 Gbits/sec 29571 sender
[SUM] 0.00-10.01 sec 11.9 GBytes 10.2 Gbits/sec receiver
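To see how this test is actually spread across the bond, the per-slave counters can be watched on the host during the run (a sketch; bond0 and the slave names eno1/eno2 are assumed):

cat /proc/net/bonding/bond0    # shows bond mode, hash policy, active slaves
ip -s link show eno1           # per-slave RX/TX byte counters
ip -s link show eno2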
 
If you leave only one link, 10 Gb/s goes through it. And if you enable both, each carries about 5 Gb/s.
 
The traffic is balanced well between the switches, but the total never rises above 10 Gb/s; it gives at most 2x5 Gb/s.
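One thing to verify here (a suggestion, not something covered in the thread): the bond's transmit hash policy on both the host and the switch. With the default layer2 policy, all traffic between a single pair of MAC addresses hashes onto one slave; layer3+4 hashes on IP and port, letting parallel streams spread out. On the PVE host (bond0 is an assumed name):

cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/xmit_hash_policy

The switch side applies its own hash policy to the return traffic, so both ends have to distribute flows for the full 2x10 Gb/s to be reachable.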
 
