VM to VM across nodes on 10 Gbit

Irek Zayniev

Hello!
Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction.

What we have:
VMs running Debian 9.7 with VirtIO network interfaces, 16 vCPU (host CPU type), 32 GB RAM.
VM to VM on the same node, on the same vmbr: about 12 Gbit/s.
Host to host through the interface backing the vmbr: about 12 Gbit/s.
VM to VM across different nodes: about 3 Gbit/s.


What is wrong?
 
How do you benchmark exactly?

Results from host to host?

And please post your:
> pveversion -v

And your hardware details and network settings.
 
iperf without additional options.
Host to host is the same, 12 Gbit/s.
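For completeness, the whole test is nothing more than this, server on one side and client on the other (the target address below is only a placeholder):

# on the receiving host or VM
iperf -s
# on the sending host or VM
iperf -c 10.200.201.81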

pveversion -v

proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-9 (running version: 5.3-9/ba817b29)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-33
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.17-3-pve: 4.15.17-14
ceph: 12.2.10-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-45
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-37
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-2
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-34
pve-docs: 5.3-2
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 3.10.1-1
qemu-server: 5.0-46
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
ethtool ens8f1

Settings for ens8f1:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: No
    Advertised link modes:  10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000000 (0)
    Link detected: yes
 
qm config 127

bootdisk: scsi0
cores: 16
cpu: host
memory: 32768
name: cbDataNode1
net0: virtio=AE:94:23:09:34:53,bridge=vmbr0
net1: virtio=7A:40:3B:74:28:A7,bridge=vmbr3
numa: 0
ostype: l26
scsi0: VMs_vm:vm-127-disk-0,size=64G
scsihw: virtio-scsi-pci
smbios1: uuid=67ed6a2b-e8dd-407c-bb7f-6199d69664c0
sockets: 1
vmgenid: abf3ad68-5459-4743-b8e4-64d1a17315f4

vmbr3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.200.201.80 netmask 255.255.252.0 broadcast 10.200.203.255
inet6 fe80::202:c9ff:fe53:5daa prefixlen 64 scopeid 0x20<link>
ether 00:02:c9:53:5d:aa txqueuelen 1000 (Ethernet)
RX packets 258950695 bytes 1019629223710 (949.6 GiB)
RX errors 0 dropped 763462 overruns 0 frame 0
TX packets 210040479 bytes 1241068828835 (1.1 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

In the interfaces config (/etc/network/interfaces):

auto vmbr3
iface vmbr3 inet static
    address 10.200.201.80
    netmask 255.255.252.0
    bridge_ports ens8
    bridge_stp off
    bridge_fd 0
    pre-up ip link set ens8 mtu 9000
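To rule out an MTU mismatch somewhere on the path, a quick check is a non-fragmenting ping with a jumbo-sized payload (8972 = 9000 minus 28 bytes of IP and ICMP headers; the peer address is again just a placeholder):

ping -M do -s 8972 10.200.201.81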

ifconfig ens8
ens8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000

ether 00:02:c9:53:5d:aa txqueuelen 1000 (Ethernet)
RX packets 337445531 bytes 1073919215988 (1000.1 GiB)
RX errors 0 dropped 109355 overruns 109355 frame 0
TX packets 378711760 bytes 1274689718263 (1.1 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ethtool ens8

Settings for ens8:
    Supported ports: [ FIBRE ]
    Supported link modes:   10000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes:  10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000014 (20)
                           link ifdown
    Link detected: yes
 
How is the opposite side configured?

RX errors 0 dropped 109355 overruns 109355 frame 0
The overruns usually occur when the RX buffer on the NIC can't be drained quickly enough by the kernel.
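To see where exactly those drops get counted, the per-driver NIC statistics are usually more telling than ifconfig (counter names differ between drivers, so treat this as a starting point):

ethtool -S ens8 | grep -iE 'drop|discard|over'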
 
Sure, we are using the kernel with your patches:
4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64

I did check the buffers:

ethtool -g ens8
Ring parameters for ens8:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 0
TX: 8192
Current hardware settings:
RX: 1024
RX Mini: 0
RX Jumbo: 0
TX: 1024
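If raising them would help, I assume something like this would bump both rings to their pre-set maximum (the link will probably flap briefly when the rings are resized):

ethtool -G ens8 rx 8192 tx 8192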

What else can be an issue?
 
Do you have the firewall activated? What is the exact iperf command used?

Depending on the CPU, you should get more than 12 Gbit/s when running iperf from VM <-> VM on the same node.
 
No firewall.
The host CPUs are 4x Xeon E7 48xx, four sockets in total.
How can it be more than 12 if the NIC is 10 Gbit/s?
iperf -c ...
 
Please provide the full information; from the outside it is just a black box to me.

And I was speaking about traffic between two VMs on the same node, where no physical interface is involved, so it is CPU bound. Besides, you report achieving 12 Gbit/s with a 10 GbE NIC. :cool:
 
CPU is not an issue; we have 40 cores per node. Neither is RAM; we have 512 GB to 1 TB per node. We are testing with one VM per node, and inside the VM (16 vCPU, 32 GB RAM) only iperf is running.
 
We are using the host CPU type in the VMs and the same test (same iperf parameters) as on the hosts.
Why does the VM interface show this:
ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.202.200.160 netmask 255.255.255.0 broadcast 10.202.200.255
inet6 fe80::7840:3bff:fe74:28a7 prefixlen 64 scopeid 0x20<link>
ether 7a:40:3b:74:28:a7 txqueuelen 1000 (Ethernet)
RX packets 29487420 bytes 12148370464 (11.3 GiB)
RX errors 0 dropped 114576 overruns 0 frame 0
TX packets 12600969 bytes 61210080719 (57.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 
The dropped packet counter shows packets that were received but not intended for this interface. Play with iperf (man iperf) and the interface settings and see if you can push more bandwidth through.
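For example, several parallel streams over a longer interval usually tell you more than a single default stream; the parameters here are only a starting point, not a recommendation:

iperf -c <VM-IP> -P 4 -w 512k -t 30 -i 1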

EDIT: You may also try to change the CPU type of the VMs and, since you use a NUMA system, to activate NUMA as well.
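A rough sketch of enabling NUMA on the VM from above, plus virtio multiqueue as an extra, optional knob that is often worth trying for this kind of test (the queue count is only an example, and the guest has to bring up the matching queues, e.g. with ethtool -L):

qm set 127 --numa 1
qm set 127 --net1 virtio=7A:40:3B:74:28:A7,bridge=vmbr3,queues=8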
 