VM cross nodes 10Giga

Discussion in 'Proxmox VE: Networking and Firewall' started by Irek Zayniev, Feb 11, 2019.

  1. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    Hello!
    Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction.

    What we have:
    VMs: Debian 9.7 with virtIO network interfaces, 16 vCPU (host type) and 32 GB RAM.
    VM to VM on the same node, on the same vmbr: about 12 Gbit/s.
    Host to host through the interface that backs the vmbr: about 12 Gbit/s.
    VM to VM across different nodes: about 3 Gbit/s.


    What is wrong?
     
  2. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,286
    Likes Received:
    369
    How do you benchmark exactly?

    Results from host to host?

    And please post your:
    > pveversion -v

    And your hardware details and network settings.
     
  3. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    iperf without additional options.
    Host to host is the same, about 12 Gbit/s.
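    For reference, a minimal sketch of that kind of run with iperf2, assuming default options and a placeholder server address:

    # on the receiving side: start an iperf2 server (default TCP port 5001)
    iperf -s
    # on the sending side: single TCP stream, default 10 second run
    iperf -c <server-ip>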

    pveversion -v

    proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)

    pve-manager: 5.3-9 (running version: 5.3-9/ba817b29)

    pve-kernel-4.15: 5.3-2

    pve-kernel-4.15.18-11-pve: 4.15.18-33

    pve-kernel-4.15.18-10-pve: 4.15.18-32

    pve-kernel-4.15.17-3-pve: 4.15.17-14

    ceph: 12.2.10-pve1

    corosync: 2.4.4-pve1

    criu: 2.11.1-1~bpo90

    glusterfs-client: 3.8.8-1

    ksm-control-daemon: not correctly installed

    libjs-extjs: 6.0.1-2

    libpve-access-control: 5.1-3

    libpve-apiclient-perl: 2.0-5

    libpve-common-perl: 5.0-45

    libpve-guest-common-perl: 2.0-20

    libpve-http-server-perl: 2.0-11

    libpve-storage-perl: 5.0-37

    libqb0: 1.0.3-1~bpo9

    lvm2: 2.02.168-pve6

    lxc-pve: 3.1.0-2

    lxcfs: 3.0.2-2

    novnc-pve: 1.0.0-2

    proxmox-widget-toolkit: 1.0-22

    pve-cluster: 5.0-33

    pve-container: 2.0-34

    pve-docs: 5.3-2

    pve-edk2-firmware: 1.20181023-1

    pve-firewall: 3.0-17

    pve-firmware: 2.0-6

    pve-ha-manager: 2.0-6

    pve-i18n: 1.0-9

    pve-libspice-server1: 0.14.1-2

    pve-qemu-kvm: 2.12.1-1

    pve-xtermjs: 3.10.1-1

    qemu-server: 5.0-46

    smartmontools: 6.5+svn4324-1

    spiceterm: 3.0-5

    vncterm: 1.5-3
     
  4. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    ethtool ens8f1

    Settings for ens8f1:

    Supported ports: [ FIBRE ]

    Supported link modes: 1000baseT/Full

    10000baseT/Full

    Supported pause frame use: Symmetric Receive-only

    Supports auto-negotiation: No

    Advertised link modes: 10000baseT/Full

    Advertised pause frame use: No

    Advertised auto-negotiation: No

    Speed: 10000Mb/s

    Duplex: Full

    Port: FIBRE

    PHYAD: 1

    Transceiver: internal

    Auto-negotiation: off

    Supports Wake-on: d

    Wake-on: d

    Current message level: 0x00000000 (0)



    Link detected: yes
     
  5. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    How are your VMs configured (qm config <vmid>)?
     
  6. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    qm config 127

    bootdisk: scsi0
    cores: 16
    cpu: host
    memory: 32768
    name: cbDataNode1
    net0: virtio=AE:94:23:09:34:53,bridge=vmbr0
    net1: virtio=7A:40:3B:74:28:A7,bridge=vmbr3
    numa: 0
    ostype: l26
    scsi0: VMs_vm:vm-127-disk-0,size=64G
    scsihw: virtio-scsi-pci
    smbios1: uuid=67ed6a2b-e8dd-407c-bb7f-6199d69664c0
    sockets: 1
    vmgenid: abf3ad68-5459-4743-b8e4-64d1a17315f4

    ifconfig vmbr3
    vmbr3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
    inet 10.200.201.80 netmask 255.255.252.0 broadcast 10.200.203.255
    inet6 fe80::202:c9ff:fe53:5daa prefixlen 64 scopeid 0x20<link>
    ether 00:02:c9:53:5d:aa txqueuelen 1000 (Ethernet)
    RX packets 258950695 bytes 1019629223710 (949.6 GiB)
    RX errors 0 dropped 763462 overruns 0 frame 0
    TX packets 210040479 bytes 1241068828835 (1.1 TiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    In /etc/network/interfaces:
    auto vmbr3
    iface vmbr3 inet static
    address 10.200.201.80
    netmask 255.255.252.0
    bridge_ports ens8
    bridge_stp off
    bridge_fd 0
    pre-up ip link set ens8 mtu 9000

    ifconfig ens8
    ens8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000

    ether 00:02:c9:53:5d:aa txqueuelen 1000 (Ethernet)
    RX packets 337445531 bytes 1073919215988 (1000.1 GiB)
    RX errors 0 dropped 109355 overruns 109355 frame 0
    TX packets 378711760 bytes 1274689718263 (1.1 TiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    ethtool ens8
    Settings for ens8:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes: 10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000014 (20)
    link ifdown
    Link detected: yes
     
  7. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    How is the opposite side configured?

    The overruns usually occur when the RX buffer on the NIC can't be drained quickly enough by the kernel.
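    As a rough sketch of how to check that on the node (ethtool on the bridge port; interface name taken from the output above):

    # show the NIC's current and maximum RX/TX ring buffer sizes
    ethtool -g ens8
    # watch whether the drop/overrun counters keep growing during a test
    ip -s link show ens8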
     
  8. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    Yes, that can be the case, if it is not IPv6.
    It is the Proxmox kernel. What should I check? What should I do?
     
  9. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    Shouldn't make a difference.

    Question or statement? You can always install a different kernel for reference. Our kernel is based on Ubuntu's and we put some patches on top.
     
  10. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    Sure, we are using the kernel with your patches:
    4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64

    I did check buffers

    ethtool -g ens8
    Ring parameters for ens8:
    Pre-set maximums:
    RX: 8192
    RX Mini: 0
    RX Jumbo: 0
    TX: 8192
    Current hardware settings:
    RX: 1024
    RX Mini: 0
    RX Jumbo: 0
    TX: 1024

    What else can be an issue?
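    Since the current rings (1024) are well below the pre-set maximum (8192), one hedged thing to try is enlarging them; the values below are an assumption, not a recommendation for this specific NIC:

    # grow the RX/TX rings towards the hardware maximum (may briefly reset the link)
    ethtool -G ens8 rx 8192 tx 8192
    # to keep it across reboots, add it like the existing MTU pre-up line in /etc/network/interfaces
    # pre-up ethtool -G ens8 rx 8192 tx 8192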
     
  11. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    Do you have the firewall activated? What is the exact iperf command used?

    Depending on the CPU, you should get more than 12 Gbit/s when running iperf from VM <-> VM on the same node.
     
  12. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    No firewall.
    Host CPUs are 4x Xeon E7 48xxx, four sockets in total.
    How can it be more than 12 if the NIC is 10 Gbit/s?
    iperf -c ...
     
  13. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    Please provide the full information; looking from the outside, it is just a black box to me.

    And I was speaking about the traffic between two VMs on the same node, where no physical interface is involved, so it is CPU bound. Not to mention achieving 12 Gbit/s with a 10 GbE NIC. :cool:
     
  14. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    What additional information do you need?
    Please ask.
     
  15. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    Well.

    If you post things like this, then please post the full version, as it leaves anyone clueless as to which CPU is in use exactly.
     
  16. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    CPU is not an issue, we have 40 cores per node. Neither is RAM; we have 512 GB to 1 TB per node. We are testing with one VM per node. Inside the VM (16 vCPU, 32 GB RAM) only iperf is running.
     
  17. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    iperf by default runs single-threaded, so the CPU frequency plays a big role here.
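    For comparison, a minimal multi-stream sketch with iperf2 (stream count, duration and server address are assumptions):

    # server side
    iperf -s
    # client side: 4 parallel TCP streams for 30 seconds
    iperf -c <server-ip> -P 4 -t 30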
     
  18. Irek Zayniev

    Irek Zayniev New Member

    Joined:
    May 29, 2018
    Messages:
    14
    Likes Received:
    0
    We are using the host CPU type in the VMs and run the same test (same iperf parameters) from the hosts.
    Why does the VM show this?
    ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
    inet 10.202.200.160 netmask 255.255.255.0 broadcast 10.202.200.255
    inet6 fe80::7840:3bff:fe74:28a7 prefixlen 64 scopeid 0x20<link>
    ether 7a:40:3b:74:28:a7 txqueuelen 1000 (Ethernet)
    RX packets 29487420 bytes 12148370464 (11.3 GiB)
    RX errors 0 dropped 114576 overruns 0 frame 0
    TX packets 12600969 bytes 61210080719 (57.0 GiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
     
  19. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,899
    Likes Received:
    163
    The dropped packet counter shows packets that were received but not intended for this interface. Play with the iperf options (man iperf) and the interface settings and see if you can push more bandwidth through.

    EDIT: You may also try changing the CPU type of the VMs and, since you use a NUMA system, activating NUMA as well.
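    A minimal sketch of what that could look like for VM 127 from the config above (the socket/core split is an assumption; the VM needs a restart for the change to apply):

    # expose two virtual sockets (16 vCPU total) and enable NUMA for the guest
    qm set 127 --sockets 2 --cores 8 --numa 1
    # optionally try a CPU type other than "host", e.g. the default kvm64
    qm set 127 --cpu kvm64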
     
    #19 Alwin, Feb 14, 2019 at 14:48
    Last edited: Feb 14, 2019 at 16:24