Network - High Frequency VM

Discussion in 'Proxmox VE: Networking and Firewall' started by Baader-IT, Dec 7, 2018.

  1. Baader-IT

    Baader-IT New Member
    Proxmox VE Subscriber

    Joined:
    Oct 29, 2018
    Messages:
    20
    Likes Received:
    0
    Hi,

    we want to build a high-frequency VM that receives roughly 300,000,000 incoming data packets per day.

    We currently have a big problem: there is significant packet loss (~1.5%, about 3,000,000 packets per day), even though the packets do arrive at the host (so the switch is not the problem).
    (We suspect it might be network buffering?)

    Info about the VM:
    - 64 GB RAM
    - 32 vCPUs (2 sockets, 16 cores)
    - NVMe disk

    What we have already done:
    - VM with virtio network device
    - Multiqueue = 8 (plus ethtool -L eth1 combined 8 inside the guest)

    1) Can anyone give me some hints on how to reduce the packet loss or improve the network performance for this VM?

    2) Is there a way to set Multiqueue higher than 8? I can only set 8...
    [root@vm ~]# ethtool -l eth1
    Channel parameters for eth1:
    Pre-set maximums:
    RX: 0
    TX: 0
    Other: 0
    Combined: 8
    Current hardware settings:
    RX: 0
    TX: 0
    Other: 0
    Combined: 8


    Greetings,
    Tobi
     
    #1 Baader-IT, Dec 7, 2018
    Last edited: Dec 7, 2018
  2. Baader-IT

    Baader-IT New Member
    Proxmox VE Subscriber

    Joined:
    Oct 29, 2018
    Messages:
    20
    Likes Received:
    0
    If I set the multiqueue value directly in my VM config file, I can set it to a higher value.

    net1: virtio=F6:55:F2:37:1B:11,bridge=vmbr10,queues=16


    Is there a way to change this via the GUI?
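
    (For reference, the same change should also work with qm set instead of editing the config file by hand, using the values from above; replace <vmid> with the VM ID:)
    Code:
    qm set <vmid> -net1 virtio=F6:55:F2:37:1B:11,bridge=vmbr10,queues=16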
     
  3. David Herselman

    David Herselman Active Member
    Proxmox VE Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    179
    Likes Received:
    38
    You could try the following:
    • Disable Hyper-Threading in the host's BIOS
    • Pin the vCPUs to cores on the same physical processor that the host NIC is attached to. Check the motherboard's block diagram for this
    • There was talk that it's better to assign CPUs (sockets) rather than cores to VMs, but I assume that relates to users defining VMs with e.g. 8 cores where they should rather assign 2 virtual CPUs with 4 cores each to better map to the physical host (see the config sketch below this list)
    • Increase the VM's NIC queue length to buffer bursts of packets. This increases latency but can avoid packet loss
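
    Regarding the sockets vs cores point: that is just the sockets/cores setting in the VM config. A rough sketch with illustrative values for a dual-socket host (adjust to your actual topology):
    Code:
    # /etc/pve/qemu-server/<vmid>.conf
    # instead of e.g.:
    #   sockets: 1
    #   cores: 8
    # rather mirror the physical layout:
    sockets: 2
    cores: 4
    numa: 1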

    Herewith notes on how to increase the virtual NIC transmit buffer for VMs:
    Code:
    Increasing tap interface buffer:
      pico /etc/udev/rules.d/60-tap.rules
        KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 25000"
      Changing existing tap interfaces:
        for dev in `ip a | grep tap | perl -pe 's/.*(tap.*?):.*/\1/'`; do ifconfig $dev txqueuelen 25000; done                                                          
      Changing receive buffer:
        pico /etc/sysctl.conf
          # Increase receive buffer queue
          net.core.netdev_max_backlog = 25000
        sysctl -p;
        sysctl -a | grep max_backlog;

    You can also consider increasing the physical NIC buffers:
    Code:
    /etc/rc.local:
    # Increase ring buffers (only within the first 10 minutes after boot):
    [ `cut -d. -f1 /proc/uptime` -le 600 ] && for f in eth0 eth1 eth2 eth3; do ethtool -G $f rx 4096 tx 4096; sleep 5; done
    Check current and max NIC ring buffer:
    ethtool -g eth0

    Get NIC statistics:
    ethtool -S eth0

    PS: Physical NICs typically have 64 queues which are spread equally across the cores, so you most probably don't have buffering problems on the host itself. You can check 'ifconfig' for drop or overrun counters.
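
    For example (interface names and exact counter names vary by driver):
    Code:
    # per-interface drop / overrun counters on the host
    ip -s link show eth0
    ifconfig eth0
    # per-queue / driver-level counters
    ethtool -S eth0 | grep -iE 'drop|miss|fifo'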
     
  4. David Herselman

    David Herselman Active Member
    Proxmox VE Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    179
    Likes Received:
    38
  5. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,233
    Likes Received:
    119
    What is the pps (packets per second) rate?

    If you don't need live migration, you could also try PCI passthrough.
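
    For reference, passing the NIC through would look roughly like this (the PCI address is only an example, and the host needs IOMMU enabled in the kernel command line, e.g. intel_iommu=on):
    Code:
    # find the NIC's PCI address on the host:
    lspci | grep -i ethernet
    # attach it to the VM (replace <vmid> and the address with your own):
    qm set <vmid> -hostpci0 01:00.0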
     
  6. Baader-IT

    Baader-IT New Member
    Proxmox VE Subscriber

    Joined:
    Oct 29, 2018
    Messages:
    20
    Likes Received:
    0
    We need live migration because of our HA strategy.

    The pps rate hasn't been analysed yet.
    Why do you need this info?
    I will analyse the pps rate with our network team.
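    (As a rough average that would be 300,000,000 / 86,400 s ≈ 3,500 pps, but I guess the bursts are what matter. A quick way to sample it inside the VM, assuming the interface is eth1 as above:)
    Code:
    while true; do
      r1=$(cat /sys/class/net/eth1/statistics/rx_packets); sleep 1
      r2=$(cat /sys/class/net/eth1/statistics/rx_packets)
      echo "$((r2 - r1)) packets/s received"
    done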

    Good info!
    We will try this ASAP and will give you feedback.
    Thanks!
     
  7. Baader-IT

    Baader-IT New Member
    Proxmox VE Subscriber

    Joined:
    Oct 29, 2018
    Messages:
    20
    Likes Received:
    0
    How can I pin the vCPUs to cores?
    I didn't find anything about this in the Proxmox documentation.
     
  8. David Herselman

    David Herselman Active Member
    Proxmox VE Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    179
    Likes Received:
    38
    You should define NUMA regions that match the physical host's topology and draw CPUs from those pools. The documentation is unfortunately very sparse (see the numaN options):
    https://pve.proxmox.com/wiki/Manual:_qm.conf
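
    For illustration, the guest NUMA topology entries in the VM config file look roughly like this (the values are made up for a dual-socket host and have to match your actual layout):
    Code:
    # /etc/pve/qemu-server/<vmid>.conf
    numa: 1
    numa0: cpus=0-7,hostnodes=0,memory=32768,policy=bind
    numa1: cpus=8-15,hostnodes=1,memory=32768,policy=bind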

    The following sample, which would most likely live somewhere like rc.local, then needs to run after the guests have started:
    Code:
    # Host cores available to regular guests:
    cpus='4-19,24-39';
    # Pin all KVM guest processes and their vhost network threads to that pool:
    for pid in `pidof kvm`; do
      taskset -a -cp $cpus $pid;
      for vhostpid in `pidof vhost-$pid`; do
        taskset -a -cp $cpus $vhostpid;
      done
    done
    # Keep the Ceph daemons in the same pool:
    for pid in `pidof ceph-fuse ceph-mon ceph-osd`; do
      taskset -a -cp $cpus $pid;
    done
    # Assign the reserved CPUs (0-3,20-23) to the 'syrex-sip' VM:
    for pid in `qm list | grep -P '\ssyrex-sip\s' | awk '{print $6}'`; do
      taskset -a -cp 0-3,20-23 $pid
      for vhostpid in `pidof vhost-$pid`; do
        taskset -a -cp $cpus $vhostpid;
      done
    done
     