Network - High Frequency VM

Baader-IT

Active Member
Oct 29, 2018
Hi,

we want to build a high-frequency VM that receives roughly 300,000,000 incoming data packets per day.

Now we have a big problem: there is significant packet loss (~1.5%, about 3,000,000 packets), even though the packets do arrive at the host (the switch is not the problem).
(We suspect it might be network buffering?!)

Info about the VM:
- 64 GB RAM
- 32 vCPUs (2 sockets, 16 cores each)
- NVMe disk

What we already did:
- VM with virtio network device
- Multiqueue = 8 (plus ethtool -L eth1 combined 8 inside the guest)

1) Can anyone give me some pointers on how to reduce the packet loss or improve the network performance of this VM?

2) Is there a way to set the multiqueue value higher than 8? I can only set 8...
Code:
[root@vm ~]# ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 8
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 8


Greetings,
Tobi
 
If I set the multiqueue value directly in the VM config file, I can set it to a different value:

net1: virtio=F6:55:F2:37:1B:11,bridge=vmbr10,queues=16
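
(For reference: the same line can presumably also be applied from the host shell with qm set instead of editing the config file by hand; the VM ID 100 below is just a placeholder.)
Code:
qm set 100 --net1 virtio=F6:55:F2:37:1B:11,bridge=vmbr10,queues=16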


Is there a way to change this via the GUI?
 
You could try the following:
  • Disable Hyper-Threading in the host's BIOS
  • Pin the vCPUs to cores that are on the same physical processor the host NIC connects to. Reference the motherboard's block diagram
  • There was talk that it's better to assign CPUs (sockets) rather than cores to VMs, but I assume that relates to users defining VMs with e.g. 8 cores on one socket where they should rather assign 2 virtual CPUs with 4 cores each to better map to the physical host's topology (see the sketch after this list)
  • Increase the VM's NIC queue length to buffer bursts of packets. This increases latency but can avoid packet loss
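
A minimal sketch of the topology point, assuming a dual-socket host and a VM with ID 100 (the ID and values are placeholders; qm set is the standard CLI for changing a VM's configuration):
Code:
# Mirror the physical host's topology in the guest: 2 sockets x 4 cores
qm set 100 --sockets 2 --cores 4
# Enable NUMA so the guest's vCPUs and memory can be mapped onto host NUMA nodes
qm set 100 --numa 1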

Herewith notes on how to increase the virtual NIC transmit buffer for VMs:
Code:
Increasing the tap interface transmit queue length (persistent, via udev):
  pico /etc/udev/rules.d/60-tap.rules
    KERNEL=="tap*", RUN+="/sbin/ip link set %k txqueuelen 25000"
  Changing already existing tap interfaces:
    for dev in `ip a | grep tap | perl -pe 's/.*(tap.*?):.*/\1/'`; do ifconfig $dev txqueuelen 25000; done
  Changing the receive buffer (backlog) on the host:
    pico /etc/sysctl.conf
      # Increase receive buffer queue
      net.core.netdev_max_backlog = 25000
    sysctl -p;
    sysctl -a | grep max_backlog;
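
To verify that the new queue length has actually been applied (tap100i0 is only an example interface name; pick one from ip link):
Code:
ip link show dev tap100i0                  # look for the "qlen" value in the output
cat /sys/class/net/tap100i0/tx_queue_len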


You can also consider increasing the physical NIC buffers:
Code:
/etc/rc.local:
# Increase NIC ring buffers (only run during the first 10 minutes after boot):
[ `cut -d. -f1 /proc/uptime` -le 600 ] && for f in eth0 eth1 eth2 eth3; do ethtool -G $f rx 4096 tx 4096; sleep 5; done

Check current and max NIC ring buffer:
ethtool -g eth0

Get NIC statistics:
ethtool -S eth0

PS: Physical NICs typically have 64 queues, which are allocated equally across cores. You most probably don't have buffering problems on the host. You can review the 'ifconfig' output to check the drop and overrun counters.
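
A quick way to check those counters (eth0 is just an example interface name):
Code:
ifconfig eth0 | grep -E 'dropped|overruns'
# or, with iproute2:
ip -s link show eth0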
 
How can I pin the vCPUs to cores?

You should define NUMA regions that match the physical host's and draw CPUs from those pools. Documentation is unfortunately very sparse (the numa[n] options):
https://pve.proxmox.com/wiki/Manual:_qm.conf
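
A minimal sketch of such a NUMA definition in the VM's config file (the CPU ranges, host nodes, and memory sizes are placeholders and must match your own hardware, e.g. as reported by numactl --hardware):
Code:
numa: 1
numa0: cpus=0-7,hostnodes=0,memory=32768,policy=bind
numa1: cpus=8-15,hostnodes=1,memory=32768,policy=bind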

The following sample, which would fit somewhere like rc.local, then needs to run after the guests have started:
Code:
# Pin all KVM guests (and their vhost network threads) plus the Ceph daemons to this CPU set:
cpus='4-19,24-39';
for pid in `pidof kvm`; do
  taskset -a -cp $cpus $pid;
  # vhost-net kernel threads are named vhost-<kvm pid>:
  for vhostpid in `pidof vhost-$pid`; do
    taskset -a -cp $cpus $vhostpid;
  done
done
for pid in `pidof ceph-fuse ceph-mon ceph-osd`; do
  taskset -a -cp $cpus $pid;
done
# Assign the reserved CPUs (0-3,20-23) to the VM named 'syrex-sip' (column 6 of 'qm list' is the PID);
# its vhost threads stay on the general pool:
for pid in `qm list | grep -P '\ssyrex-sip\s' | awk '{print $6}'`; do
  taskset -a -cp 0-3,20-23 $pid
  for vhostpid in `pidof vhost-$pid`; do
    taskset -a -cp $cpus $vhostpid;
  done
done
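
To confirm a guest's resulting affinity afterwards (100 is a placeholder VM ID; Proxmox keeps each guest's PID under /var/run/qemu-server/):
Code:
taskset -a -cp $(cat /var/run/qemu-server/100.pid)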
 
