VM performance degradation after Proxmox upgrade to 6.2-4

sif7en
Hi,
After upgrading our Proxmox cluster to 6.2-4, we noticed VM performance degradation in terms of process execution. I checked disk performance inside the VM using the iostat command, as in the example below:


iostat -x sda 2 6

Linux 4.15.0-99-generic 05/19/20 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
0.43 0.01 0.19 0.68 0.00 98.68

Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sda 13.39 2.62 130.77 1189.32 0.22 4.91 1.64 65.18 2.26 134.24 0.35 9.77 453.18 0.84 1.35

avg-cpu: %user %nice %system %iowait %steal %idle
0.12 0.00 0.12 0.00 0.00 99.75

iostat -d 2
Linux 4.15.0-99-generic 05/19/20 _x86_64_ (4 CPU)

Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 8 0
scd0 0.07 0.28 0.00 1024 0
sda 16.21 133.08 1199.70 486767 4388228

So it doesn't seem we have a problem with reads and writes from/to the disk.
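For a more direct measurement than the passive iostat counters, a short fio run inside the VM is another option (the file path, size and runtime below are just example values):

fio --name=randrw --filename=/tmp/fio-test --size=1G --bs=4k --rw=randrw --ioengine=libaio --direct=1 --iodepth=16 --runtime=30 --time_based --group_reporting

Comparing the read/write latencies it reports before and after the upgrade would make any regression measurable.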

We also don't seem to be consuming CPU or memory beyond capacity limits:

mpstat -P ALL
Linux 4.15.0-99-generic 05/19/20 _x86_64_ (4 CPU)

10:40:13 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:40:13 all 0.20 0.01 0.10 0.28 0.00 0.00 0.00 0.00 0.00 99.41
10:40:13 0 0.33 0.00 0.11 0.40 0.00 0.00 0.00 0.00 0.00 99.16


cat /proc/meminfo
MemTotal: 4038608 kB
MemFree: 1069468 kB
MemAvailable: 3496792 kB
Buffers: 171400 kB
Cached: 2427548 kB
SwapCached: 0 kB
Active: 512500 kB
Inactive: 2203336 kB
Active(anon): 117296 kB
Inactive(anon): 312 kB
Active(file): 395204 kB
Inactive(file): 2203024 kB

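To rule out scheduler pressure rather than raw utilization, the run queue and context switches can also be watched with vmstat (interval and count are arbitrary):

vmstat 2 5

The r column counts runnable processes; if it stayed above the 4 vCPUs this guest has, that would point at CPU contention rather than disk.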
Here is the VM configuration:

boot: c
bootdisk: scsi0
cores: 2
ide2: main:vm-2003-cloudinit,media=cdrom
ipconfig0: ip=xx.xx.xx.xx/24,gw=xx.xx.xx.xx
memory: 4096
name: test
nameserver: xx.xx.xx.xx
net0: virtio=02:BA:BA:4C:82:10,bridge=vmbr2
numa: 1
onboot: 1
scsi0: main:base-901-disk-0/vm-2003-disk-0,size=20G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=d9a25d9a-1757-4071-8616-8aede3d0f59c
sockets: 2



Does anybody have an idea what could impact performance after the upgrade?
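One thing worth noting: the config has no explicit cpu: line, so the VM runs with the default kvm64 model. Switching it to the host CPU model is easy to test (assuming VMID 2003, as suggested by the disk names above):

qm set 2003 --cpu host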
 
I'm using a low-end PC as a server for home lab experiments and I'm experiencing the same performance issue with the 6.2-4 release. Rolling back to 6.1 ...
 
Same here. I made a performance comparison across versions.
The best version so far in terms of performance was pve-manager/6.1-8/806edfe1 running kernel 5.3.18-3-pve: iSCSI performance and CPU performance inside the VMs were the best.
The worst was 6.1-5/9bf06119 running kernel 5.3.13-1, with very low performance inside the VMs (an E5-2680 v2 lost to an E5-2667 v1).
I suspect the Intel pstate driver, because on the best version the CPU clock is always higher than on the other versions. I tried to disable it with no luck (new GRUB line etc.).
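For reference, disabling intel_pstate is done via a kernel command-line parameter; on a GRUB-booted host that looks roughly like this (edit /etc/default/grub, keeping your existing options, then regenerate the config and reboot):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"
update-grub

Note that hosts booting via systemd-boot (e.g. ZFS root) keep the kernel command line elsewhere, so this is an assumption about the boot setup.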
 
Please run this test.

Stop all VMs on the Proxmox host.
Then run:
modprobe -r kvm_intel
modprobe kvm_intel enable_apicv=N
cat /sys/module/kvm_intel/parameters/enable_apicv

The output must be N.
Start the VMs again and report the result.
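If disabling APICv turns out to help, the setting can be made persistent with a modprobe option (the file name here is just a convention):

echo "options kvm_intel enable_apicv=N" > /etc/modprobe.d/kvm_intel.conf

A reboot, or the modprobe -r / modprobe sequence above, applies it.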