Performance issues after Proxmox 7 migration

Vladimir Bulgaru

Hello!

I was wondering whether Proxmox 7 introduces any architectural changes that affect how VM and CT volumes are stored.
My situation is the following:
  1. I have VMs and CTs stored on a PCIe SSD and OS is stored on the disk raid
  2. I have reinstalled the OS entirely to Proxmox 7 and restored the conf files of VMs and CTs
  3. I have relaunched all the VMs and CTs
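For reference, the reinstall-and-restore approach described above can be sketched roughly like this (paths are the PVE defaults; the tarball name and location are my own choice, not anything official):

```shell
# Sketch of a config-only migration (assumes default PVE paths).
# Run on the old install before reinstalling:
if [ -d /etc/pve ]; then
  tar czf /root/pve-guest-configs.tar.gz \
      -C /etc/pve qemu-server lxc storage.cfg
else
  echo "/etc/pve not found (not a PVE host)"
fi
# After reinstalling, unpack the archive back into /etc/pve so the
# restored configs can find their volumes on the re-added storage.
```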
It so happens that I see a huge performance drop on a Windows 10 VM and can't quite put my finger on it.

I am curious whether this approach works poorly when upgrading the OS and I should have opted for a backup restore instead. I know that restoring from a backup is the safer, recommended approach, but what I'm interested in hearing is how my approach is wrong, in case it is wrong.

Best!
 
That's a valid approach, as long as your disk images stay intact. How is the performance drop noticeable? (I/O, memory, CPU, graphics performance, etc...)

If possible, please post a VM config (qm config <vmid>) and your storage config (/etc/pve/storage.cfg).
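For anyone following along, those two diagnostics can be gathered like this (the VMID 100 is just an example, substitute your own):

```shell
# Gather the requested diagnostics (100 is an example VMID).
if command -v qm >/dev/null 2>&1; then
  qm config 100             # the VM's configuration
  cat /etc/pve/storage.cfg  # the node's storage configuration
else
  echo "qm not found (not a PVE host)"
fi
```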
 
Hello, @Stefan_R

Thank you for taking the time to address this. I guess a more detailed description of the situation is important, especially since downgrading Proxmox helped address the issue.

The setup is the following: I am storing the VMs and CTs on a Fusion-io PCIe SSD, added as an LVM-thin storage. In both Proxmox 6 and 7 the drivers seem to work perfectly fine. The experiment was done on the same machine, so faulty hardware in the case of Proxmox 7 is out of the picture.
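For context, an LVM-thin storage like that is defined in /etc/pve/storage.cfg with an entry along these lines (the storage ID, volume group, and thin pool names here are made up for illustration):

```
lvmthin: fusionio
        thinpool data
        vgname fusionvg
        content images,rootdir
```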

The same Windows 10 VM works great on Proxmox 6. On Proxmox 7 it was lagging nightmarishly; it was so bad that even opening File Explorer could take 20-30 seconds.

Currently the 2 strongest hypotheses are:
  1. Faulty Fusion-Io driver for Debian 11. What makes me doubt it is that the drivers are open-sourced and found on GitHub. I assume if there was an issue, it would have been reported by now;
  2. Some weird scheduler issue that bottlenecks the tasks at the hypervisor level. What makes me think this is a more likely cause is the fact that other Windows 10 VMs with less load seem to perform normally. The issues arise when the usage increases.
Very weird issue. In order to debug it further, I will try creating the VMs on a ZFS RAID made of spinning disks and compare the performance there. If the issue persists, it's almost certainly a Proxmox issue; otherwise, it is caused by the drivers.
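One way to make that comparison quantitative before drawing conclusions is a small random-read benchmark on each storage; a sketch with fio (the test file path, size, and job parameters are arbitrary choices, not tuned values):

```shell
# Quick 4k random-read benchmark; run once per storage and compare.
# All fio parameters here are illustrative.
if command -v fio >/dev/null 2>&1; then
  fio --name=randread --filename=/var/tmp/fio.test --size=256M \
      --rw=randread --bs=4k --iodepth=16 --ioengine=libaio \
      --direct=1 --runtime=30 --time_based --group_reporting
  rm -f /var/tmp/fio.test
else
  echo "fio not installed"
fi
```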
 
You could also try booting PVE 7 with the 5.4 kernel, if you upgraded from 6 you should still have it in your boot menu to be selected.
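If the older kernel is no longer offered in the boot menu, it can be brought back and pinned explicitly on recent PVE 7 versions (the exact kernel version string below is a placeholder, take it from the `kernel list` output):

```shell
# Install and pin the 5.4 kernel series on PVE 7 (sketch).
if command -v proxmox-boot-tool >/dev/null 2>&1; then
  apt install pve-kernel-5.4               # reinstall the old kernel series
  proxmox-boot-tool kernel list            # show bootable kernel versions
  proxmox-boot-tool kernel pin 5.4.x-y-pve # placeholder: pin an exact version from the list
else
  echo "proxmox-boot-tool not found (not a PVE host)"
fi
```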
 
Hi Vladimir,

I went back to kernel 5.11.x after 5.13.x and 5.15.x (all on PVE 7.x) gave me terrible VM performance. I use NFS to access my VMs on a striped-mirror TrueNAS. With kernel >= 5.13 a single VM needs up to 30 seconds to boot; with the old kernel, only around 10 seconds.
 
