Proxmox VE 7.4 extremely poor performance

In addition to setting hypervisorlaunchtype to off,
do not forget to disable "Core Isolation" under Windows Security -> Device Security.
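For the guest side, assuming a Windows 10/11 VM, the Hyper-V launch type can be turned off from an elevated command prompt (it takes effect after a reboot); Core Isolation / Memory integrity stays a GUI toggle under Device Security:

Code:
bcdedit /set hypervisorlaunchtype off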
 
Can you maybe also check the pressure stats? They can often give a hint of potential bottlenecks:

Code:
head /proc/pressure/*
I ran the command but have no idea what I'm looking at:

root@ph1:~# cat /proc/pressure/*
some avg10=0.00 avg60=0.00 avg300=0.00 total=15951938652
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
some avg10=0.02 avg60=0.03 avg300=0.05 total=349012117492
full avg10=0.02 avg60=0.03 avg300=0.04 total=334395138043
some avg10=0.00 avg60=0.00 avg300=0.00 total=2281296739
full avg10=0.00 avg60=0.00 avg300=0.00 total=2258731369
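A note on reading this: cat hides which file each some/full pair came from, while running head against the files individually (as suggested) prints a header per file. Assuming the usual cpu, io and memory PSI files, something like:

Code:
head /proc/pressure/cpu /proc/pressure/io /proc/pressure/memory

The avg10/avg60/avg300 columns are the percentage of time tasks were stalled waiting on that resource over the last 10/60/300 seconds, so values near zero mean no meaningful pressure.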
 
2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

ProLiant DL380p Gen8
This is another host?! In the first post you talk about two ProLiant G9s and shared storage on a TrueNAS.
In any case, these old CPUs need mitigations disabled, as @davemcl says.
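As a rough sketch of what that looks like on a PVE host booting via GRUB (the kernel parameter is mitigations=off; if the host uses systemd-boot instead, the cmdline lives in /etc/kernel/cmdline and is refreshed with proxmox-boot-tool refresh):

Code:
# in /etc/default/grub, add the parameter, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
update-grub
reboot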

FWIW, I've switched from Hyper-V to PVE on two HP ProLiant ML350 G8s, one with an E5-2620 (RDS VM + SQL VM), the other with an E5-2620 v2, and users didn't notice a difference. And with a PBS now running alongside PVE, I'm confident I could restore onto fresh hardware if a host dies.
 
This is another host?! In the first post you talk about two ProLiant G9s and shared storage on a TrueNAS.
In any case, these old CPUs need mitigations disabled, as @davemcl says.

FWIW, I've switched from Hyper-V to PVE on two HP ProLiant ML350 G8s, one with an E5-2620 (RDS VM + SQL VM), the other with an E5-2620 v2, and users didn't notice a difference. And with a PBS now running alongside PVE, I'm confident I could restore onto fresh hardware if a host dies.
I misspoke in the first post - I thought they were Gen9s, but they are both Gen8s.

And this migration was from Hyper-V on a Dell R510 to Proxmox on these HPs ... I wouldn't think we should be seeing a slowdown.
 
In addition to disabling mitigations, I think storage is the bottleneck, because two HDDs as the local drive, even with FBWC, can't be responsive enough to run Win10 guests. Test with a datacenter SSD locally attached, ideally outside the HP Smart Array; if it's on the embedded SATA controller, don't forget to re-enable Disk Write Cache in the BIOS.

Shared storage needs to be tested too, with another current-generation host.
I'm not a ZFS user, but RAIDZ1 is slow (write speed is about the same as a single drive), and I'm not sure what "Cache VDEV on a 120GB SSD" refers to.
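One way to compare the local disks and the shared storage on equal terms is a short fio run against each from the same host; the filename below is just a placeholder for a path on the storage under test:

Code:
fio --name=randwrite --filename=/mnt/test/fio.dat --size=4G --bs=4k --rw=randwrite \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting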
 
In addition to disabling mitigations, I think storage is the bottleneck, because two HDDs as the local drive, even with FBWC, can't be responsive enough to run Win10 guests. Test with a datacenter SSD locally attached, ideally outside the HP Smart Array; if it's on the embedded SATA controller, don't forget to re-enable Disk Write Cache in the BIOS.

Shared storage needs to be tested too, with another current-generation host.
I'm not a ZFS user, but RAIDZ1 is slow (write speed is about the same as a single drive), and I'm not sure what "Cache VDEV on a 120GB SSD" refers to.
The only thing the local storage was meant for was the PVE install. Does the hypervisor install drive impact the operation of the guests?

On the ZFS side you can have a cache drive and a log (SLOG, I think it's called) drive. The cache is running on the SSD. The pool is made up of 10x 3TB spinning 7200 RPM SAS drives. It is running on one of the Dell R510s with a Perc 200 in IT mode, under TrueNAS. The shared storage is on 10GbE networking, but I'm pretty sure it's the disk channel, not the network, that is slow.
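If it helps, the exact vdev layout (and whether that SSD is attached as a cache/L2ARC or a log/SLOG device) can be confirmed from the TrueNAS shell; "tank" below is a placeholder for the actual pool name:

Code:
zpool status -v tank
zpool iostat -v tank 5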

I knew it was a slow option because of the age of the hardware and the drives, but I didn't think it would impact things THAT much, to where it takes 5 minutes to log in on a Windows 10 guest.
 
The only thing the local storage was meant for was the PVE install. Does the hypervisor install drive impact the operation of the guests?
Yes, when the host needs to swap onto it because it does not have enough RAM.
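A quick way to check whether the host is actually swapping under load (standard tools on the PVE host):

Code:
free -h
vmstat 1 5    # non-zero si/so columns mean pages are actively being swapped in/out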
On the ZFS ... 10x 3TB spinning 7200 RPM SAS drives ... but I didn't think it would impact it THAT much to where it takes 5 minutes to log in on a Windows 10 guest.
I think you can get this kind of 5-minute Win10 delay with a bad ZFS configuration, like RAIDZ1 on HDDs (I'm not sure a cache can do magic here; that's a separate ZFS optimization topic ...).

You need to try and compare each combination with the same VM (same/another host, local/shared storage, current/fresh VM, SSD/HDD storage).
Then try each combination with a fresh VM.

There isn't a magic setting that speeds up everything.
 
I reinstalled both boxes with no swap and the issue is resolved. I have no idea why swap was causing this, but they are just as fast as Hyper-V was now. The boxes have more than enough RAM that swap isn't necessary.
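For anyone hitting the same thing: a full reinstall shouldn't strictly be necessary; swap can be turned off on a running host and kept off across reboots by removing or commenting out the swap entry in /etc/fstab, for example:

Code:
swapoff -a
# then comment out the swap line in /etc/fstab so it stays off after a reboot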
 