Very slow SSD performance (but not with Windows directly installed)

Jason2312312 · New Member · Apr 2, 2024
Hello,

I bought two used Dell R630 servers with a PERC H730 Mini controller, 2x Xeon E5-2690 v4 CPUs and 768 GB of DDR4 memory. The memory is a bit overkill, but I got a really good deal so I am not complaining. With Proxmox I notice very poor SSD performance within VMs. I have not yet tested the host itself with fio, but when installing Windows 10 directly on the host, SSD performance is as expected (I can post some CrystalDiskMark results later, but it is roughly a 5x improvement over running inside the VM).

What I tested:
Proxmox with a virtual RAID 1 disk (using controller)
ZFS RAID 1 (controller in HBA mode)
Proxmox on separate HDDs in ZFS RAID1, SSDs in ZFS RAID1

In all cases the performance is very poor, latency skyrockets and it feels slower than running on an old HDD.

SSDs used are Crucial MX500, type TLC.

Since I am able to get good performance with Windows installed directly, I conclude that the issue is with Proxmox; perhaps there is a driver or setting I need to configure? Any suggestions on how to troubleshoot this further are highly appreciated.
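For the host-level fio test mentioned above, a job file like the following could serve as a baseline (a sketch; fio is assumed to be installed via `apt install fio`, and the filename, size and runtime are placeholder values to adjust for your setup):

```shell
# Sketch of a host-level fio baseline; paths and sizes are examples.
# The 4k random write with fsync=1 is the workload that punishes
# consumer SSDs (and ZFS sync writes) the most.
cat > /tmp/ssd-baseline.fio <<'EOF'
[global]
ioengine=libaio
direct=1
time_based=1
runtime=30
size=1G
filename=/tmp/fio-testfile

[randwrite-4k-sync]
rw=randwrite
bs=4k
iodepth=1
fsync=1
EOF
# Run with: fio /tmp/ssd-baseline.fio
```

Running the same job file on the Proxmox host and then inside a VM should narrow down where the slowdown is introduced.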


Edit: Made it more clear what I already tried
 
SSDs are fine for Proxmox. It's important that you don't use any RAID functionality on the controller if you go with ZFS RAID (and this is what I would suggest).

https://pve.proxmox.com/wiki/ZFS_on_Linux => "Do not use ZFS on top of a hardware RAID controller which has its own cache management. ZFS needs to communicate directly with the disks. An HBA adapter or something like an LSI controller flashed in “IT” mode is more appropriate."

So there are different tests you can do: configure your controller to play "dumb" and not apply any logic of its own; this mode is often called "JBOD" or "AHCI".
Or even better, just use the onboard SAS/SATA controller for some further testing with a ZFS RAID. With ZFS you don't need expensive RAID controllers, but perhaps a bit more RAM...
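One quick way to check that the controller really is out of the way (device names below are examples) is whether the OS sees the raw drives:

```shell
# Sketch: confirm the drives are passed through rather than hidden behind
# a virtual disk. Real model strings (e.g. CT1000MX500SSD1 for the MX500)
# in the MODEL column indicate passthrough.
lsblk -d -o NAME,MODEL,SIZE
# SMART data readable directly from the disk is another good sign
# (smartmontools assumed installed):
# smartctl -i /dev/sda
```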
 
And, as noted here almost every week: don't expect performance from ZFS on non-datacenter drives.

Go with the default Proxmox ext4 install (VM storage then uses LVM-thin volumes) for regular SSD drives.
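For reference, on a default ext4 install the storage definition in /etc/pve/storage.cfg looks roughly like this (names are the installer defaults):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
```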
 
Doesn't HBA mode 'bypass' the controller completely, meaning JBOD or AHCI won't make a difference?


I thought I had already tested the ext4/LVM-thin setup, but I will try it again and report back. I would lose redundancy though, and it doesn't seem like I can replicate to the same host.
 
