ZFS alternative for better VM IOPS

Jan 24, 2023
Hi there,
This is my current setup:
CPU: AMD Ryzen 5 2600
RAM: 4x16GB DDR4 3200MHz
GPU: None
Storage: 1x 128GB SSD (where Proxmox is installed), 5x 8TB IronWolf NAS drives (set up as RAIDZ2)
Motherboard: Gigabyte X470 Aorus Ultra Gaming

I store all data, VMs, LXCs, templates, ISO files, everything, on the zpool of the 5 drives.

Previously I used Windows (installed on the SSD) and set up the drives as RAID5 to get good speed.
I moved on to Proxmox to be able to do more with the OS, with less overhead and higher reliability.
My problem right now is that I have terrible performance in all VMs: IO delay is always very high and the VMs run very slowly.

Based on this thread and the information available on this wiki page, the reason it's running slowly is that RAIDZ has roughly the IOPS of a single drive, no matter how many drives it contains.
My question, then, is what should I be using instead? What is the alternative? From my understanding RAIDZ is ZFS's RAID5 equivalent, but apparently it has terrible IOPS.
I understand I can use RAID10, which should have the approximate performance of 2 drives; is there anything better?
Is there any option to do something similar to what I had in Windows with RAID5?
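To sanity-check my understanding of the math, here is a rough back-of-the-envelope sketch (assuming ~100 random IOPS per 7200 RPM HDD, which is just a common ballpark figure, not a measurement):

```shell
# Illustrative only: steady-state random IOPS scale with the number of
# top-level vdevs, not the number of drives. ~100 IOPS/HDD is an assumption.
hdd_iops=100

# RAIDZ2 over 5 drives is a single vdev -> roughly single-drive IOPS.
raidz2_iops=$hdd_iops

# RAID10-like layout with 2 mirror vdevs (4 drives) -> 2 vdevs' worth.
raid10_iops=$((2 * hdd_iops))

echo "raidz2, 5 drives (1 vdev): ~${raidz2_iops} IOPS"
echo "2 mirror vdevs (4 drives): ~${raid10_iops} IOPS"
```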

Thank you in advance.
 
and set up the drives as RAID5 to get good speed
With a hardware RAID controller, you probably also had a battery backup unit for it. That way the controller could acknowledge writes as soon as they were cached, since the battery pack (inside the server) provided enough power to complete those writes even if power was lost.

the information available on this wiki page
I assume you mean the "ZFS Raid level considerations" chapter? Performance is one reason why a raidz pool is not a good idea. Unexpected higher space usage for VMs is another (see that chapter for more details).

For VMs we recommend creating a RAID10-like pool, meaning a pool that consists of multiple RAID1 / mirror vdevs.
If some unexpected space usage is no problem for you, you could also create the pool with more than one raidz vdev. This needs to be done on the CLI though.
Don't expect any wonders though, because HDDs are not good at random IO, and with many VMs random IO is usually the bottleneck you'll hit before bandwidth becomes an issue.
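As a sketch, a RAID10-like pool built from four disks could be created like this (the pool name "tank" and the device paths are placeholders; use the stable /dev/disk/by-id/ names of your actual drives):

```shell
# RAID10-like pool: two mirror vdevs, which ZFS stripes across.
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Alternative with more than one raidz vdev (here six placeholder
# disks as two 3-disk raidz1 vdevs; only possible on the CLI):
# zpool create tank \
#     raidz sda sdb sdc \
#     raidz sdd sde sdf
```

Each additional top-level vdev adds roughly one vdev's worth of IOPS, which is why several smaller vdevs beat one wide raidz for VM workloads.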
 
My setup is incredibly simple.
There's no RAID controller; it's just a simple computer with an SSD and 5 HDDs plugged directly into the motherboard. The only "extra" part is that it's plugged into a UPS.
So, in short, what you're saying is: if I want to use Proxmox, the best performance I can get is ZFS in a RAID10-like layout (with 1-2 drives of redundancy)?
 
HDDs are really terrible for storing VMs, at least when you're not striping dozens of them. I personally would get two enterprise-grade SSDs that can fit your virtual system disks as well as your PVE installation. Then you get 100x to 10000x the IOPS of HDDs for the stuff that needs to be fast. The HDD pool I would then use as cold storage (your movies, pictures, or whatever you need all that space for).

You could also tell the PVE installer not to allocate all of the SSDs' space and later manually create an additional partition in the unallocated space. These partitions can then be added as a special vdev mirror to your HDD pool to speed it up a bit: your HDDs then only need to store data and won't be hit by metadata IO anymore.
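The special vdev suggestion above could look roughly like this (pool name and partition paths are placeholders for your setup; the special vdev must be mirrored, because losing it loses the whole pool):

```shell
# Add a mirrored "special" vdev for metadata to an existing HDD pool.
zpool add tank special mirror \
    /dev/disk/by-id/nvme-SSD1-part4 \
    /dev/disk/by-id/nvme-SSD2-part4

# Optionally also send small file blocks to the SSDs, per dataset:
zfs set special_small_blocks=16K tank/vmdata
```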
 
Thank you. It seems I might need to upgrade; at the very least buy a hardware RAID controller to avoid ZFS entirely.
I've also finally started looking into power efficiency, something my current hardware most definitely lacks.

Appreciate the response.
 
