[SOLVED] diskspace vs. io performance while deciding of the pool architecture of a pve server

Elleni

Renowned Member
Jul 6, 2020
We bought a new server from HPE (Gen11) which is equipped with 12 NVMe 1.92 TB disks. Now I am asking myself whether to configure it for maximum disk space - something like 11 disks in a raidz5 plus 1 hot spare - or for IO performance - something like 5 striped vdevs of 2 disks each in a mirror config plus 2 hot spares. So while evaluating the needs of the business (space vs. performance of guests) I thought,

I should ask here, as it's a once-in-a-long-time opportunity to do some tests, see the difference in performance, and check whether it's worth configuring it like this. On the other hand, we are talking about NVMe drives, which are rather fast, aren't they? So I thought maybe one big vdev in raidz5 might be quick enough in terms of IO performance to run, let's say, 10 to 15 VM guests simultaneously.

As I read that the IO performance of one vdev is approximately the same as that of a single NVMe disk, might it be best to create something like 2 striped vdevs of 5 disks each in raidz5 config plus one hot spare, or three striped raidz5 vdevs containing 4 NVMe disks each? That way I would lose less disk space but still have 2 or 3 vdevs, so approximately the speed of 2-3 NVMe disks?
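The space side of the trade-off can be sketched with simple arithmetic. This is only a rough comparison of raw usable capacity for the layouts mentioned above (it ignores ZFS metadata overhead, slop space, raidz padding, and compression; the 1.92 TB disk size is from the thread):

```python
# Rough usable-capacity comparison of the candidate pool layouts.
# Sketch only: ignores ZFS overhead, slop space, padding, compression.
DISK_TB = 1.92

def raidz_capacity(vdevs, disks_per_vdev, parity):
    """Usable TB for `vdevs` striped vdevs, each losing `parity` disks.

    A mirror of 2 disks is treated as a 2-disk vdev with 1 disk of
    redundancy; hot spares contribute no capacity and are left out.
    """
    return vdevs * (disks_per_vdev - parity) * DISK_TB

layouts = {
    "11-disk raidz1 + 1 hot spare":     raidz_capacity(1, 11, 1),
    "5 striped mirrors + 2 hot spares": raidz_capacity(5, 2, 1),
    "2x 5-disk raidz1 + 1 hot spare":   raidz_capacity(2, 5, 1),
    "3x 4-disk raidz1":                 raidz_capacity(3, 4, 1),
    "2x 6-disk raidz2":                 raidz_capacity(2, 6, 2),
}

for name, tb in layouts.items():
    print(f"{name:34s} {tb:6.2f} TB usable")
```

The mirror layout gives up roughly half the capacity of the wide raidz1, while the striped raidz variants land in between - which is exactly the space-vs-vdev-count trade-off in question.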

If I were willing to go through the hassle of testing it, what would such a test look like?
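One way such a test could look, as a sketch: create each candidate pool layout in turn, put a test dataset on it, and run the identical fio job against each, comparing random read/write IOPS and latency at a VM-like block size. A hypothetical fio job file (the directory path, sizes, and durations are illustrative, not recommendations):

```ini
; vm-sim.fio - run with: fio vm-sim.fio
; assumes the pool under test is mounted at /tank (illustrative path)
[global]
directory=/tank/fiotest
size=10G
bs=16k              ; close to a typical zvol/VM block size
ioengine=libaio
iodepth=16
numjobs=4           ; simulate several guests doing IO at once
runtime=120
time_based=1
group_reporting=1

[randwrite]
rw=randwrite

[randread]
stonewall           ; start only after the write phase finishes
rw=randread
```

Destroying and re-creating the pool between runs keeps the comparison fair, since ZFS caching (ARC) can otherwise mask differences between layouts.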
 
There are few workloads that will be able to stress your NVMe lanes. Unless you have that kind of workload, I would suggest 2x6 RAIDZ2 if you only have one server - it is safer to have fewer disks per vdev, and it is still plenty fast. Don't run anything with less than 2 drives of redundancy.

There is no such thing as RAIDZ5, that would imply 5 drives of redundancy?
 
Hi, yes, that's a typo; I meant raidz1. I thought it might be worth a shot to go with 3 vdevs - 4 drives each in raidz1 - with those 3 vdevs striped. If we need to reconfigure, we will have a backup server, so the pool could be re-created and the VMs restored to the new pool.