VM disk striping strategy

advark

New Member
Jun 7, 2024
Hi everyone,

Proxmox newbie here. I have a few questions regarding VM disk management.

Context:
My PVE box, in a home environment, is mainly used to host a file server and a multimedia server (DLNA-like), plus a few other small VMs for testing and playing around. The PVE has the following disk setup:
  • 4x 1TB WD Red
    • 2TB ZFS-mirror for Proxmox
    • 2TB ZFS-mirror for ISO and VM config files
  • 4x 2TB WD Red
    • RAIDZ1 (diskpool1: ± 6TB)
  • 4x 4TB WD Red
    • RAIDZ1 (diskpool2: ± 12TB)
  • 4x 6TB WD Red
    • RAIDZ1 (diskpool3: ± 18TB)
  • 4x 8TB WD Red
    • RAIDZ1 (diskpool4: ± 24TB)
Each group of 4 disks is attached to a SAS controller using 1 channel (4 SAS lanes) per group. My initial idea was to dedicate a whole disk pool to a server, e.g. diskpool3 for the file server and diskpool4 for the multimedia server, using multiple smaller VM disks (like 2 or 4TB each) to make them easier to move around if need be, and then stripe them together (RAID-0 or an equivalent) inside the VM so the guest sees them as one disk. I think this approach will make it easier to increase disk space in the future, and I don't see the need for redundancy inside the VM since the physical disks already use some kind of redundancy. Critical data on the file server is rsync'ed to a NAS hourly and a daily offsite backup is in place.
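To make the idea concrete, this is roughly how I pictured the file server VM, assuming diskpool3 is added to Proxmox as a storage with the same name (the VM ID and sizes are just placeholders, not something I've set up yet):

  # Original idea: all virtual disks of VM 100 (file server) allocated from diskpool3.
  # The "storage:size" syntax tells qm to allocate a new volume; sizes are in GB.
  qm set 100 --scsi1 diskpool3:4096
  qm set 100 --scsi2 diskpool3:4096
  qm set 100 --scsi3 diskpool3:4096
  qm set 100 --scsi4 diskpool3:4096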

Question 1:
I think my initial approach is wrong because if 2 disks of the underlying RAIDZ array fail (e.g. diskpool3), the whole VM's data is lost. What I'm thinking now, instead, is to use the same approach but spread the VM disks across all the disk pools. For example: vm-100-disk0 on diskpool1, vm-100-disk1 on diskpool2, etc. This way, even if 1 physical disk fails in every diskpool, the VM should survive. Right?
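If that makes sense, I guess I could redistribute the vdisks I already created from the PVE host, something along these lines (again, the IDs and storage names are placeholders, and I'd double-check the exact command for my PVE version):

  # Move three of the four vdisks off diskpool3 so each pool ends up holding one.
  # Newer PVE releases call this "qm disk move"; older ones use "qm move_disk".
  qm move_disk 100 scsi2 diskpool1
  qm move_disk 100 scsi3 diskpool2
  qm move_disk 100 scsi4 diskpool4
  # scsi1 stays on diskpool3; the old copies remain as "unused" disks until removed.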

Question 2:
Is redundancy needed inside the VM filesystem? I don't think so since the physical disks are already (somewhat) redundant.

Question 3:
What filesystem/technology should I be using for striping (i.e. no redundancy) inside the VMs? Here are my thoughts:
  • ZFS does not like it when a pool is filled over 80-90%. Since each diskpool is built on ZFS, I don't see the need to waste another 10-20% inside a VM. Am I wrong?
  • LVM can be cumbersome to manage for a newbie like me. Lots of commands to learn (I know, google is your friend but...).
  • mdraid can do RAID-0 and is pretty easy to manage using mdadm or Webmin. I have used it in the past for RAID-5 and RAID-10, but performance takes a big hit when it enters its check mode every month. I don't know if that check mode even exists for RAID-0. Anyone? (I've sketched both the mdadm and LVM ideas below this list.)
  • BTRFS is another option, but I have not used it extensively and never in a stripe/RAID-like mode. I've read here and there that its RAID-5/6 modes are still not considered reliable, so I have some concerns.
Are there other filesystems/technologies that I'm missing?
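For what it's worth, here's roughly what I had in mind inside the VM for the mdraid and LVM options. The device names (/dev/sdb ... /dev/sde) are just placeholders for the virtual disks, so treat it as a sketch rather than something I've tested:

  # Option A: mdadm RAID-0 across the four virtual disks inside the guest.
  mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.ext4 /dev/md0

  # Option B: a striped LVM logical volume across the same disks.
  pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
  vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde
  lvcreate -i 4 -l 100%FREE -n lv_data vg_data
  mkfs.ext4 /dev/vg_data/lv_data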

Any suggestions/recommendations are welcome.
Thanks