Hi everyone,
Proxmox newbie here. I have a few questions regarding VM disk management.
Context:
My PVE host, in a home environment, mainly runs a file server and a multimedia server (DLNA-like), plus a few other small VMs for testing and playing around. The PVE host has the following disk setup:
- 4x 1TB WD Red
  - 2TB ZFS mirror for Proxmox
  - 2TB ZFS mirror for ISOs and VM config files
- 4x 2TB WD Red
  - RAIDZ1 (diskpool1: ±6TB)
- 4x 4TB WD Red
  - RAIDZ1 (diskpool2: ±12TB)
- 4x 6TB WD Red
  - RAIDZ1 (diskpool3: ±18TB)
- 4x 8TB WD Red
  - RAIDZ1 (diskpool4: ±24TB)
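As a sanity check on the pool sizes above, a 4-disk RAIDZ1 vdev gives roughly (disks − 1) × disk size of usable space (a bit less in practice, due to ZFS metadata and padding). A minimal sketch:

```shell
# Rough usable capacity of a RAIDZ1 vdev in TB: (disks - 1) * disk_size.
raidz1_usable() { echo $(( ($1 - 1) * $2 )); }

raidz1_usable 4 2   # diskpool1 (4x 2TB) -> 6
raidz1_usable 4 8   # diskpool4 (4x 8TB) -> 24
```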
Question 1:
I think my initial approach is wrong because if 2 disks of the underlying RAIDZ1 array fail (e.g. in diskpool3), all of the VM's data is lost. What I'm thinking now, instead, is to use the same approach but spread the VM disks across all diskpools. For example: vm-100-disk0 on diskpool1, vm-100-disk1 on diskpool2, etc. This way, even if 1 physical disk fails in every diskpool, the VM should survive. Right?
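For the record, spreading the disks like that could be done with `qm set` (a sketch, assuming the four pools are added as Proxmox storages named diskpool1..diskpool4; VM ID, bus slots and the 100G size are placeholders):

```shell
# Allocate one new 100 GB virtual disk per pool and attach each to VM 100.
# The <storage>:<size-in-GB> form tells Proxmox to create the volume.
qm set 100 --scsi1 diskpool1:100
qm set 100 --scsi2 diskpool2:100
qm set 100 --scsi3 diskpool3:100
qm set 100 --scsi4 diskpool4:100
```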
Question 2:
Is redundancy needed inside the VM filesystem? I don't think so, since the physical disks are already (somewhat) redundant.
Question 3:
What filesystem/technology should I use for striping (i.e. no redundancy) inside the VMs? Here are my thoughts:
- ZFS does not like being filled over 80-90%. Since each diskpool is already built on ZFS, I don't see the need to waste another 10-20% inside a VM. Am I wrong?
- LVM can be cumbersome to manage for a newbie like me. Lots of commands to learn (I know, Google is your friend, but...).
- mdraid can do RAID-0 and is pretty easy to manage using mdadm or Webmin. I have used it in the past for RAID-5 and RAID-10, but the performance is killed when it enters its check mode every month. I don't know if that check mode exists for RAID-0. Anyone?
- BTRFS is another option, but I have not used it extensively and never in a stripe/RAID-like mode. I've read here and there that its RAID-5 and RAID-6 modes are not quite reliable as of now, so I have some concerns.
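For reference, the two striping options from the list above would look roughly like this inside the guest (a sketch, assuming the VM sees its virtual disks as /dev/sdb and /dev/sdc; adjust device names to your setup):

```shell
# Option A: mdraid RAID-0 stripe across two virtual disks, ext4 on top.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0

# Option B: Btrfs with striped (raid0) data across both disks.
mkfs.btrfs -d raid0 /dev/sdb /dev/sdc
```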
Any suggestions/recommendations are welcome.
Thanks