Multiple RAID arrays on Proxmox

technewbie

New Member
Sep 19, 2021
Hi everyone,

I have some questions about the ZFS RAID configuration on Proxmox.

Is it possible to create multiple ZFS RAID arrays with different disks?

What I'm trying to do is install Proxmox on 2 NVMe disks as RAID0 and run PVE on that array,
then create a RAID10 on 4 units of 2.5" SSDs,
then create a RAID10 on 4 units of 3.5" HDDs.

Then create virtual disks on these arrays and attach them to VMs.

Is this configuration possible on Proxmox? And if it is, could someone help me configure these three different RAID arrays?

Thanks
 

Hi, yes, this is possible. The ZFS RAID for the PVE boot disks can be set up via the installation wizard. Remember that with RAID0, if one disk fails, your Proxmox VE base system plus any VMs you put there will be destroyed.

The RAID10 pools can be created via the GUI and are automatically added as storage; on VM creation you can then select where to put the VM.
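For reference, what the GUI does for a RAID10 pool corresponds roughly to this on the CLI (a sketch; the pool name and the /dev/disk/by-id paths are placeholders for your actual drives):

```shell
# RAID10 = two mirrored pairs striped together.
# Device paths below are examples only -- substitute your own
# stable /dev/disk/by-id/... names.
zpool create ssdpool \
  mirror /dev/disk/by-id/ata-SSD_1 /dev/disk/by-id/ata-SSD_2 \
  mirror /dev/disk/by-id/ata-SSD_3 /dev/disk/by-id/ata-SSD_4

# Same layout again for the four HDDs:
zpool create hddpool \
  mirror /dev/disk/by-id/ata-HDD_1 /dev/disk/by-id/ata-HDD_2 \
  mirror /dev/disk/by-id/ata-HDD_3 /dev/disk/by-id/ata-HDD_4
```

Using by-id paths rather than /dev/sdX keeps the pool stable across reboots when device ordering changes.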

I do almost exactly this. I would also strongly recommend a RAID1 (mirror) for the 2 NVMe Proxmox boot disks, not RAID0 (just be careful, because Proxmox + ZFS on root writes a lot of data with additional write amplification that will send consumer-grade NVMe/SSD drives to an early grave). I have a 2-disk ZFS RAID1 mirror for the boot drive and a ZFS RAID10 for everything else. I have 2 additional disks that I am contemplating setting up Ceph on.

As stated above, with a 2-disk RAID0 Proxmox boot pool, if either NVMe fails your machine is dead. With a 2-disk RAID1 boot pool, both NVMe drives would have to die before the Proxmox node is dead. If one NVMe fails in a 2-disk ZFS RAID1, you run fine on the remaining drive, replace the failed NVMe on your own timeline, and then fairly quickly bring the new NVMe up as a viable mirrored boot disk.
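The replacement step looks roughly like this (a sketch, assuming a default PVE install where rpool is the boot pool, partition 3 holds ZFS, and partition 2 is the ESP; disk names are placeholders):

```shell
# Identify the failed mirror member
zpool status rpool

# Copy the partition table from the healthy disk (sdX) to the
# replacement (sdY), then randomize the GUIDs on the copy.
# sdX / sdY are placeholders for your real devices.
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# Swap the failed member for the new disk's ZFS partition
zpool replace rpool <old-device> /dev/sdY-part3

# Make the new disk bootable (on systems booted via proxmox-boot-tool)
proxmox-boot-tool format /dev/sdY-part2
proxmox-boot-tool init /dev/sdY-part2
```

Wait for the resilver shown by `zpool status` to finish before considering the mirror healthy again.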
 
And if you want more control over how the ZFS pools are created, you can also create them manually with the zpool command via the CLI. That way you get many more options, like adding read/write cache drives, adding a special device for metadata, or more complex drive configurations like striped RAIDZ (ZFS's raid5-like layout) or three-way mirrors. You can then add the manually created ZFS pool to PVE via the GUI under Datacenter -> Storage -> Add -> ZFS.
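As a sketch of what that manual route can look like (pool name, device paths, and property choices here are just examples, not a recommendation):

```shell
# Two RAIDZ vdevs striped together, plus an SSD mirror as a
# special device for metadata. All device paths are placeholders.
zpool create -o ashift=12 tank \
  raidz /dev/disk/by-id/hdd-1 /dev/disk/by-id/hdd-2 /dev/disk/by-id/hdd-3 \
  raidz /dev/disk/by-id/hdd-4 /dev/disk/by-id/hdd-5 /dev/disk/by-id/hdd-6 \
  special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2

# Optional read cache (L2ARC) on another SSD
zpool add tank cache /dev/disk/by-id/ssd-3

# Instead of the GUI, the pool can also be registered as
# PVE storage from the CLI:
pvesm add zfspool tank --pool tank --content images,rootdir
```

Note that a special device becomes part of the pool: if it is lost, the pool is lost, which is why it should itself be mirrored.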