local-lvm on software raid 1

Jun 9, 2025
Hi,

I'm working with a repurposed HPE DL380 Gen9 and have the RAID controller in HBA mode so I can use ZFS properly. Proxmox v9.0.10 is installed on a ZFS RAID1 mirror (2x 300GB SAS disks), and I have another twelve 1.2TB SAS disks available, ten of which I plan to use in a ZFS RAID array for certain guests that I don't want to run on the shared storage I currently have (there might be some Ceph experimentation on this node as well).
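For the ten data disks I'm picturing something like striped mirrors; this is just a rough sketch, and the disk IDs and the pool/storage names are placeholders:

# RAID10-style pool of five mirrored pairs (placeholder disk IDs)
zpool create guestpool \
  mirror /dev/disk/by-id/scsi-DISK01 /dev/disk/by-id/scsi-DISK02 \
  mirror /dev/disk/by-id/scsi-DISK03 /dev/disk/by-id/scsi-DISK04 \
  mirror /dev/disk/by-id/scsi-DISK05 /dev/disk/by-id/scsi-DISK06 \
  mirror /dev/disk/by-id/scsi-DISK07 /dev/disk/by-id/scsi-DISK08 \
  mirror /dev/disk/by-id/scsi-DISK09 /dev/disk/by-id/scsi-DISK10
# register the pool with Proxmox as storage for guest disks
pvesm add zfspool guestpool-storage --pool guestpool --content images,rootdir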

I plan on adding this system to my cluster and want to be able to migrate guests to/from it as needed. However, for guests that have EFI or TPM disks, I store those disks on local-lvm (the cluster nodes all have hardware RAID1, so I get local-lvm by default). That way I don't have any issues with migration, snapshots, etc. Because this new node doesn't have local-lvm (it has local-zfs), anything with an EFI or TPM disk can't be migrated to it.

I can certainly add a single disk as LVM, but I was hoping to have RAID1 redundancy behind local-lvm. I don't currently have any means (that I know of) of adding local-zfs to the existing cluster members; they don't have extra unused disks. I did a little research and found that there is an "unsupported" way to set up local-lvm on a software RAID1 array (mdadm + lvm2, etc.), and I'm looking for general thoughts about that method.
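For reference, the gist of that unsupported approach (as I understand it) would be something like the following, using the two 1.2TB disks left over after the ZFS array. The device names, volume group and thin pool names are placeholders, and I'm not sure yet how the storage ID would need to line up with the cluster-wide local-lvm entry the other nodes use:

# mdadm isn't part of the default Proxmox install
apt install mdadm
# build a software RAID1 from the two spare disks (placeholder device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdm /dev/sdn
# layer LVM on top of the md device
pvcreate /dev/md0
vgcreate pve-data /dev/md0
# thin pool, similar to the default local-lvm layout
lvcreate -l 95%FREE --type thin-pool --name data pve-data
# register it as an lvmthin storage restricted to this node
pvesm add lvmthin local-lvm-md --vgname pve-data --thinpool data --content images,rootdir --nodes <this-node>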

Thanks....
 
ZFS just for the Proxmox installation doesn't make much sense (in my opinion).
I suggest turning the RAID functionality back on and creating a RAID1 for Proxmox and a RAID10 for your VMs.
I hope it is a RAID controller with cache and a battery.
 