Hi,
Similar post/situation to https://forum.proxmox.com/threads/a...wn-due-to-different-volume-group-name.144067/ - looking for some feedback. I have a 5-node cluster of HP 360 and 380 Gen9 servers. All of them have access to shared storage (NFS), where most of the guest VMs reside. Four of the five nodes use their native 440AR storage controllers for a RAID1 mirror that PVE is installed on. On node 5, I switched the controller to "HBA mode" so that I can use ZFS for local storage (storage for some VMs that I'm not yet comfortable putting on the not-super-fast NFS device).
As the 440AR is not super-sophisticated, it's "all or nothing" for RAID vs. HBA, so my boot array (2x 300GB SAS) is also directly exposed to PVE on this node. During install I created a ZFS RAID1 mirror and it's working fine, but (similar to the above-referenced post) once I joined the node to the cluster, node 5 > local-lvm shows an unknown state, and I get the "no such logical volume pve/data..." message if I click "local-lvm" under node 5. All the other nodes show local-lvm as normal/OK.
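For context, this is what I've been using to look at the current setup from node 5 (just the standard commands, no output pasted - my understanding is that storage.cfg is cluster-wide, so the local-lvm entry currently applies to every node, including the ZFS-only one):

# show the cluster-wide storage definitions (local-lvm currently has no node restriction)
cat /etc/pve/storage.cfg

# show storage status as seen from node 5 - local-lvm is the one not coming up active here
pvesm status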
Again, I understand why this has happened, I'm OK with it, and I'm prepared to run the "pvesm set local-lvm --nodes [...]" command as suggested in bbgeek17's response from 3/28/2024 to hide the non-existent local-lvm reference - are there any gotchas to doing this? The cluster is a production system and I can't really have the whole thing reset, reboot, or do other unexpected, weird stuff that affects the guests on the cluster - any advice?
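For reference, what I'm planning to run is roughly the following (node names below are placeholders, not my real hostnames - the idea is to restrict local-lvm to the four nodes that actually have the LVM-thin pool):

# limit the local-lvm storage definition to the nodes that really have it (placeholder names)
pvesm set local-lvm --nodes pve1,pve2,pve3,pve4

My understanding is that this just edits the nodes list in storage.cfg and doesn't touch any disks or running guests - but that's exactly the kind of thing I'd like someone to confirm before I run it on production.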
Thanks - really enjoying the product!