Hi
@mikewich, welcome to the forum. You sound a bit frustrated; I get it, rebuilding nodes is never fun.
To help effectively, we'll need to see the actual system state from each node. A few things worth keeping in mind: FC connectivity and Multipath are standard Linux administration, not PVE-specific. PVE runs on Debian with an Ubuntu-derived kernel, so if your storage vendor has a Linux guide, that's a great reference.
First, verify that the kernel sees the disk on each node.
Run lsscsi and lsblk on all three nodes and share the output (use CODE tags or attach text files). It's also worth checking dmesg and journalctl for FC-related messages, especially on the two problem nodes.
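For reference, something like the commands below gathers that state in one pass; device names and driver names will of course differ on your hardware:

```shell
# SCSI devices as the kernel sees them (needs the lsscsi package)
lsscsi

# Block device tree with WWNs; a LUN reachable over two FC paths
# should show up as two sdX devices sharing the same WWN
lsblk -o NAME,SIZE,TYPE,WWN,MOUNTPOINT

# Recent kernel messages from common FC HBA drivers and the SCSI layer
dmesg | grep -iE 'qla|lpfc|fc_|scsi' | tail -n 50

# Same, but from persistent logs in case the node was rebooted since
journalctl -k -g 'fc|scsi' --no-pager | tail -n 50
```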
Second, confirm that lsscsi/lsblk show the same LUN as duplicate devices (one per path), then proceed with multipath configuration using any available Linux guide; your storage vendor's guide is preferred here. The fact that the multipath output changed on the fly may indicate an issue at the FC layer, and "dmesg" is a good place to check for errors. Once you have a stable multipath config, adding "multipath -ll" output from each node will help forum members get the full picture.
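If multipath has never been configured on these nodes, a minimal /etc/multipath.conf along these lines is a common starting point. This is a sketch only: the exact settings, especially any device section, should come from your storage vendor's guide, and the behavior of find_multipaths varies between multipath-tools versions.

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
```

After editing, restart the daemon with "systemctl restart multipathd" and check "multipath -ll" to confirm you get a single DM device with all expected paths in active/ready state.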
Third, once multipath is stable, you can start with the LVM Physical Volume/Volume Group setup. LVM has to be layered on top of the DM (multipath) device, not an underlying sdX path. If you've already run this a few times during troubleshooting, running wipefs on the volume before trying again will help avoid leftover metadata causing issues.
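Assuming the multipath device came up as /dev/mapper/mpatha (a hypothetical name; check "multipath -ll" for yours), the sequence looks roughly like this. The VG and storage names are examples, not requirements:

```shell
# Clear leftover signatures from previous attempts.
# CAUTION: wipefs -a is destructive; triple-check the device name.
wipefs -a /dev/mapper/mpatha

# Create the PV and VG on the DM device, never on the raw sdX paths
pvcreate /dev/mapper/mpatha
vgcreate shared-vg /dev/mapper/mpatha   # "shared-vg" is an example name

# Verify the PV really sits on the multipath device
pvs -o pv_name,vg_name,dev_size

# Register the VG in PVE as shared LVM storage (run on one node only)
pvesm add lvm shared-lvm --vgname shared-vg --shared 1
```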
Give it another try, and if you get stuck, please supply the details of your system along with your question.
Here is an article we wrote for a similar environment:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it is iSCSI-focused, the core concepts are the same.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox