VE_9.0.1 Fibre Channel: only 1 of 3 hosts sees the storage as active. Status Unknown

mikewich

New Member
Feb 27, 2026
After following 10 different articles, including 5 from this site, I got 1 node to successfully accept the new FC storage. The other 2 show it there, but with a status of Unknown, and when checking the services it says the storage (5T size) is inactive. The strange thing is that on the working node we moved a running VM to it and it works fine. Any ideas? I have spent hours trying to get it to work. I've rebuilt one of the nodes 3 times now, and no matter what, it will not see it at all. The other saw it for about 3 min: I could run the "multipath -ll" command and it displayed correctly, but 3 min later no more, even after rebooting it. Nada, zero, zilch, nyet!

I am asking if someone has seen this issue and can help me get it working on the other 2 nodes. Thank you all for your time.
 
Hi @mikewich, welcome to the forum. You sound a bit frustrated; I get it, rebuilding nodes is never fun.

To help effectively, we'll need to see the actual system state from each node. A few things worth keeping in mind: FC connectivity and Multipath are standard Linux administration, not PVE-specific. PVE runs on Debian with an Ubuntu-derived kernel, so if your storage vendor has a Linux guide, that's a great reference.

First, verify the kernel sees the disk on each node
Run lsscsi and lsblk on all three nodes and share the output (use CODE tags or attach as text files). Also worth checking dmesg and journalctl for any FC-related messages, especially on the two problem nodes.
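As a concrete illustration of what to look for in that output: a healthy node should show one sdX entry per FC path, all at the LUN size (5T here). The snippet below runs the grouping logic against a saved listing; the sample data is copied from the lsblk output posted later in this thread, and the file path and awk filter are my own, not anything PVE ships.

```shell
# Count candidate FC paths in a saved "lsblk" listing.
# Sample lines below come from the lsblk output posted later in the thread;
# /tmp/lsblk_sample.txt and the awk filter are illustrative assumptions.
cat > /tmp/lsblk_sample.txt <<'EOF'
sdb 8:16 1 0B 0 disk
sdc 8:32 0 5T 0 disk
sdd 8:48 0 5T 0 disk
sde 8:64 0 5T 0 disk
sdf 8:80 0 5T 0 disk
sdg 8:96 0 5T 0 disk
sdh 8:112 0 5T 0 disk
sdi 8:128 0 5T 0 disk
sdj 8:144 0 5T 0 disk
EOF

# Field 4 is SIZE and field 6 is TYPE in the default lsblk layout.
awk '$4 == "5T" && $6 == "disk" { n++ }
     END { print n+0, "candidate FC paths" }' /tmp/lsblk_sample.txt
# -> 8 candidate FC paths
```

If a problem node shows fewer 5T disks than the working node (or zero), the issue is below multipath, at the FC/HBA layer.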

Second, confirm that "lsscsi"/"lsblk" show the duplicate devices you expect (one sdX per path), then proceed with the Multipath configuration using any available Linux guide; your storage vendor's is preferred here. The fact that the Multipath output changed on the fly may indicate an issue at the FC layer, and "dmesg" is a good place to check for errors. Once you have a stable Multipath config, adding "multipath -ll" output from each node will help forum members get the full picture.
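For context, the vendor-specific part of that configuration usually lives in a device stanza in /etc/multipath.conf. The fragment below is only an illustrative sketch of what such a stanza looks like for a Pure FlashArray; the authoritative values (path_selector, ALUA settings, timeouts) should come from Pure's own Linux recommended-settings guide for your array model and kernel, not from this post.

```
devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_grouping_policy "group_by_prio"
        prio                 "alua"
        hardware_handler     "1 alua"
        path_selector        "service-time 0"
        path_checker         "tur"
        failback             "immediate"
        fast_io_fail_tmo     10
        no_path_retry        0
    }
}
```

After editing, reload with "systemctl restart multipathd" and re-check "multipath -ll".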

Third, once Multipath is stable, you can move on to the LVM Physical Volume/Volume Group setup. The LVM has to be layered on top of the DM (multipath) device, not an underlying sdX. If you've already run this a few times during troubleshooting, running wipefs on the volume before trying again will help avoid leftover metadata causing issues.
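The LVM layering step can be sketched as below. Because these commands are destructive, the sketch writes them to a plan file for review instead of executing them; the device alias and VG name are hypothetical placeholders, not from this thread — substitute the DM device shown by "multipath -ll", never an underlying /dev/sdX path.

```shell
#!/bin/sh
# Sketch of the LVM-on-multipath step, written to a review file rather than
# executed so nothing is destroyed by accident.
# DEVICE and VG are hypothetical names for illustration only.
DEVICE=/dev/mapper/pure-vol01   # assumption: alias defined in multipath.conf
VG=vg_fc_shared                 # hypothetical shared volume group name

cat > /tmp/lvm_plan.sh <<EOF
wipefs -a $DEVICE   # clear leftover signatures from earlier attempts
pvcreate $DEVICE    # PV goes on the multipath DM device, not sdX
vgcreate $VG $DEVICE
EOF

# Show the plan; run it manually once you have verified the device path.
cat /tmp/lvm_plan.sh
```

Once the VG exists, add it in PVE as LVM storage marked "shared" so all three nodes can use it.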

Give it another try, and if you get stuck, please supply the details of your system along with your question.

Here is an article we wrote for a similar environment: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it is iSCSI-focused, the principal concepts are the same.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The issue we are having is that on the other 2 nodes "multipath -ll" doesn't work. They see the multipath volume, as it displays under Disks, but they will not display it correctly, nor allow lower-level tools to access it. I'm kind of at a loss.
 
The issue we are having is that on the other 2 nodes "multipath -ll" doesn't work. They see the multipath volume, as it displays under Disks, but they will not display it correctly, nor allow lower-level tools to access it. I'm kind of at a loss.
My recommendations for next steps are unchanged. There is nothing else I can suggest as you continue to avoid providing actual system output.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The other commands either did not return any errors, or "multipath -ll" does nothing, so I cannot post. :(

root@txprx01n02:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0 223.1G  0 disk
|-sda1                 8:1    0  1007K  0 part
|-sda2                 8:2    0     1G  0 part /boot/efi
`-sda3                 8:3    0 222.1G  0 part
  |-pve-swap         252:0    0     8G  0 lvm  [SWAP]
  |-pve-root         252:1    0  65.5G  0 lvm  /
  |-pve-data_tmeta   252:2    0   1.3G  0 lvm
  | `-pve-data-tpool 252:4    0 129.9G  0 lvm
  |   `-pve-data     252:5    0 129.9G  1 lvm
  `-pve-data_tdata   252:3    0 129.9G  0 lvm
    `-pve-data-tpool 252:4    0 129.9G  0 lvm
      `-pve-data     252:5    0 129.9G  1 lvm
sdb                    8:16   1     0B  0 disk
sdc                    8:32   0     5T  0 disk
sdd                    8:48   0     5T  0 disk
sde                    8:64   0     5T  0 disk
sdf                    8:80   0     5T  0 disk
sdg                    8:96   0     5T  0 disk
sdh                    8:112  0     5T  0 disk
sdi                    8:128  0     5T  0 disk
sdj                    8:144  0     5T  0 disk
 
Also, we are using a Pure FlashArray. I followed their instructions exactly, as well as several other details from other places.
 
The other commands either did not return any errors, or "multipath -ll" does nothing, so I cannot post
This is not enough information.

Given that you are using Pure and their support is known to be very good, perhaps you should reach out to them for help.


Here is a list of things that you need to minimally provide so that forum volunteers can try to assist:

- Proxmox version across each node
- Node names and identification on whether you consider it a working or non-working node
- For EACH node, USING CODE TAGS (</> from the text box menu):
-- lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL
-- lsscsi -v
-- cat /etc/multipath.conf |grep -Ev "^$|#"
-- multipath -ll
-- multipathd -k "show status"
-- multipath -v3 // use SPOILER tag to improve readability
-- /lib/udev/scsi_id --whitelisted --device=/dev/sdX // Run this against all LUNs
-- journalctl -u multipathd // use SPOILER
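One useful sanity check on the scsi_id item above: every sdX path to the same LUN must report the same WWID, so grouping the results tells you immediately how many distinct LUNs the node actually sees. The loop is shown in a comment; the sample file lets the grouping logic run without FC hardware, and the WWID values in it are fabricated placeholders, not real Pure identifiers.

```shell
# Real collection step (commented out; needs FC hardware and root):
#   for d in /dev/sd[c-j]; do
#       echo "$d $(/lib/udev/scsi_id --whitelisted --device=$d)"
#   done > /tmp/wwids.txt
# Fabricated sample data so the grouping logic can be demonstrated:
cat > /tmp/wwids.txt <<'EOF'
/dev/sdc 3624a9370aaaa0001
/dev/sdd 3624a9370aaaa0001
/dev/sde 3624a9370aaaa0001
/dev/sdf 3624a9370aaaa0001
/dev/sdg 3624a9370aaaa0001
/dev/sdh 3624a9370aaaa0001
/dev/sdi 3624a9370aaaa0001
/dev/sdj 3624a9370aaaa0001
EOF

# Count paths per WWID; field 2 is the WWID reported by scsi_id.
awk '{ n[$2]++ } END { for (w in n) print w, n[w], "paths" }' /tmp/wwids.txt
# -> 3624a9370aaaa0001 8 paths
```

Eight paths sharing one WWID means one LUN seen over eight paths, which is what multipath should collapse into a single DM device. Two or more WWIDs means multiple LUNs are presented.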


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK, we resolved the issue. Stupid simple: we had a "-" (dash) and an "_" (underscore) in the name. When we redid the DS presented, we removed those and BAM, it all came alive. Leave it to stupid simple issues.

I call that a bug. LOL