[SOLVED] Proxmox Cluster - only one node sees SAS storage

pradelik

New Member
Mar 28, 2025
My setup:
2x DELL R640 and 1x DELL R630
1X DELL ME4024

All servers are connected to the ME4024 using one SAS cable per node. I have a PVE cluster with 3 servers, but only one can see the ME4024.
 

Attachments ("Zrzut ekranu" is Polish for "screenshot")

  • Zrzut ekranu 2025-03-29 002107.png
  • Zrzut ekranu 2025-03-29 002129.png
  • Zrzut ekranu 2025-03-29 002212.png
  • Zrzut ekranu 2025-03-29 002237.png
  • Zrzut ekranu 2025-03-29 002410.png
  • Zrzut ekranu 2025-03-29 002509.png
  • Zrzut ekranu 2025-03-29 002532.png
Hi @pradelik, welcome to the forum.

PVE uses a Debian userland and an Ubuntu-derived kernel, so device detection works the same as on any other Linux-based server.

If the kernel isn’t detecting the disk at all, the issue is most likely hardware-related.

I’m not familiar with the ME4, but you may need to check SAS zoning or other SAN configuration settings. Try the basics first (a few diagnostic commands follow the list):
  • Check cables and connections
  • Swap the cable from node1 to node2 and see if the issue follows the cable
  • Review SAN settings to ensure proper zoning/configuration
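As a quick sketch (device names will differ on your systems), these standard Linux commands show whether the kernel detects anything behind the SAS HBA:

Code:
# list SCSI devices the kernel has detected (from the lsscsi package)
lsscsi
# list all block devices and their sizes
lsblk
# check the kernel log for SAS/HBA and disk detection messages
dmesg | grep -iE 'sas|scsi'

If the LUN doesn’t appear in any of these, the host never saw it, and the problem is in cabling or array-side mapping rather than in PVE.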
Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
On your ME4, are you exporting volumes via iSCSI or SAS? Either way, your problem is that you haven't enabled multipath. Without it, devices only allow one volume owner at a time. Multipath provides access from multiple systems at a time, with the implied understanding that those hosts are coordinating so as not to step on each other's block access. It's why NFS is a much easier way to export common storage for a cluster. So just be aware: if you aren't connecting to the same volume via a clustering mechanism, you run the risk of seriously corrupting the data in the volume.
 
Multipath is for a single physical server using more than one path (FC, iSCSI, or SAS) to access the same volume; it provides load balancing and path failover for that single server only. If multiple physical servers need to coordinate so as not to step on each other's block access, that requires cluster software. You cannot just install multipath software on each physical server and expect those servers to coordinate to avoid stepping on each other's block access!
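For reference, on PVE the per-host path handling described above comes from the multipath-tools package. A minimal check, as a sketch (no ME4-specific configuration shown; consult Dell's documentation for recommended multipath.conf settings):

Code:
# install the multipath tools on each node
apt install multipath-tools
# show the multipath maps: each LUN appears once, with its paths and their states
multipath -ll

The path states in that output also make the active/passive controller behaviour mentioned below easy to spot.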
 
On the ME4, isn't the controller active/passive? Without multipath, servers can get stuck connected to the passive controller during various pathing events. We used to see that bs on the EqualLogic garbage all the time. ME4s can have similar issues.
 
In my opinion, multipath only needs to be configured when I use e.g. 2 cables from one server to the ME4 controller.
Maybe I did something wrong when mapping servers to a volume, but this scenario worked with VMware (3 hosts to 1 LUN).
 
OK. Success! I recreated the host group and restarted the storage controller, and now I can see the ME4 storage on all hosts. I have a question: how should I initialize it to allow all hosts to mount it?
 
That's great. I personally recommend creating LVM on top of the shared volume; that is the simplest way, and you can then use it as shared LVM storage. Its limitation is that you can't create snapshots for VMs on this type of storage (it's not supported). If you consider snapshots important, you may need to create a cluster filesystem on the shared volume, e.g. GFS or OCFS; you can reference the following discussion about GFS.
And the following document, FYR, on the storage types supported by Proxmox VE:
https://pve.proxmox.com/wiki/Storage
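For reference, a minimal sketch of the shared-LVM approach; the device path /dev/mapper/me4-lun0 and the names vg_me4 / me4-lvm are placeholders, not values from this thread. The first two commands run on one node only:

Code:
# on ONE node: initialize the shared LUN as an LVM physical volume
pvcreate /dev/mapper/me4-lun0
# on the same node: create a volume group on that PV
vgcreate vg_me4 /dev/mapper/me4-lun0
# register it as shared LVM storage for the whole cluster (also possible via the GUI)
pvesm add lvm me4-lvm --vgname vg_me4 --shared 1 --content images,rootdir

The --shared 1 flag tells PVE the volume group is reachable from every node, so the storage definition applies cluster-wide.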
 
Will it be visible on node 2 when I create the LVM on node 1?
Yes. I have experience creating a 5-node Proxmox VE cluster with a Dell EMC Unity XT380 over Fibre Channel; in that project I used shared LVM to build up a shared storage environment.
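As a quick sketch of how to verify that (names follow the placeholder example above): after creating the volume group on node 1, rescan and list it on the other nodes.

Code:
# on node 2 and node 3: rescan for LVM physical volumes and volume groups
pvscan
vgscan
# the volume group created on node 1 (e.g. vg_me4) should now be listed
vgs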
 
Hi @pradelik , glad you were able to solve this.
You may find this article helpful in your next steps: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox