Request: SAS HBA LUN Sharing Between Proxmox Cluster Hosts (Like VMware)

RodolfoRibeiro
Feb 24, 2026
Hi Proxmox team and community,


In Brazil, a very common virtualization setup is a 2-host cluster with direct SAS storage connection via HBA cards (e.g., LSI/Avago HBAs). This architecture allows both hosts to share the same virtual LUNs from the SAS storage array/enclosure, enabling HA, live migration, and shared storage without expensive FC/iSCSI switches.


VMware vSphere supports this natively with multipath zoning on SAS HBAs—the hosts see the same virtual LUNs as a single datastore. Could Proxmox VE add official support for sharing virtual LUNs via SAS HBA multipathing between cluster nodes? It would be huge for cost-effective SMB/edge deployments here.


Key benefits:


  • Virtual LUNs from direct-attached SAS JBODs/RAID enclosures, multipathed to both hosts.
  • No network storage fabric needed.
  • Battle-tested in production on VMware for years.

Anyone running this as a workaround in Proxmox (manual multipath.conf tweaks)? Native cluster integration would be amazing!


Thanks!
 
Hi @RodolfoRibeiro , welcome to the forum!

This topic is a common discussion point here on the forum. It is generally raised at least once a week, sometimes more.
To address your questions:
- Yes, many people run this type of infrastructure in production.
- Yes, you can and should use Multipath if you have multiple SAS connections per host.
- There are multiple guides, both on the forum and elsewhere, that can help you, e.g. https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
- If your storage vendor has a Linux connectivity guide, you should use it. PVE is a Debian-based solution.
- There is no PVE-supported equivalent to VMFS.
- The recommended option is non-thin LVM, which is a volume manager, not a filesystem.
- You should not run a two-node cluster with PVE. You need at least 3 cluster members; the third can be a quorum device (QDevice): https://pve.proxmox.com/wiki/Cluster_Manager
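As a rough sketch of how the multipath + non-thin LVM pieces fit together (the WWID in the device path, the volume group name, and the storage name below are all placeholders, not taken from this thread):

```shell
# On each node: verify that multipathd sees the LUN over all SAS paths.
multipath -ll

# On ONE node only: create a PV and a (non-thin) VG on the multipath device.
# The WWID here is a placeholder; use the one reported by `multipath -ll`.
pvcreate /dev/mapper/36000d31000abcd000000000000000001
vgcreate shared_vg /dev/mapper/36000d31000abcd000000000000000001

# On one node: register the VG cluster-wide as shared LVM storage.
pvesm add lvm shared-sas --vgname shared_vg --shared 1 --content images,rootdir
```

With `--shared 1`, PVE treats the volume group as visible from every node, which is what allows live migration and HA to work on top of it.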


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Just so you understand my current setup:
I have two Dell Technologies servers connected to a Dell PowerVault ME50 storage.
This connection is done through SAS HBA, so there is no IP connection (no iSCSI) between the storage and the servers.
It is a physical connection using HBA cables — a direct cable connection between the hosts and the storage.
I need to share the LUNs between both servers.
I tried using Linux Multipath, but when one of the hosts lost power, the VM became corrupted.
 
Hi @RodolfoRibeiro , thank you for clarifying. This matches my initial understanding of your situation. SAS and iSCSI are transfer and connectivity protocols. While the article I suggested uses iSCSI as an example, once you are beyond basic storage connectivity the concepts are the same. Specifically, the Multipath and LVM layers operate on Linux kernel block devices and are not dependent on the transfer protocol.
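One transport-agnostic sanity check, assuming hypothetical device names (yours will differ): both hosts should report the same SCSI WWID for a shared LUN, since that identifier is what multipath keys on.

```shell
# Run on both hosts: the WWID is stable across hosts and paths,
# so the same LUN must print the same identifier on both servers.
/lib/udev/scsi_id -g -u -d /dev/sda

# Show the multipath topology: one map per LUN, one path entry per SAS cable.
multipath -ll
```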

I tried using Linux Multipath, but when one of the hosts lost power, the VM became corrupted.
This is unlikely to be Multipath-related.

While not an exact match, we have shown such possibility here:
https://kb.blockbridge.com/technote/proxmox-qemu-cache-none-qcow2/index.html
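The article above concerns QEMU disk cache behavior; as a quick, hypothetical example of inspecting and pinning a VM disk's cache mode (the VM ID 100 and the volume name are placeholders):

```shell
# Show the disk configuration lines for the VM; if no cache= option is
# present, PVE uses the default (cache=none) for that disk.
qm config 100 | grep -E '^(scsi|virtio|sata|ide)[0-9]+:'

# Example of setting the cache mode explicitly on an existing disk.
qm set 100 --scsi0 shared-sas:vm-100-disk-0,cache=none
```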


 
I tried using Linux Multipath, but when one of the hosts lost power, the VM became corrupted.
Sounds like a cabling problem. You would normally connect each controller to each host, so four cables in total.

without expensive FC/iSCSI switches.
The iSCSI switches are just network switches, and you don't need them. You can connect FC or Ethernet directly, just as you would with your SAS HBA. I've never seen SAS HBAs used, because they're slower than all the other options. Hardware costs are similar, so there is no point in using SAS in my book. We use 32 Gb FC or 25 Gb Ethernet directly attached (so without any switches) to the ME5, and then multipathed. SAS is 6, 12, or 24 Gb, so it is always slower than any of the other technologies mentioned.
 
You can connect FC or Ethernet directly, just as you would with your SAS HBA. I've never seen SAS HBAs used, because they're slower than all the other options.
For the generations of hardware where iSCSI and SAS were offered as available SKUs, there was no meaningful performance difference; 16G FC simply had more headroom to fill cache. When 25 Gb iSCSI products started shipping, THOSE were faster (even compared to 16G FC). A SAS-4 (24G) host connection is theoretically possible, but I don't see anyone asking for that, since iSCSI (or better yet, NVMe-oF) is so much more useful regardless of "speed."