Making iSCSI storage accessible to all cluster nodes without relying on a single host.

Tycho_

New Member
Oct 20, 2025
Hi everyone,


I currently have a Proxmox cluster with 3 hosts and a NAS acting as an iSCSI target.
The NAS is connected to the hosts through a dedicated switch.


Right now, the iSCSI target is only attached to one node, and the other nodes access it indirectly.
This creates a single point of failure: when that main node goes offline, the storage becomes unavailable for the rest of the cluster.


Current setup:


  • 3 × Proxmox hosts
  • NAS as iSCSI target
  • Connected through a switch
  • NAS file system: Btrfs
  • Exported via iSCSI → appears in Proxmox as raw
  • Assigned as LVM storage in Proxmox
  • VMs are stored on this LVM, shared across hosts (but dependent on the main host being online)
  • Shared storage required for HA and live migration

What I want to achieve:


  • Have all nodes independently connect to the NAS iSCSI target.
  • Ensure that the shared storage remains available even if one node goes down.
  • Maintain stable access for HA and VM migrations.

Questions:


  1. What’s the recommended way to connect the iSCSI target to all nodes directly?
  2. Can a single iSCSI LUN safely be accessed by multiple initiators simultaneously, or would I need a clustered file system (e.g., Ceph, ZFS, OCFS2)?
  3. Is using Btrfs over iSCSI with LVM safe for multiple hosts, or could it lead to corruption?
  4. Should I consider switching to ZFS for better HA and replication support?
  5. Should I configure multipath (MPIO) for redundancy, and if so, what’s the recommended approach?
  6. Are there any known caveats or gotchas with this type of setup?

Any real-world examples, configuration tips, or best practices would be highly appreciated.
I want to eliminate the single point of failure and make the storage setup more robust for HA workloads.


Thanks in advance!
 
Hi @Tycho_, welcome to the forum.

What I want to achieve:


  • Have all nodes independently connect to the NAS iSCSI target.
  • Ensure that the shared storage remains available even if one node goes down.
  • Maintain stable access for HA and VM migrations.
There are many guides out there about connecting iSCSI storage to a PVE cluster. For example, https://www.youtube.com/watch?v=um31y0qVkLk

  1. What’s the recommended way to connect the iSCSI target to all nodes directly?
You can use the PVE GUI to create iSCSI storage. There are other approaches, depending on your network arrangement:
https://kb.blockbridge.com/technote...nderstand-multipath-reporting-for-your-device
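For reference, here is a minimal CLI sketch of that workflow. The portal IP, target IQN, device path, and storage names are placeholders, not values from your setup:

Code:
# Run once on any node; /etc/pve/storage.cfg is cluster-wide, so every node picks it up
pvesm add iscsi nas-iscsi --portal 192.168.10.10 --target iqn.2025-10.com.example:nas --content none

# One-time: put LVM on the LUN (verify the device path first with lsblk or lsscsi)
pvcreate /dev/sdb
vgcreate vg_nas /dev/sdb

# Register the volume group as shared LVM storage for VM disks
pvesm add lvm nas-lvm --vgname vg_nas --shared 1 --content images,rootdir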
  2. Can a single iSCSI LUN safely be accessed by multiple initiators simultaneously, or would I need a clustered file system (e.g., Ceph, ZFS, OCFS2)?
It can be, as long as the initiators never touch the same logical blocks/areas of the LUN. This is why you use LVM, which slices the LUN into independent logical volumes; Proxmox activates each VM's volume on only the node where that VM is running.
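To make that concrete: on shared LVM, every VM disk is carved out as its own logical volume, so no two nodes write to the same blocks. Illustrative lvs output (hypothetical names and sizes, using the vg_nas volume group from the sketch above):

Code:
# lvs vg_nas
  LV            VG     Attr       LSize
  vm-101-disk-0 vg_nas -wi-ao---- 32.00g   <-- active/open on the node running VM 101
  vm-102-disk-0 vg_nas -wi------- 64.00g   <-- inactive here; activated where VM 102 runs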
  3. Is using Btrfs over iSCSI with LVM safe for multiple hosts, or could it lead to corruption?
If you mean using Btrfs as the backend filesystem on the NAS: it makes no difference to the initiator whether it's ZFS or Btrfs; the backend is completely abstracted from the initiator. Whether your NAS exports a file or a ZFS volume as the iSCSI LUN is not important.
  4. Should I consider switching to ZFS for better HA and replication support?
Completely up to you.
  5. Should I configure multipath (MPIO) for redundancy, and if so, what’s the recommended approach?
Is this a homelab or a business? In either case, it's really up to you whether to have multipath. If it's a business and you want MPIO, the recommended approach is to have multiple switches, multiple network cards, and multiple heads on the NAS.
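If you do add it, the host side is standard dm-multipath plus logging in to the target over each path. A rough sketch, assuming two portal IPs (placeholders) on the NAS:

Code:
apt install multipath-tools

# /etc/multipath.conf -- minimal starting point; prefer your NAS vendor's recommended settings
defaults {
        find_multipaths "yes"
}

# Discover and log in over each portal, then confirm both paths are grouped into one device
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m discovery -t sendtargets -p 192.168.20.10
iscsiadm -m node --login
multipath -ll

Point LVM at the resulting /dev/mapper/<wwid> device rather than a raw /dev/sdX path, so the volume group always sits on the multipathed device.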
  6. Are there any known caveats or gotchas with this type of setup?
People make entire careers out of managing storage and being experts at it. There are caveats to every aspect of the infrastructure, one way or another.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 