Help Needed: Shared iSCSI Storage for PVE Cluster (Multi-node Access)

jiantao

New Member
Jun 13, 2025
Hi everyone,

I'm building a Proxmox VE cluster and currently looking for a reliable **shared storage solution** that all nodes in the cluster can access concurrently.

I'm considering using **iSCSI as backend storage**, and I would like to know:

1. Is there any recommended and **stable setup** for using iSCSI as a **shared VM storage** across the whole PVE cluster?
2. How can I properly configure **multi-node access** to an iSCSI target without risking LVM or file system corruption?
3. Should I use **LVM over iSCSI**, **LVM-thin**, or **ZFS over iSCSI**? Any pros and cons?
4. Is there a preferred open-source iSCSI target solution (e.g., **TGT**, **LIO**, **FreeNAS**, **TrueNAS**, **Windows Server**) that works well with PVE?
5. Can Proxmox natively manage iSCSI multipath and ensure **HA failover support**?

My goal is to allow all PVE nodes to:
- Access and use the same storage volume,
- Migrate VMs between nodes with shared disks,
- Avoid file locking or data corruption issues.

If anyone has experience with a working configuration (e.g., multipath, LVM with shared VG, or ZFS over iSCSI), I would really appreciate your advice or a sample config.

Thanks in advance for any help!

Best regards,
 
Thanks
Is it feasible, and how risky would it be, to connect all 5 PVE nodes to a single iSCSI LUN and then create the same ZFS pool on each of them?
 
No, it is not feasible.

You may want to read this KB article: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I agree with the author. One of our sites has a cluster of 3 HPE DL360 Gen10 servers attached to an HPE MSA2050 storage array. The servers are connected to the storage via FC. A "thick" LVM volume group has been created on top of the LUNs and is shared between all hosts. Everything works fine.

(Screenshot attached: 1749816088583.png)

P.S. Don't forget that FC/iSCSI storage is a "single point of failure" and for large production environments it is better to use Ceph.

P.P.S. The screenshot shows the cluster for my developer team =)
 
It seems like this question is asked daily. I don't know who to @mention, but it might be a good idea to post a sticky.

1. Is there any recommended and **stable setup** for using iSCSI as a **shared VM storage** across the whole PVE cluster?
Yes. See https://pve.proxmox.com/wiki/ISCSI_Multipath for the multipathing setup. As for the storage pool setup, you have 2 options (3 if you include a CFS, but that wouldn't be supported by anyone). Before continuing to the discussion of options, please review https://pve.proxmox.com/wiki/Storage for a complete discussion of what the PVE toolset provides built-in support for.
option 1: use a LUN as a storage pool. per the user manual:
https://pve.proxmox.com/wiki/Storage:_iSCSI said:
iSCSI is a block level type storage, and provides no management interface. So it is usually best to export one big LUN, and setup LVM on top of that LUN. You can then use the LVM plugin to manage the storage on that iSCSI LUN.
Note that only LVM-thick is supported for this method, so no snapshots would be available (a minimal configuration sketch for this option follows after the option list below).
option 2: use a LUN as a guest-mapped disk. This method is potentially more performant than LVM since it wouldn't be subject to multiple-initiator contention (when used for multiple VM disks), but the greatest benefit is its single-guest scope. This makes it practical to use hardware-level snapshots on storage that offers them, since there is only one guest to manage quiescence for, but it will require user-provided snapshot orchestration. The downside is that since there is no synchronization between PVE and the storage, LUN administration has to be performed manually for each disk; it's a pain and has greater potential for mistakes.
option 3: use a CFS (cluster-aware filesystem). There are two known options that "work" on Debian, gfs2 and ocfs2. Neither is well supported, and there is no integration or orchestration built into PVE. In effect, you would provide management and mount control of the CFS on your nodes and use the "Directory" method to provide a storage pool.
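
For option 1, a minimal sketch of the cluster-side setup could look like the following. Everything here is a placeholder for illustration (the multipath device /dev/mapper/mpatha, the VG name vg_san and the storage ID san-lvm are not from your environment); follow the wiki pages above for the full procedure.

```
# Option 1 sketch: shared thick LVM on top of an iSCSI/multipath LUN.
# Run the PV/VG creation on ONE node only; the LUN itself must already be
# visible on all nodes (iSCSI login + multipath configured everywhere).
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# /etc/pve/storage.cfg is cluster-wide. Adding the VG as an LVM pool with
# "shared 1" tells PVE that every node sees the same storage.
lvm: san-lvm
        vgname vg_san
        shared 1
        content images,rootdir
```

For option 2 the storage side is the same, but instead of a pool you hand the whole (multipath) block device to a single guest, e.g. `qm set 100 -scsi1 /dev/mapper/mpathb`; PVE then knows nothing about that LUN, so sizing, snapshots and cleanup are entirely manual.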

2. How can I properly configure **multi-node access** to an iSCSI target without risking LVM or file system corruption?
All of the above options provide cluster-safe means of attachment.

3. Should I use **LVM over iSCSI**, **LVM-thin**, or **ZFS over iSCSI**? Any pros and cons?
As mentioned above, if you are using option 1 you'd use LVM over iSCSI. LVM-thin cannot be used. ZFS over iSCSI is a special case that applies to storage solutions that use ZFS as their backing store AND provide an API for which PVE ships a plugin (ssh + zfs tooling):
https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI said:
The following iSCSI target implementations are supported:
  • LIO (Linux)
  • IET (Linux)
  • ISTGT (FreeBSD)
  • Comstar (Solaris)

What this option allows is to use an API exposed by your storage to control zvol creation, snapshots, etc., which effectively provides integration for an option 2 type deployment.
If you don't know what type of target your storage is using, this option is likely not viable. If you are sufficiently motivated, it's possible to write your own plugin for a given storage solution; there are examples on the forums of people who have done just that.
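
For reference, a ZFS over iSCSI pool ends up as an entry like the following in /etc/pve/storage.cfg. This is only a sketch modelled on the documented LIO example; the storage ID, pool name, portal, target IQN and target portal group are placeholders you would replace, and PVE also needs passwordless root SSH to the storage box so it can run the zfs and target commands mentioned above.

```
# Hypothetical ZFS over iSCSI entry for a LIO target backed by ZFS pool "tank"
zfs: zfs-san
        blocksize 8k
        iscsiprovider LIO
        pool tank
        portal 192.168.10.200
        target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.abcdef123456
        lio_tpg tpg1
        content images
        sparse 1
```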

4. Is there a preferred open-source iSCSI target solution (e.g., **TGT**, **LIO**, **FreeNAS**, **TrueNAS**, **Windows Server**) that works well with PVE?
"Preferred" is a loaded word. The simple answer is no, but its really subject to what the operator's usecase and expectations are. Just about any solution that exposes a target that has a plugin (see 3) AND provides ssh access with sufficient permissions to issue zfs and target commands would work. Windows Server isnt open source, nor is there what I would consider a supportable zfs option for Windows.

5. Can Proxmox natively manage iSCSI multipath and ensure **HA failover support**?
Multipathing is handled by a daemon (multipathd), which is generic Linux software. There is no command-and-control integration into PVE as far as I know; see https://manpages.debian.org/bookworm/multipath-tools/multipathd.8.en.html for more information.
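
As a starting point, a minimal multipath setup loosely following the ISCSI_Multipath wiki page might look like this; the WWID shown is a placeholder, read the real one from one of the LUN's paths first, and repeat the setup on every node:

```
# Install the tools and read the LUN's WWID from one of its paths (sdX)
apt install multipath-tools
/lib/udev/scsi_id -g -u -d /dev/sdX

# /etc/multipath.conf -- blacklist everything, whitelist only that WWID
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
blacklist {
        wwid .*
}
blacklist_exceptions {
        wwid "3600a0b80001234567890abcdef123456"
}
multipaths {
        multipath {
                wwid    "3600a0b80001234567890abcdef123456"
                alias   mpatha
        }
}

# Apply and verify the paths
systemctl restart multipathd
multipath -ll
```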
 
Thank you all. I have an idea, though I'm not sure if it's feasible: install the PVE system directly on the shared storage and boot from it, then add it to Ceph so it serves as the storage server for all nodes. Is this plan feasible?