iSCSI Shared Storage Configuration for 3-Node Proxmox Cluster

Jochem

Oct 20, 2025
Hi, I'm trying to configure shared iSCSI storage for my 3-node Proxmox cluster. I need all three hosts to access the same iSCSI storage simultaneously for VM redundancy and high availability.
I've tested several storage configurations:
  • ZFS
  • LVM
  • LVM-Thin
  • ZFS share

Current Issue

With the ZFS share approach, I managed to get the storage working and accessible from multiple hosts. However, there's a critical problem:
  • When the iSCSI target is connected to Host 1, and Host 1 shares the storage via ZFS
  • If Host 1 goes down, the iSCSI storage becomes unavailable to the other nodes
  • This defeats the purpose of redundancy, which is exactly what we're trying to achieve

Questions

  1. Is this the correct approach? Should I be connecting the iSCSI target to a single host and sharing it, or should each host connect directly to the iSCSI target?
    If each host should connect directly: How do I properly configure this in Proxmox?
  2. What about Multipath? I've read references to multipath configurations. Is this the proper solution for my use case?
  3. Shared Storage Best Practices: What is the recommended way to configure iSCSI storage for a Proxmox cluster where:
    • All nodes need simultaneous read/write access
    • Storage must remain available even if one node fails
    • VMs can be migrated between nodes without storage issues
  4. Clustering File Systems: Do I need a cluster-aware filesystem? If a cluster filesystem is required, which one is recommended for this setup?

Additional Information

  • All hosts can reach the iSCSI target on the network
  • Network connectivity is stable
  • Looking for a production-ready solution

Has anyone successfully implemented a similar setup? What storage configuration works best for shared iSCSI storage in a Proxmox cluster?

Any guidance or suggestions would be greatly appreciated!
 
@bbgeek17 I think the common solution would be to go for a 3-node hyperconverged setup using PVE's built-in capability. There are a lot of forum entries here if you want to go for a "SAN"-ish, "central" storage solution; bbgeek17 knows more about that. Personally I think it's a somewhat "religious" kind of question: I have built solutions with both approaches and have to admit both worked and failed often :/ But that is probably just me. I went with no HA at all and just "ultrafast backup & restore", putting the "HA"-ish part (or better: cascading PBS instances) on the backup side, with a cold PVE as standby ... (RTO and RPO are not zero here, I have to admit).

 
Hi Frank, thanks for your quick response!
I understand you've moved away from HA entirely; that's actually valuable information.
Let me clarify our requirements and see if anyone has experience with this specific setup.
Our Requirements

We need:

1. iSCSI storage with two network adapters for redundancy
2. All 3 Proxmox hosts accessing the same storage simultaneously
3. Ability to back up to the same iSCSI storage (via PBS or built-in backups)
4. True HA: live migration and automatic VM restart on node failure

Is this also achievable in your current setup? Or did you move away from HA specifically because these requirements were difficult/impossible to meet reliably with central storage?
 
Hi Jochem,

ad 1) for this you need multipath
ad 2) & 4) yes, it is possible; you need to build shared storage on iSCSI, but this doesn't support snapshots. It works well.
ad 3) I strongly recommend against backing up to the same storage

Don't create LVM or ZFS on top of the iSCSI, but mount the iSCSI directly.

I don't recommend building Ceph from iSCSI drives.
For details about storage, see https://pve.proxmox.com/wiki/Storage

R.
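
For the multipath part, a minimal sketch of what this can look like on each PVE node (the example device /dev/sdb and the settings below are assumptions; follow your SAN vendor's recommended multipath settings):

Code:
# on every node
apt install multipath-tools

# find the WWID of the iSCSI LUN (example device name, yours will differ)
/lib/udev/scsi_id -g -u -d /dev/sdb

# minimal /etc/multipath.conf
cat <<'EOF' > /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF

systemctl restart multipathd
multipath -ll    # both paths to the LUN should be listed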
 
Hi kosnar,

Thanks for your reply.

When connecting the storage directly via iSCSI, it indeed becomes visible and accessible on all my hosts.
However, when adding the iSCSI target, I only see Raw storage.
Which storage type would you recommend using in this scenario?
 
@kosnar Hi ... I think kosnar gave very valuable input and has more knowledge here than I do. I have to admit I have not operated shop-floor, real-time, "brutal HA" machines in the past.

My 2 cents from many years of experience: compute rarely fails (yeah, yeah, AWS CTO quotes ... I know); storage and network are usually the culprits. From the RAC world I have seen x000K sums of consulting money wasted on botched cluster switches, and I have seen a lot of DR tests fail as well. I have also seen plenty of Virtzilla HA VMs fail to come up for a trillion reasons (e.g. very large DBs with a lot of compute).

Therefore (it's just me and my clients' workloads): simplicity is key for me, and I put the HA part (if possible) in the application and not in the infrastructure. HA in the infrastructure is always the most expensive part (logically the easiest, I know).

If you are facing internet-facing B2C applications and infrastructure, that's something else; I would likely not host that on PVE in a 3-node setup but look at what could be built with (EU) hyperscalers.

P.S.: Architecture- and security-wise (paranoia), it could make sense (if the money is there) to think about a second technology stack for achieving HA-ish/FT/DR. You could use Veeam replication capabilities (imo currently NOT available for PVE) or Zerto etc.; if one stack is "compromised", you might have the second one available. Yes, it is not an HA-delivering capability as such, but it might be worth considering in the big picture.


 
I've tested several storage configurations:
  • ZFS
  • LVM
  • LVM-Thin
  • ZFS share
Hi @Jochem, none of the above except LVM is compatible with shared-storage use.
All nodes need simultaneous read/write access
The recommended option is LVM; it gives you read/write access from all hosts to _different_ portions of the disk.
  1. Clustering File Systems: Do I need a cluster-aware filesystem? If a cluster filesystem is required, which one is recommended for this setup?
There are only two open-source (free) cluster-aware filesystems available to you. Neither is technically recommended, because neither is officially tested or qualified by the PVE team. There are third-party guides to configuring and installing these cluster-aware filesystems. Whether you can support them in production depends on your skill set and your business's appetite for risk.

You have iSCSI (block) storage and you are looking for shared access. This is the table you should refer to: https://pve.proxmox.com/wiki/Storage
Given that your choice will be LVM in 99% of cases, you may find this article helpful: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
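
For illustration, a shared LVM-over-iSCSI setup typically ends up looking roughly like this in /etc/pve/storage.cfg (the storage IDs, portal address, target IQN and VG name are placeholders, not a definitive configuration):

Code:
iscsi: san-iscsi
        portal 192.168.10.50
        target iqn.2001-04.com.example:storage.lun0
        content none

lvm: san-lvm
        vgname vg_san
        content images,rootdir
        shared 1

The volume group (vg_san here) is created once on the LUN (or on the multipath device), and "shared 1" tells PVE that all nodes see the same VG.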



 
4 True HA: live migration and automatic VM restart on node failure
Currently PVE does not provide live migration/recovery on node failure. The node is gone, there is simply nothing to recover.
There is no production-ready equivalent to VMware FT.

The VM will be restarted, but without the state it had prior to the failure. As someone else mentioned upthread, application-level redundancy would be the best approach here.
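
For context, that restart-on-failure behavior is what the PVE HA stack provides; a minimal sketch, assuming a VM with ID 100 (the VMID is a placeholder):

Code:
# mark the VM as HA-managed; on node failure it is restarted on another
# node from the shared storage, without its previous in-memory state
ha-manager add vm:100 --state started
ha-manager status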


 

I expect we are talking about HA ... options:
a) You plan to buy new hardware -> consider Proxmox hyperconverged = Ceph
b) You already have hardware and iSCSI is the only option your storage box offers -> mount iSCSI
c) The storage box allows other options, e.g. NFS -> consider those options

R.
 
You already have hardware and iSCSI is the only option your storage box offers -> mount iSCSI
Technically, you do not "mount iSCSI". Once the sessions are configured/connected, you can place a filesystem on top of the raw block and mount it.
However, as we are discussing PVE - there will be no filesystem. The user should use LVM on top of the block storage, and then feed LVM to PVE.
Multipath may be an additional step, if desired.

https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage
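
A rough sketch of those steps (portal address, target IQN, device and VG/storage names are placeholders; with multipath, use the /dev/mapper device instead of /dev/sdb):

Code:
# on every node: discover and log in to the target, and make the login persistent
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2001-04.com.example:storage.lun0 -p 192.168.10.50 --login
iscsiadm -m node -T iqn.2001-04.com.example:storage.lun0 -p 192.168.10.50 \
    --op update -n node.startup -v automatic

# on ONE node only: create the PV and VG on the LUN
pvcreate /dev/sdb
vgcreate vg_san /dev/sdb

# then add it as shared LVM storage (GUI: Datacenter -> Storage -> Add -> LVM, "Shared" ticked)
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images,rootdir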


 


I’ve tried this before, but when I power off host 1, the storage also becomes unavailable for hosts 2 and 3. How can I make sure that each host connects directly to the iSCSI storage?
 
What you have now:
With the ZFS share approach, I managed to get the storage working and accessible from multiple hosts. However, there's a critical problem:
  • When the iSCSI target is connected to Host 1, and Host 1 shares the storage via ZFS
is not one of the options I proposed. You have not provided details on what exactly you are doing, but it sounds like possibly a ZFS/iSCSI scheme.
If that's what you are trying to do, implementing HA is on you.

There are many guides on connecting a PVE cluster to iSCSI storage, for example: https://www.youtube.com/watch?v=um31y0qVkLk
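
To confirm that every host really has its own connection to the target (rather than reaching the storage through another node), each node can be checked independently; a quick sketch, reusing the placeholder names from the earlier examples:

Code:
# run on each of the three nodes
iscsiadm -m session   # a session to the SAN portal should exist on every node
multipath -ll         # both paths visible locally, if multipath is configured
pvesm status          # the shared LVM storage should be active on every node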


 
Thank you all for the valuable information!

Based on your recommendations, I'll test the following setup:

My plan:

  • Configure multipath for network redundancy (2 network adapters)
  • Connect all 3 Proxmox hosts directly to the iSCSI target
  • Use LVM on top of the iSCSI storage (not ZFS or LVM-Thin)
This should resolve my original issue where Host 1's failure brought down storage access for the entire cluster.
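
Once that is in place, a quick way to validate the setup is a test live migration; a sketch, assuming a VM with ID 100 and a target node named pve2 (both placeholders):

Code:
qm migrate 100 pve2 --online   # live migration over the shared LVM storage

After that, a hard power-off of one node should show the other two keeping their iSCSI sessions while HA restarts the affected VMs.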

I'll proceed with the multipath + LVM configuration and report back if I encounter any issues.

Thanks again everyone!
 
you need to build shared storage on iSCSI but this doesn't support snapshots. This works well.
With LVM and PVE 9 you will have snapshots, but they are thick-provisioned, so only use them when you need them or provide a lot of space.


Ability to back up to the same iSCSI storage (via PBS or built-in backups)
I'm also using this, yet I always pair it with external PBS replication. Having everything on only one storage is not good and violates the 3-2-1(-1-0) rule.
The easiest way to achieve it is to have PBS running in a VM with a lot of space attached. If you have storage tiering on the SAN side, you can have a separate LUN for PBS.
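
For illustration, the cluster would then point at that PBS VM as a backup storage; a sketch of the corresponding /etc/pve/storage.cfg entry (storage ID, address, datastore name and user are placeholders, and the fingerprint comes from your own PBS):

Code:
pbs: pbs-backups
        server 192.168.10.60
        datastore store1
        username backup@pbs
        fingerprint <PBS certificate fingerprint>
        content backup

External replication (sync jobs) to a second PBS then provides the off-storage copy.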
 