[SOLVED] External Storage Server for 2 Node Cluster

LG-ITI · New Member · Jan 5, 2026
I've recently added another Proxmox host and a QDevice to my environment in order to get some redundancy. However, I've now noticed that my current storage solution doesn't seem suited to the goals I want to achieve.

I currently have a Linux storage server on which I've set up a ZFS pool. This pool was connected to the Proxmox host via ZFS over iSCSI.

Now, with the second Proxmox host, I was hoping I could use this storage as shared storage for the cluster, so that one host could automatically take over running the VMs when the other fails. But as I understand it now, ZFS isn't suited to this usage.

So my question is: what other storage solutions could I use instead of ZFS over iSCSI?

If possible, I would like my storage solution to support automatic failover between the two Proxmox hosts. If automatic failover isn't possible, downtime should still be kept to a minimum.
I would also like to be able to manage snapshots within the Proxmox environment.

Ceph only seems to be possible with at least 3 nodes and only really makes sense with at least 4. As I'm not planning to add any more nodes, I would like to avoid it.

Is there a good shared storage solution for my goals, or should I add additional local storage to my Proxmox hosts and set up ZFS replication between them?
 
Hi @LG-ITI, welcome to the forum.

First, let’s make sure we are aligned on what “ZFS-over-iSCSI” means: This approach allows you to programmatically expose virtual disks as iSCSI LUNs. These virtual disks are backed by ZFS. The consumers of these LUNs (the VMs) are not aware that ZFS is involved; from their perspective, it is simply raw iSCSI storage.

ZFS itself is not a cluster-aware (shared) filesystem. However, in this use case, only the ZFS volume management layer is being used. The resulting volumes are exposed to hosts one at a time, i.e. there is no concurrent multi-access. As such, the “ZFS-over-iSCSI” scheme is compatible with a PVE cluster and supports automatic failover.
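For reference, a minimal sketch of the corresponding /etc/pve/storage.cfg entry, assuming a LIO target on the Linux server (the storage ID, pool name, portal address, and IQN are placeholders; note that PVE also needs root SSH access to the storage server so it can create the zvols):

Code:
zfs: san-zfs
    blocksize 4k
    iscsiprovider LIO
    pool tank
    portal 192.168.1.50
    target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.example
    lio_tpg tpg1
    content images
    sparse 1

Because every node reaches the same target, the storage is marked shared across the cluster, and HA can restart a VM on the surviving node.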

PVE ZFS replication is a different type of approach. It relies on ZFS volumes that are local to each node, combined with ZFS replication between nodes.
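A minimal sketch of that approach, assuming a VM with ID 100 and a second node named pve2 (both placeholders):

Code:
# replicate VM 100's local ZFS disks to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 50

# check replication job status
pvesr status

Keep in mind that on failover you can lose up to one replication interval of data, since the copies are asynchronous.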

You can use either approach, and each comes with its own advantages and trade-offs. From a business and availability perspective, the primary concern with your ZFS-over-iSCSI design is the single point of failure introduced by the Linux server providing the iSCSI service.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
LG-ITI said:
I would also like to be able to manage snapshots within the Proxmox environment.
There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device):
1. ZFS over iSCSI, as @bbgeek17 explained.
2. qcow2 over NFS: install nfsd on the storage server, add the dataset to /etc/exports, and mount it on the PVE hosts as an NFS storage (sketch below).
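A minimal sketch of option 2, assuming the dataset is mounted at /tank/pve on the storage server and the PVE hosts sit on 192.168.1.0/24 (paths, addresses, and the storage ID are placeholders):

Code:
# on the Linux storage server: export the dataset over NFS
echo '/tank/pve 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# on any PVE node: add the share as cluster-wide storage
pvesm add nfs nfs-store --server 192.168.1.50 --export /tank/pve \
    --content images --options vers=4.2

qcow2 disks placed on that storage can then be snapshotted from the PVE GUI, which covers your snapshot requirement.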

Option 1 will perform better, but is more sensitive and complex. Bear in mind that in either case you are wholly dependent on your storage server for the entire cluster's function: a connectivity fault, system updates, etc. will take out the entire cluster.
 
Thank you for the replies! I guess I misunderstood how ZFS over iSCSI actually functions.

In the future, I will probably only use the Linux storage server for non-critical VMs, so I'm fine with it being a single point of failure.