Which storage solution for 2-Node Proxmox Cluster?

rene.k

New Member
Oct 14, 2025
Hello all,

Currently we are using a 2-Node VMware Cluster with Open-E JovianDSS.

We want to migrate to a 2-Node Proxmox Cluster + QDevice.

Hardware:
CPU: 2x 18 Core Intel Xeon Gold 6154
RAM: 16x 32GB ECC DDR4 SDRAM (512 GB)
Storage: 10x 960GB Samsung PM9A3 NVMe
4x 10GbE Ethernet
2x 40Gb InfiniBand

This cluster is for our internal systems: mostly Windows Server VMs, some databases, and some user VMs. We don't need a scalable cluster, just a stable and highly available one.
We want data to be written to both hosts simultaneously.


Now my question is, which Storage Solution should we use for a 2-Node Cluster?

At first we wanted to use Ceph, but that's not recommended for a 2-node cluster.

Is there a solution similar to Ceph that works for a 2-node cluster?

I found LINSTOR with DRBD. Do you have any experience with that?


Thanks in advance!
René
 
Hello,

do you happen to have your Open-E JovianDSS running on dedicated storage hardware? I have no experience with the platform, but from what I could find it supports NFS, iSCSI and CIFS; they even have a doc file on it: https://www.open-e.com/site_media/d..._for_Proxmox_VE_Best_Practices_Guide_1.00.pdf
And a website (although it's more PR than anything else):
https://www.open-e.com/solutions/op...fficient-virtualization-open-source-solution/
Since under the hood it's Linux with ZFS, maybe even ZFS over iSCSI would work, but I wouldn't bet on it (since normally that needs support from the storage vendor).

Instead of Jovian you could also use any other storage hardware. In any case I would go with one that supports NFS or iSCSI, due to the limitations of shared block storage under Proxmox VE (snapshot support is still experimental for LVM-thick, and there is no thin provisioning). In fact, I know of multiple service providers who tend to set up their customers' small clusters with NFS if the storage hardware allows it. In theory NFS has worse performance than block storage (due to the overhead of a filesystem), but that doesn't matter if it's still good enough. And it's easy to manage and has all the features one would want, like snapshots (if you use qcow2 images for the VMs).

One caveat: NFS isn't really secure (due to its origins in a more innocent time), so I would run the storage hardware and the network connections to it on a dedicated storage network, not used by or accessible from anything else. That's a good idea in any case, also for ZFS storage replication or Ceph.
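If you go the NFS route, the storage definition on the Proxmox VE side is a one-time entry in /etc/pve/storage.cfg (or via the GUI). A minimal sketch; the server address, export path and storage ID "nfs-vmstore" are placeholders for your environment:

```
# /etc/pve/storage.cfg -- hypothetical NFS entry; adjust server,
# export path and storage ID to your setup
nfs: nfs-vmstore
        server 10.10.10.100
        export /tank/proxmox
        content images,rootdir
        options vers=4.2
```

The qcow2 format (needed for snapshots) is then chosen per disk when you create a VM on that storage.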

Of the integrated options, the built-in storage replication based on ZFS might fit the bill: like Ceph, it lets you use the internal storage of your server nodes, removing the need for dedicated storage hardware. It supports high availability, but it's not really shared storage since the replication is asynchronous: by default the data is synced every 15 minutes, and this can be configured from one minute up to multiple hours or (AFAIK) even more (I don't know the actual upper limit, but I would expect anything above a few hours to be impractical for most workloads). Depending on your use case this might be enough (like in this success story from the Proxmox website: https://www.proxmox.com/en/about/about-us/stories/story/farmacia-nova-da-maia ) or not acceptable; that is for you to decide. If it's not acceptable, you will need real shared storage, where (to repeat myself) something NFS- or iSCSI-based is probably the best way to go.
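For reference, replication jobs can be managed with the `pvesr` CLI (this needs a live cluster, so treat the following as a sketch; VM ID, node name and rate limit are examples only, and job IDs have the form &lt;vmid&gt;-&lt;jobnum&gt;):

```
# create a replication job for VM 100 to node pve2,
# running every minute, limited to 80 MB/s
pvesr create-local-job 100-0 pve2 --schedule "*/1" --rate 80

# list all jobs and their last sync status
pvesr status
```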
In any case you will still need a QDevice; a Proxmox Backup Server would be a good place for it. But basically any device able to run Debian will do (even a Raspberry Pi, although you most likely won't do such a thing in a corporate environment ;) ).
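For completeness, the QDevice setup is only a few commands once the external host runs Debian (again a sketch requiring a live cluster; the IP is a placeholder):

```
# on the external QDevice host (e.g. the PBS machine):
apt install corosync-qnetd

# on one of the two cluster nodes:
apt install corosync-qdevice
pvecm qdevice setup <QDEVICE-IP>

# afterwards "pvecm status" should list the QDevice vote
```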

And then there are third-party providers, which might be an option for you too. Blockbridge, for example, provides block storage (at a price) with a storage plugin for Proxmox VE, removing the limitations of LVM-thick. And StarWind VSAN is a kind of alternative to Ceph which, as far as I know, also works with two nodes. I have no experience with either of these options, though, so I can't say much about them. Maybe somebody else can chime in on that?
@bbgeek17 is the Blockbridge representative in this forum, so maybe he might add more on it.

Regards, Johannes.
 