Shared storage

Ced91

New Member
Jul 16, 2025
Hi all,

I am a beginner with Proxmox (I am looking for a solution to replace VMware) and I need some advice on configuring our infrastructure.
We currently have 2 nodes available as well as a storage bay (an old HP P2000), connected by SAS cables to the two nodes (2 cables on each node, for redundancy).
I have already managed to create the cluster, as well as configure the multipath.

My biggest concern is finding the right way to configure storage. All nodes must share the same storage to host VMs, with the ability to take snapshots.
I did some tests, but without success (LVM and LVM-Thin).

What would be the best solution? I have heard of GFS2, is this a good solution?

Thank you in advance for your valuable help
 
Hi @Ced91 , welcome to the forum.

We currently have 2 nodes available
A two-node cluster is not supported for anything beyond a home lab, and even then with many caveats.
You need a 3-node cluster, where the 3rd can be a non-VM-hosting member (e.g. a QDevice).

All nodes must share the same storage, to host VMS
You already have this
being able to make snapshots.
This is not possible with PVE8 (stable production-ready). PVE9 may bring this functionality to you.
I did some tests, but without success (LVM and LVM-Thin).
LVM-thin is not multi-host compatible. You must use LVM.
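To make this concrete, here is a minimal sketch of putting shared thick LVM on top of the multipath device. Assumptions: the P2000 LUN shows up as `/dev/mapper/mpatha` on both nodes (check the actual name with `multipath -ll`), and the VG name `vg_san` and storage ID `san-lvm` are placeholders you would pick yourself.

```shell
# On ONE node only: initialize the multipath device and create the VG.
# /dev/mapper/mpatha is an example name - verify yours with `multipath -ll`.
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# Register the VG as a shared thick-LVM storage; the storage config is
# cluster-wide, so this only needs to be run once on any node.
pvesm add lvm san-lvm --vgname vg_san --content images,rootdir --shared 1
```

The `--shared 1` flag tells PVE the volume group is reachable from every node, which is what enables live migration of VMs between the two hosts.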
What would be the best solution? I have heard of GFS2, is this a good solution?
Both GFS2 and OCFS2 are possible, but neither is endorsed or actively supported by Proxmox or this community.
Nor are they in active development/support by the developers in general, AFAIK.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Many thanks for your answers.

Two node cluster is not supported for anything beyond home-lab, and even so with many caveats.
You need a 3-node cluster, where the 3rd can be a non-vm-hosting (i.e. a qdevice).
The 2-node cluster is temporary. The project is to migrate from our existing VMware infrastructure. I have 4 hosts in total, 2 of which I have already "converted" to Proxmox. I will add the 2 remaining hosts once all the VMs have been migrated, so the final Proxmox cluster will be a 4-node cluster.

This is not possible with PVE8 (stable production-ready). PVE9 may bring this functionality to you.
Thank you for that, I'll see if I can already play a little bit with PVE9 BETA1

Both GFS2 and OCFS2 are possible, but neither is endorsed or actively supported by Proxmox or this community.
Nor are they in active development/support by the developers in general, AFAIK.
All right, so I'll use LVM

Again, thank you very much for all this helpful information :cool:
 
The 2-node cluster is temporary. The project is to migrate from our existing VMware infrastructure. I have 4 hosts in total, 2 of which I have already "converted" to Proxmox. I will add the 2 remaining hosts once all the VMs have been migrated, so the final Proxmox cluster will be a 4-node cluster.
Hi @Ced91 , the issue with a 2-node or a 4-node cluster is that if you have a failure and only half of the nodes come up, you will not have a majority/quorum. You need an odd number of nodes for proper operation, i.e. 3, 5, 7, etc.

The clustering technology between VMware and PVE is very different.
There is an option of using a reduced functionality cluster node called QDevice: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
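Setting up a QDevice is fairly short; a sketch under the assumption that the external quorum host is a small Debian-based machine at 192.168.1.50 (example IP) reachable over SSH as root:

```shell
# On the external quorum host (small PC, VM, etc.):
apt install corosync-qnetd

# On ALL cluster nodes:
apt install corosync-qdevice

# From any one cluster node, register the external vote:
pvecm qdevice setup 192.168.1.50

# Verify that the QDevice vote shows up:
pvecm status
```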



 
Doesn’t the P2000 also do 10G iSCSI? That may be a better way than DAS through SAS/FC. That HP gear has some rudimentary snapshot support and presents virtual disks, so you’ll need to do the thin provisioning and snapshot configuration on the controller side, not on the Proxmox side. There are examples of people writing plugins in Perl or Python so that Proxmox storage requests (such as creating a LUN or a snapshot) are sent to the controller.

Note that in any setup with a single storage system, that system becomes a single point of failure. In many cases I have found that with both Dell and HP hardware of the same era, the drives (carrier and disk) are interchangeable, and you may be able to move the disks into the servers to build a true cluster.
 
with the possibility of being able to make snapshots.
Using an MSA type product with SAS host ports is subject to the same limitations under PVE as all other SANs. You can either map LUNs directly to virtual machines and use hardware snapshots, or install PVE9 beta and let us know how the new snapshot functionality works :)
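For the first option (mapping LUNs directly to VMs), a minimal sketch: pass the LUN through as a raw block device. The `/dev/disk/by-id/...` path and VMID 100 below are examples; use the actual WWID of your LUN so the mapping is stable across reboots and identical on all nodes.

```shell
# Attach a SAN LUN directly to VM 100 as a second SCSI disk.
# The by-id path is an example - look up the real WWID of your LUN
# (e.g. under /dev/disk/by-id/) before running this.
qm set 100 --scsi1 /dev/disk/by-id/dm-uuid-mpath-3600c0ffexample
```

Snapshots of such a disk would then be taken on the MSA controller, not in PVE, since PVE only sees a raw device.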

Doesn’t the P2000 also do 10G iSCSI?
If you buy an iSCSI model, yes, but there is no version that has both SAS and iSCSI host ports. As above, though, it would make no difference.
 
The 2-node cluster is temporary. The project is to migrate from our existing VMware infrastructure. I have 4 hosts in total, 2 of which I have already "converted" to Proxmox. I will add the 2 remaining hosts once all the VMs have been migrated, so the final Proxmox cluster will be a 4-node cluster.

Even with a four-node cluster it's recommended to have an external device for quorum (it can even be a small PC or a Raspberry Pi). For now you could just set up a small VM in your VMware cluster for it; this would be more than sufficient:
"We support QDevices for clusters with an even number of nodes and recommend it for 2 node clusters, if they should provide higher availability. For clusters with an odd node count, we currently discourage the use of QDevices. The reason for this is the difference in the votes which the QDevice provides for each cluster type. Even numbered clusters get a single additional vote, which only increases availability, because if the QDevice itself fails, you are in the same position as with no QDevice at all.
On the other hand, with an odd numbered cluster size, the QDevice provides (N-1) votes — where N corresponds to the cluster node count. This alternative behavior makes sense; if it had only one additional vote, the cluster could get into a split-brain situation. This algorithm allows for all nodes but one (and naturally the QDevice itself) to fail. However, there are two drawbacks to this:
  • If the QNet daemon itself fails, no other node may fail or the cluster immediately loses quorum. For example, in a cluster with 15 nodes, 7 could fail before the cluster becomes inquorate. But, if a QDevice is configured here and it itself fails, no single node of the 15 may fail. The QDevice acts almost as a single point of failure in this case.
  • The fact that all but one node plus QDevice may fail sounds promising at first, but this may result in a mass recovery of HA services, which could overload the single remaining node. Furthermore, a Ceph server will stop providing services if only ((N-1)/2) nodes or less remain online.
If you understand the drawbacks and implications, you can decide yourself if you want to use this technology in an odd numbered cluster setup."

https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

Another thing which might be of interest to you: at the moment Proxmox VE doesn't support snapshots on an LVM storage; for some use cases you can use backups as a workaround instead: https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Alternatives_to_Snapshots
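As a sketch of that workaround: a `vzdump` backup gives you a point-in-time copy you can restore later, which covers the common "snapshot before a risky change" case. VMID 100 and the storage ID `local` are examples; adjust to your setup.

```shell
# Take a consistent backup of VM 100 while it keeps running.
# "snapshot" mode here means a live QEMU backup, so it works even on
# thick LVM where storage-level snapshots are unavailable.
vzdump 100 --mode snapshot --storage local --compress zstd

# To "roll back", restore the backup - e.g. to a new VMID 101.
# The filename below is illustrative; use the actual dump file produced.
qmrestore /var/lib/vz/dump/vzdump-qemu-100-example.vma.zst 101
```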

This missing feature won't be a problem for long though: The recently released beta of Proxmox VE 9 has snapshots on LVM as technology preview so I would expect that beginning with PVE 9 you will have snapshots on LVM/thick too.
 