Migrating current VMware setup to Proxmox

Bromhir
Jan 26, 2024
We are looking into migrating our current 3-node setup to Proxmox.

However, all of our storage is hosted on a Fibre Channel-connected HP 3PAR.
Is there any way to get both shared storage and snapshots working?
 
As a workaround you could do the following:
  • Use a clustered file system on the shared LUN (probably GFS2 rather than OCFS2, as the latter seems to be more problematic with newer kernels)
  • Make sure all nodes can mount it on the same mount path
  • Define a Directory storage on the mount path, mark it as "shared", and make sure it is only used if something is actually mounted at the path: pvesm set {storage name} --shared 1 --is_mountpoint 1
With that, VMs can be snapshotted (qcow2); a minimal sketch of the setup follows below.
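
A minimal sketch of that workaround, assuming a 3-node cluster named "mycluster", a hypothetical multipath device /dev/mapper/3par-lun, and a storage name of 3par-dir (DLM and gfs2-utils must already be set up on all nodes):

    # on one node: create the GFS2 file system, one journal per node (-j 3)
    mkfs.gfs2 -p lock_dlm -t mycluster:vmstore -j 3 /dev/mapper/3par-lun

    # on every node: mount it at the same path
    mkdir -p /mnt/pve/3par-dir
    mount -t gfs2 /dev/mapper/3par-lun /mnt/pve/3par-dir

    # once, on any node: define the shared Directory storage for qcow2 images
    pvesm add dir 3par-dir --path /mnt/pve/3par-dir --content images --shared 1 --is_mountpoint 1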
 
We went another way and created an intermediate storage HA-VM backed by FC, providing ZFS-over-iSCSI to the PVE cluster.
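
For reference, on the PVE side such a storage is defined in /etc/pve/storage.cfg along these lines (storage name, pool, portal address, target IQN, and the LIO provider are placeholder assumptions; they depend on how the proxy VM exports its disks):

    zfs: zfs-proxy
            pool tank/vmdata
            portal 192.168.10.50
            target iqn.2003-01.org.linux-iscsi.proxy:vmstore
            iscsiprovider LIO
            lio_tpg tpg1
            blocksize 8k
            sparse 1
            content images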
 
We went another way and created an intermediate storage HA-VM backed by FC, providing ZFS-over-iSCSI to the PVE cluster.
Hmm, but if the node on which the "storage proxy VM" is running dies, the whole cluster is stuck until it is recovered on another node? Or am I missing something?
 
We went another way and created an intermediate storage HA-VM backed by FC, providing ZFS-over-iSCSI to the PVE cluster.

Cool!

What about the chicken-and-egg problem when the hardware node currently running this VM fails? Will the VMs using this storage just stall for some minutes? Or is there a trick to get a faster failover than waiting for the normal HA restart?

Just interested in cool use cases... :)
 
As a workaround you could do the following:
  • Use a clustered file system on the shared LUN (probably GFS2 rather than OCFS2, as the latter seems to be more problematic with newer kernels)
  • Make sure all nodes can mount it on the same mount path
  • Define a Directory storage on the mount path, mark it as "shared", and make sure it is only used if something is actually mounted at the path: pvesm set {storage name} --shared 1 --is_mountpoint 1
With that, VMs can be snapshotted (qcow2).
It's up and running. Works like a charm, thank you.


We went another way and created an intermediate storage HA-VM backed by FC, providing ZFS-over-iSCSI to the PVE cluster.
That's not an option; then you might as well give every node a separate LUN and run Ceph on all the nodes, pretending the 3PAR LUNs are local disks.
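
A rough sketch of what that alternative would involve, assuming each node gets its own dedicated 3PAR LUN presented as a multipath device (device path and Ceph network are placeholders; keep in mind Ceph would then replicate on top of a SAN that is already redundant):

    # on every node: install the Ceph packages
    pveceph install

    # once, on one node: initialize the cluster network
    pveceph init --network 10.0.0.0/24

    # on each node: create a monitor and an OSD on that node's LUN
    pveceph mon create
    pveceph osd create /dev/mapper/3par-lun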
 
Hmm, but if the node on which the "storage proxy VM" is running dies, the whole cluster is stuck until it is recovered on another node? Or am I missing something?
Unfortunately, yes. Yet it was the only viable option; GFS and OCFS2 have their own problems and were not stable in my tests (years ago).


What about the chicken-and-egg problem when the hardware node currently running this VM fails? Will the VMs using this storage just stall for some minutes? Or is there a trick to get a faster failover than waiting for the normal HA restart?
There is also ZFS-HA, which should allow some kind of fast failover, though I haven't tried it myself.
 
