Hmm, but if the node on which the "storage proxy VM" is running dies, isn't the whole cluster stuck until it is recovered on another node? Or am I missing something?
We went another way and created an intermediate storage HA-VM backed by FC that provides ZFS-over-iSCSI to the PVE cluster.
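On the PVE side, the storage definition for such a setup might look roughly like this in /etc/pve/storage.cfg (the storage name, pool, portal IP, target IQN and iscsiprovider below are just placeholders; the real values depend on how the storage VM exports its pool):

zfs: zfs-over-iscsi
        pool tank
        portal 192.168.10.50
        target iqn.2003-01.org.linux-iscsi.storagevm:tank
        iscsiprovider LIO
        content images
        sparse 1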
It's up and running. Works like a charm, thank you.

As a workaround you could do the following:
- Use a clustered file system on the iSCSI LUN (probably GFS2 rather than OCFS2, as the latter seems to be more problematic with newer kernels).
- Make sure all nodes can mount it at the same mount path.
- Define a Directory storage on that mount path, mark it as "shared", and set it so it is only used if something is actually mounted at the path:
pvesm set {storage name} --shared 1 --is_mountpoint 1
With that, VMs could be snapshotted (qcow2).
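To illustrate, assuming the GFS2 filesystem is mounted at /mnt/pve/gfs2 on every node (the storage name and path here are made up for the example), the Directory storage could be created and flagged like this:

pvesm add dir gfs2-shared --path /mnt/pve/gfs2 --content images,rootdir
pvesm set gfs2-shared --shared 1 --is_mountpoint 1

With is_mountpoint set, PVE treats the storage as unavailable on a node where nothing is mounted at that path, instead of writing into the empty directory.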
That's not an option; then you might as well give every node a separate LUN and run Ceph on all the nodes, pretending the 3PAR LUNs are local disks.
Unfortunately, yes. Yet it was the only viable option; GFS and OCFS2 have their own problems and were not stable in my tests (years ago).
There is also ZFS-HA, which should do some kind of fast failover, yet I haven't tried it yet.

What about the chicken-and-egg problem when the hardware node currently running this VM fails? Will the VMs using this storage just stall for some minutes? Or is there a trick to get a faster failover than waiting for the normal HA restart?
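For what it's worth, the storage VM itself can at least be put under the normal HA manager so it restarts automatically on another node (vm:100 is just an example VMID; this does not avoid the fencing-plus-restart delay discussed above):

ha-manager add vm:100 --state started --max_restart 2 --max_relocate 1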