DRBD and Proxmox (and OpenVZ?)

hk@

Hi,
I know everyone likes KVM, and I've read that DRBD does a good job there for "hot" migrations, while we love OpenVZ and, as I read it, have to live without DRBD on Proxmox.

Now I have gathered several questions: http://pve.proxmox.com/wiki/DRBD describes an active/active setup, which Linbit seems to discourage (http://www.linbit.com/de/training/tech-guides/dual-primary-think-twice/). I suspect the recent chatter about LVM on DRBD ending up in split-brain may well be related to this, since I see no cluster fencing or anything similar for that setup in the wiki.
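For context, the dual-primary part of such a DRBD resource is roughly the net section sketched below (resource name is a placeholder, the per-node sections are omitted); as I understand it, these after-split-brain policies only automate recovery in the simple cases, and real protection would still need fencing/STONITH via a cluster manager, which the wiki setup doesn't add:

    resource r0 {                          # placeholder name
      protocol C;
      net {
        allow-two-primaries;               # this is the dual-primary / active-active bit
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;          # two diverged primaries: manual recovery
      }
      # "on <node> { device/disk/address/meta-disk }" sections omitted here
    }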

I then kept reading Linbit documents such as http://www.linbit.com/de/training/tech-guides/highly-available-openvz-with-nfs-and-pacemaker/, and the OpenVZ wiki also supplies a (rather old) guide for HA using DRBD: http://wiki.openvz.org/HA_cluster_with_DRBD_and_Heartbeat

Now, to make a wish: please consider letting OpenVZ containers also be stored on shared storage, at least on shared NFS (or better yet clustered LVM, or even just a shared directory one could build on).

And I believe it would be incredibly cool to be able to define, e.g., one cluster member as a hot-standby box where the config files are always available (place the config files on shared storage too?), so one could restart the VEs that live on shared storage and get them up and running again while the failed cluster node is being fixed.
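Just to illustrate the idea (this is not an existing Proxmox feature; the NFS server, paths and VEID below are made up): the configs and private areas would sit on storage every node can mount, so the standby only has to start the containers.

    # on every cluster node (hypothetical server and paths)
    mount -t nfs filer:/export/pve-shared /mnt/pve-shared
    # after moving the original conf directory aside, point it at the share
    ln -s /mnt/pve-shared/vz-conf /etc/vz/conf

    # on the hot-standby box, once the active node has failed
    vzctl start 101        # VE 101's private area also lives on the share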

Any suggestions on how to get OpenVZ VEs in Proxmox to at least fast migration using some sort of shared storage would be greatly appreciated.

Regards
hk
 
Hi HK,
I have had an OpenVZ active/passive HA solution up and running for 24 hours.
I'm using DRBD (NOT active/active) with Heartbeat; the container and its configuration are located on the shared DRBD disk.
Heartbeat mounts the DRBD disk and starts the vz service, and because the container is marked to start at boot, it comes up as soon as one of the nodes fails.

It takes something like 10 seconds to fail over.
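For anyone wanting to copy this, with classic Heartbeat the whole thing can boil down to a single haresources line; the resource name, device, mount point and filesystem below are assumptions on my part, and the container itself starts via its ONBOOT flag when the vz service comes up:

    # /etc/ha.d/haresources on both nodes (resource name, device and fs assumed)
    node1 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/vz::ext3 vz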

Thanks
David.
 
Hi,

we're running a setup that allows "hot migrations" for KVM and also OpenVZ. We created two DRBD devices (active/active, for the two nodes) as suggested here and, AFAIK, also in the wiki (which basically eliminates the split-brain risk), and additionally each node gets an LV mounted below /var/lib/vz. In case of an "emergency", this LV can easily be mounted on the other node, similar to the KVM LVs.
Of course we had to add small cron jobs to sync the config files, and currently the fail-over procedure depends on some scripts that have to be triggered manually. That's fine for our purposes, though uniform handling through the web frontend would of course be much more convenient.
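The sync job is nothing fancy, something along these lines; the schedule, peer name, target directory and config path are assumptions (adjust to wherever your Proxmox version keeps the VE configs), and it assumes passwordless ssh to the peer:

    # /etc/cron.d/sync-ve-configs (hypothetical file name)
    */5 * * * * root rsync -a --delete /etc/vz/conf/ peer-node:/root/ve-conf-from-peer/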
Adding automatic fail-over should not be that complicated, but so far that's missing for KVM as well anyway...
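For what it's worth, the manually triggered part could look roughly like the sketch below; the VG/LV names, mount point and paths are all assumptions specific to such a setup, not anything Proxmox ships:

    #!/bin/bash
    # run on the surviving node: take over the failed node's containers
    set -e
    VG=drbdvg                             # assumed VG on the shared DRBD device
    LV=vz_node2                           # assumed LV the failed node had under /var/lib/vz
    MNT=/var/lib/vz-node2                 # where we mount that data locally
    SYNCED_CONF=/root/ve-conf-from-peer   # filled by the cron job above

    lvchange -ay /dev/$VG/$LV
    mkdir -p "$MNT"
    mount /dev/$VG/$LV "$MNT"

    for conf in "$SYNCED_CONF"/*.conf; do
        veid=$(basename "$conf" .conf)
        cp "$conf" /etc/vz/conf/
        # point the container at the data we just mounted, then start it
        vzctl set "$veid" --private "$MNT/private/$veid" --root "$MNT/root/$veid" --save
        vzctl start "$veid"
    done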
 
