vmid.conf to DRBD

basanisi

Renowned Member
Apr 15, 2011
Hello everybody,

I created a Proxmox VE 2.0 two-node cluster by following this tutorial: http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster. Everything works perfectly well.

My only problem resides in this sentence: "For this testing configuration, two DRBD resources were created, one for VM images and another one for VM user data. Thanks to DRBD (if properly configured), a mirrored RAID is created over the network (be aware that, although possible, using WANs would mean high latencies). As VMs and data are replicated synchronously on both nodes, if one of them fails, it will be possible to restart the "dead" machines on the other node without data loss."

How do I copy the vmid.conf files, which reside by default in /etc/pve/qemu-server, onto the DRBD device I created, so that HA works?
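
For reference, these are the per-VM config files I mean (the VM IDs below are just examples from my setup):

    ls /etc/pve/qemu-server/
    100.conf  101.conf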

Thanks for your answers
 
Oops, I wasn't reading right - this was not the question, but a good suggestion ;)

Hi,
this means you use not one DRBD resource (device) but two (r0 + r1). You then define two volume groups and two storages, e.g. "a_sata" and "b_sata": one for node a and one for node b. Both nodes see both storages, but node a runs its VMs only on a_sata, and vice versa.
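
A rough sketch of such a two-resource setup (the hostnames, backing disks, and addresses below are placeholders, not values from your cluster):

    # /etc/drbd.d/r0.res - backs volume group "a_sata" (VMs running on node a)
    resource r0 {
        protocol C;
        on nodea {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on nodeb {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }
    # r1.res looks the same, on /dev/drbd1 with its own port, backing "b_sata"

Then one LVM volume group per DRBD device:

    vgcreate a_sata /dev/drbd0
    vgcreate b_sata /dev/drbd1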
If you get a split-brain condition, you can then simply overwrite the data on the node that has no active VMs on that DRBD resource.
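
For example, to throw away the stale copy of r0, the standard DRBD 8.x split-brain recovery is, on the node whose data is to be discarded:

    drbdadm secondary r0
    drbdadm -- --discard-my-data connect r0

and then, on the surviving node:

    drbdadm connect r0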

Udo
 
Hello,

Exactly. I created 2 DRBD LVM devices on top of my hardware RAID device, which is configured with 8 x 1 TB SATA disks in RAID 10.

For the moment I have deleted all my VMs and am restoring them one by one.
I can migrate a VM from one server to the other without problems.
When I restart one server, all its VMs migrate to the other without problems.
But I haven't yet tried brutally powering off one server to force a split-brain.
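
For completeness, such a migration can also be triggered from the CLI (101 and nodeb are just example values):

    qm migrate 101 nodeb --online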
 
