DRBD/Suggestion

Robstarusa

Active Member
Feb 19, 2009
Instead of making a primary/primary setup for the entire device and placing LVM on top of it, why not use LVM logical volumes as the backing devices for DRBD resources?

Then you can have "primary/secondary" on a per vm basis.

Thoughts?

I haven't worked extensively with DRBD, but I had a lot of trouble figuring out how to get my primary/primary setup reconnected after a power loss (the UPS ran out of battery while I wasn't around).
 

mirco.bauer

New Member
Sep 10, 2009
That's how I am using Proxmox 1.3 and 1.4. The only issue I've had is that 1.4 doesn't like symlinks for the DRBD device nodes. For example, I created /dev/drbd/$guest-system -> ../drbd0, but Proxmox refuses to accept the storage when starting the guest. Passing /dev/drbd0 directly to it works, though.
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
Austria
www.proxmox.com
Instead of making a primary/primary setup for the entire device and placing LVM on top of it, why not use LVM logical volumes as the backing devices for DRBD resources?

Because you can't manage that with the Proxmox web GUI, and you can't make snapshots. What would the advantage be?
 

Robstarusa

Active Member
Feb 19, 2009
I meant this suggestion for future Proxmox development.

This is how I imagine it:

Via the GUI:

* Add an LVM volume group with a "shared" checkbox.

* Create a new VM as normal on the "shared" storage. The LV is created on all nodes, with the "master" (DRBD primary) being the machine the VM is created on. All other (N) nodes become secondaries.

* The Proxmox VE web interface then adjusts drbd.conf on all nodes -- every other node becomes a secondary of the machine the VM was created on.

* When doing a migration, swap the primary/secondary status of the two nodes the VM is being transferred between. Have any other secondary node(s) update their master.
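To make the idea concrete, a per-VM DRBD resource backed by an LV might look roughly like this in drbd.conf (a sketch only, in DRBD 8.x syntax; the resource name, hostnames, addresses, and LV path are hypothetical examples, not a tested configuration):

```
# Hypothetical per-VM DRBD resource: one resource per guest disk,
# backed by an LVM logical volume instead of the whole PV.
resource vm-101 {
  protocol C;                           # synchronous replication

  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-101-disk-1;   # the LV as backing device
    address   192.168.0.1:7790;         # one port per resource
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-101-disk-1;
    address   192.168.0.2:7790;
    meta-disk internal;
  }
}
```

Each VM would get its own resource stanza (and its own TCP port), so promotion, demotion, and resync all happen per guest rather than for the whole volume group.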

Advantages:

* This gives you finer-grained control over individual VM/disk resources. If a cluster member crashes, you don't need to worry about the resources on any nodes that are still up -- only resources on nodes that have crashed. If you have a lot of servers with replicated storage, this should make recovery a lot faster.

* I am also not sure you can have more than a two-node primary/primary setup. I think a primary/secondary setup can scale to more nodes.

* This should also make split-brain much easier to recover from. Right now I haven't found much documentation on recovering from primary/primary split-brain. When it isn't handled automatically, I know of only one other way to do it, which involves discarding all data on one of my two nodes and resyncing from scratch. I imagine this won't scale well with large numbers of nodes and disks.

I am new to DRBD, so if my assumptions are wrong, I'm open to suggestions.

My primary interests are:

* Granular control of resources, so that if I have to recover a box, I don't need to re-sync a multi-TB device (the entire PV versus a single LV).

* Scaling beyond two nodes (can primary/primary do this?).

* Quicker recovery in a split-brain situation. I have not found good docs on recovering a primary/primary split-brain. Primary/secondary recovery seems to have a lot more options, both in DRBD itself and on Google. :)

Comments, suggestions, and criticisms welcome!:cool::p
 

mirco.bauer

New Member
Sep 10, 2009
Because you can't manage that with the Proxmox web GUI, and you can't make snapshots. What would the advantage be?

I am using a primary/secondary setup per VM because I can control which VMs get replication and which don't. Using DRBD at the PV or VG level means all writes to all LVs have to be replicated, and thus hit the disks of both systems.
 

docent

Active Member
Jul 23, 2009
Hi, everybody!

I have the following problem.
I'm using PVE 1.4. My DRBD regularly switches to StandAlone mode. I still can't find out why; maybe there is a problem in the network. All VMs on both nodes continue to work, but DRBD doesn't synchronize. In this state I can't use live migration to move all VMs onto one node and then resynchronize DRBD. Instead I have to shut down all VMs on one node, copy them to the other node, and then reboot the first node.
On PVE 1.3 I used DRBD on LVM and was able to repair each DRBD disk individually.
I understand that this method doesn't fit into the new storage concept, but what should I do to prevent this problem?
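For what it's worth, when a resource has dropped to StandAlone and the data has not diverged, a manual reconnect can sometimes be attempted. The following is a sketch only (the resource name r0 is an example, and it assumes no split-brain has occurred):

```
# Check the current connection state (StandAlone, WFConnection, Connected, ...)
cat /proc/drbd
drbdadm cstate r0      # connection state for one resource

# Try to re-establish the connection; this only succeeds if the
# data generations of both nodes still agree (no split-brain)
drbdadm connect r0
```

If DRBD refuses to connect because the data generations have diverged, a full split-brain recovery (choosing a victim node and discarding its changes) is needed instead.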
 

docent

Active Member
Jul 23, 2009
It's strange to hear this from the developer of an HA cluster.
Why would clusters be needed at all, then? By that logic, it would be enough to find and fix all the causes of server failure.
 

udo

Famous Member
Apr 22, 2009
Ahrensburg; Germany
It's strange to hear this from the developer of an HA cluster.
Why would clusters be needed at all, then? By that logic, it would be enough to find and fix all the causes of server failure.

Hmmm,
how should Dietmar know what is happening on your machine? Should he look in a crystal ball?
I think it's normal for the user to track down the failure, so that the bug can be verified and found.
 

docent

Active Member
Jul 23, 2009
I'm not asking why DRBD on my cluster breaks the link, but how PVE handles this problem.
I have suggested how to resolve it, but nobody listens to me.
What's more, if any node of the cluster goes down, there is currently no way to start the VMs that were running on that node.
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
Austria
www.proxmox.com
I'm not asking why DRBD on my cluster breaks the link, but how PVE handles this problem.

If you have a problem with DRBD, you should fix that.

I have suggested how to resolve it, but nobody listens to me.

So what is the suggestion?

What's more, if any node of the cluster goes down, there is currently no way to start the VMs that were running on that node.

Sorry, I don't really understand your problem. If DRBD does not work, you should fix it. Then you can start the VMs again?
 

honmanm

New Member
Nov 11, 2009
While the Proxmox VE HA solution is a work in progress, it would be very useful to have a simple procedure for recovering a primary/primary splitbrain.

Especially in this case, where we have LVM on DRBD, we know which of the nodes has current data even if DRBD doesn't (though it appears that DRBD *does* know what has changed on each node, so it would be useful if there were some sort of "area-limited resync" in DRBD -- i.e. when the split-brain is resolved, one system is treated as the previous primary for some parts of the DRBD resource, and the other is treated as the previous primary for other regions).

While such functionality is in DRBD territory, the management of it fits best in Proxmox VE territory.

However, right now we do need that simple procedure. If anyone can point me to the "raw material" for this, I'll gladly add a page to the PVE wiki. Like the poster above, despite hours of searching I haven't yet found any quality information on resolving split-brain of a dual-primary system (we have used DRBD in pri/sec mode for three years; this is my first taste of dual-primary mode).
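As raw material, the generic split-brain recovery documented for DRBD 8.3 is roughly the following. This is only a sketch, not PVE-specific; the resource name r0 is an example, and the procedure discards *all* changes made on the chosen victim node since the split, which is exactly the coarse-grained behaviour discussed earlier in this thread:

```
# On the node whose changes will be DISCARDED (the split-brain "victim").
# In dual-primary mode the victim must first be demoted to secondary:
drbdadm disconnect r0
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0     # DRBD 8.3-era syntax

# On the node whose data survives (only needed if it is also StandAlone;
# if it is already waiting in WFConnection, it will reconnect by itself):
drbdadm connect r0

# After resync completes, re-promote the victim if dual-primary is required:
drbdadm primary r0
```

Note that newer DRBD releases moved the flag after the subcommand (drbdadm connect --discard-my-data r0), so check the syntax against the version actually shipped with your PVE.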
 

tito

New Member
Dec 15, 2009
Argentina
Hi Dietmar,
I'm not sure whether this is the right place to ask, but following up on comment #12: do you have an estimated date for the 2.0 release of PVE (with HA support)?
Thanks a lot!
 
