cLVM

udo

Hi all,
If I understand the docs about cLVM right, the changes between LVM and cLVM are "only" on the nodes (clvmd, lvm.conf), not on the LVM storage itself?!

I ask because: can I use LVM storage with existing content (from PVE 1.x) on a PVE 2 cluster without changes? Or does something need to be converted on the LVM disks?

Udo
 
Second question about cLVM:
On a cLVM page (http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LVM_Cluster_Overview.html) I found this warning:
Code:
When you create volume groups with CLVM on shared storage, you must ensure that all nodes in the cluster have access to the physical volumes that constitute the volume group. Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported.
How does this fit with a 3 (or more) node cluster that has different storages, with DRBD storage between two of the cluster nodes?
If I remember right, it should be possible to use different storages on the cluster nodes in PVE 2.x?!

Udo
 
If I understand the docs about cLVM right, the changes between LVM and cLVM are "only" on the nodes (clvmd, lvm.conf), not on the LVM storage itself?!

Yes. Basically the change is that every LVM command tries to acquire a lock using DLM.
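For illustration, the node-side change amounts to roughly this (a sketch; the locking_type value is the standard LVM2 clustered-locking setting, while the init script name is an assumption that may differ per distribution):
Code:
# /etc/lvm/lvm.conf on every node: switch from the default local,
# file-based locking (type 1) to clustered locking via clvmd/DLM (type 3)
locking_type = 3

# clvmd must then be running on every node so that LVM commands can
# acquire cluster-wide locks (service name may differ per distribution):
/etc/init.d/clvm start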
 
How does this fit with a 3 (or more) node cluster that has different storages, with DRBD storage between two of the cluster nodes?
If I remember right, it should be possible to use different storages on the cluster nodes in PVE 2.x?!

Well, the Proxmox storage model is more flexible now, but we can't work around every limit in other software. I guess it is best to use only 2 nodes if you use DRBD.

Besides, Martin did some tests on a 5-node cluster (with DRBD on 2 nodes). So far he has not found serious errors/bugs. I guess the only problem is that you can't activate/deactivate LVM volumes on all nodes (-ay), or exclusively (-aey). But we do not use such commands at all (we only use -aly).
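To illustrate the three activation modes (a sketch; the VG/LV names are made up):
Code:
# -ay  : activate on all cluster nodes (every node must see the PVs)
lvchange -ay  vg0/vm-101-disk-1
# -aey : activate exclusively on a single node (cluster-wide exclusive lock)
lvchange -aey vg0/vm-101-disk-1
# -aly : activate on the local node only -- the only form PVE uses
lvchange -aly vg0/vm-101-disk-1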
So feel free to test and report results back ;-)
 
But we do not use such commands at all (we only use -aly)

Well, an LVM snapshot tries to acquire an exclusive lock, so I guess snapshots will not work in such a setup (not sure if we can use LVM host tags to avoid that?).
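For example (made-up names), this is the kind of command that needs an exclusive cluster lock on the origin LV, which clvmd cannot grant if the volume is active on other nodes:
Code:
# creating a snapshot implicitly requires exclusive activation of the
# origin LV; in a clustered VG this fails unless clvmd can grant an
# exclusive lock across all nodes
lvcreate -s -L 1G -n vzsnap /dev/vg0/vm-101-disk-1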
 
Hmm,
perhaps the old dirty trick with dummy VGs on the non-DRBD nodes helps to solve the problem?
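I.e. something like this on each non-DRBD node (a rough sketch; file, device and VG names are made up):
Code:
# fake a VG with the same name as the DRBD-backed one, on a loop device,
# so every node appears to have the volume group
dd if=/dev/zero of=/var/lib/vz/dummy-vg.img bs=1M count=128
losetup /dev/loop0 /var/lib/vz/dummy-vg.img
pvcreate /dev/loop0
vgcreate drbdvg /dev/loop0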

Just a wild guess here, but I think using dummy VGs on the non-DRBD nodes might cause an issue with cLVM, since it expects all nodes to see the same data. The warning you pointed to earlier is pretty clear.

With 1.x I have always run just two-node clusters when using DRBD; the pain is having so many places to manage things.
With 2.0 I was hoping that we could do something better, but so far it is not looking good.

Maybe an interface that uses the API to manage many two-node DRBD clusters along with clusters of other configurations is the best solution to this problem.
 
Maybe an interface that uses the API to manage many two-node DRBD clusters along with clusters of other configurations is the best solution to this problem.

What? You mean creating many DRBD devices (one for each VM volume)?
 
What? You mean creating many DRBD devices (one for each VM volume)?

No, I would use DRBD like the wiki suggests.
cLVM requires all nodes to have access to the data; the cLVM documentation specifically states:
"Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported."
I do not feel comfortable putting data at risk by ignoring the warning from the cLVM docs.

When using DRBD only two nodes have access to the data; that limits you to having only two servers in a given Proxmox cluster when using DRBD and cLVM.

Having a central web interface that utilizes the Proxmox API to manage many separate Proxmox clusters would be a good solution to this cLVM/DRBD limitation.
 
Having a central web interface that utilizes the Proxmox API to manage many separate Proxmox clusters would be a good solution to this cLVM/DRBD limitation.

Sure, that is the plan anyways (enable the GUI to manage more than one cluster) - but we first need to get the basics working.
 
Sure, that is the plan anyways (enable the GUI to manage more than one cluster) - but we first need to get the basics working.

I understand the GUI needs to be completed before such advanced features can be implemented; I am glad to hear this is part of the overall plan.

I feel utilizing this future feature would be the best way to handle DRBD and cLVM.
The two DRBD nodes can be their own isolated cluster but still managed by a larger cluster.
This way Proxmox does not need to do anything special because someone chooses to utilize some feature that has limitations.
 
Do the DRBD people have a suggested solution for running a cluster with more than 2 nodes? It seems they do not really think cLVM is required.

Our current code in PVE 2.0 does its own locking anyway (additionally), so maybe that is good enough (as good as the 1.x implementation).
 
I agree that Proxmox does its own locking and maybe that should be good enough.

However, that is not what the documentation on DRBD or cLVM says, as udo pointed out above:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LVM_Cluster_Overview.html

With 1.x we have already had issues because we did not use cLVM as suggested, such as the mysterious split-brain that sometimes happens when running vzdump.
I chose to ignore this issue since it was rare and only a minor nuisance when it happened, and I have monitoring set up to alert me when it occurs, so I can deal with it rather than be surprised when a live migration fails some day.

I was very happy to see cLVM added to Proxmox 2.0 because that is the proper way to use LVM on top of DRBD.
It is my opinion that we should follow the recommended way to use these advanced technologies and that means if we use LVM on top of DRBD we should only have two nodes in a given Proxmox cluster.

When the 2.0 GUI supports adding and managing multiple clusters from one place, I do not see this limitation of having only two DRBD nodes in a given Proxmox cluster as being a problem.
We should simply wait for that multi-cluster feature if we need to manage more than two nodes and want to use DRBD.
 
Well, I just wonder what the DRBD maintainers suggest - maybe someone can ask on the DRBD list (active/active, cLVM, more than 2 nodes).
 
Please note that we can also set up DRBD in active/passive mode (then there is no need for cLVM at all).
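A minimal sketch of such a resource (hostnames, devices and addresses are made up; with "allow-two-primaries" omitted from the net section, DRBD stays active/passive and only one node can be Primary at a time):
Code:
# /etc/drbd.d/r0.res -- active/passive: no "allow-two-primaries",
# so plain (non-clustered) LVM locking is sufficient
resource r0 {
    protocol C;
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}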
 
What? You mean creating many DRBD devices (one for each VM volume)?
Yes, I would like that. It makes it easier to have a 3+ node cluster. It makes it easier to recover from a split brain if one wants to use all nodes in daily operation and not just have some of them on hotspare standby.
 
Yes, I would like that. It makes it easier to have a 3+ node cluster. It makes it easier to recover from a split brain if one wants to use all nodes in daily operation and not just have some of them on hotspare standby.
It also makes it easier to have some machines running on fast storage, and other machines on slower but cheaper storage.
 
Yes, I would like that. It makes it easier to have a 3+ node cluster. It makes it easier to recover from a split brain if one wants to use all nodes in daily operation and not just have some of them on hotspare standby.

To dynamically create DRBD volumes we would need to use DRBD on top of LVM.
If I also wanted snapshot backups of a VM on DRBD, we would then need to snapshot the LV underneath DRBD, which is not how vzdump/qmrestore are designed to work.

To solve the hot-standby issue I always set up two DRBD volumes, then have half the VMs on node A with DRBD0 and the other half on node B with DRBD1.
It is a very simple solution, not confusing or complex; it only requires a little bit of planning and makes split-brain recovery simple.
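Roughly like this (a sketch with made-up names, following the one-VG-per-DRBD-device pattern from the wiki):
Code:
# two independent dual-primary DRBD resources:
# VMs that normally run on node A get their LVs in drbd0vg,
# VMs that normally run on node B get theirs in drbd1vg
pvcreate /dev/drbd0 /dev/drbd1
vgcreate drbd0vg /dev/drbd0
vgcreate drbd1vg /dev/drbd1

After a split-brain each DRBD volume has a clear "winner" (the node whose VMs live on it), so you can discard the other side and resync without guessing.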

I do not like the idea of adding additional complexity just to solve a problem that is already easily solved.
Your idea has certainly got me thinking; maybe we do need to think about this more to come up with the best solution.
 
