Is it possible to use GFS2 and DRBD on Proxmox VE 2.x?

cesarpk

Mar 31, 2012
I want to install 2 nodes with Proxmox VE 2.x using DRBD and GFS2, without HA. The goal is that if the active node crashes, I manually shut down the active node and start the VMs on the second node, so that I have up-to-date data in the virtual machines.
Each server has 4 NICs in 2 bonded pairs: one pair for the switch and one pair for DRBD over crossover cable.

:confused: Questions:

1- Does Proxmox VE 2.x support GFS2 and DRBD?
2- Is this scenario possible? Obviously with great care taken not to corrupt the data of the VMs or the file system.
3- Or is there a better idea for the above scenario without using fencing devices?

And the more difficult question:
4- With DRBD, GFS2, 2 Proxmox VE 2.x nodes with HA, and fencing devices: is it possible to have several VMs on the first node and other VMs on the second node, and in case of the failure of either node, will Proxmox work correctly, booting the VMs that were on the failed node?


:D I would really appreciate anyone who can answer me :D

Best Regards
Cesar
 
I would simply use DRBD with LVM on top. Using GFS2 adds additional complexity that is not really needed.
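A rough sketch of that layout (device names, addresses, and the VG name are assumptions, not from this thread):

```shell
# /etc/drbd.d/r0.res -- hypothetical DRBD resource replicating /dev/sdb1:
# resource r0 {
#     protocol C;
#     on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
#     on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
# }

# Once the DRBD device is up and in sync, put LVM directly on top of it:
pvcreate /dev/drbd0
vgcreate drbdvg /dev/drbd0
# then add "drbdvg" as an LVM storage in Proxmox VE and use raw VM disks on it
```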
 
Thank you Dietmar for your quick response

Ah, I almost forgot:
I love Proxmox!!! And my sincere congratulations to the Proxmox technical team; it looks like a commercial product, not just an open-source one.

I install Proxmox and recommend it to everyone.

Questions:
But if I use qcow2 on LVM with ext3, and the FS is not synchronized to the other node, then the second node will not see the new size of the file.

Are there problems, inconveniences, or performance issues with GFS2?

And the fourth question, in the same scenario:
4- With DRBD, GFS2, 2 Proxmox VE 2.x nodes with HA, and fencing devices: is it possible to have several VMs on the first node and other VMs on the second node, and in case of the failure of either node, will Proxmox work correctly, booting the VMs that were on the failed node?

Best Regards
Cesar
 
But if I use qcow2 on LVM with ext3, and the FS is not synchronized to the other node, then the second node will not see the new size of the file.

Sorry, I do not really understand that setup.

Are there problems, inconveniences, or performance issues with GFS2?

For me, such a setup is much too complex.

And the fourth question, in the same scenario:
4- With DRBD, GFS2, 2 Proxmox VE 2.x nodes with HA, and fencing devices: is it possible to have several VMs on the first node and other VMs on the second node, and in case of the failure of either node, will Proxmox work correctly, booting the VMs that were on the failed node?

Not sure what you mean by 'perfectly booting'? Yes, it starts the VMs on the other node. But we recommend at least 3 nodes for HA.
 
But if I use qcow2 on LVM with ext3, and the FS is not synchronized to the other node, then the second node will not see the new size of the file.
Hi,
that's very dangerous!! You should never mount a non-cluster FS on several nodes at the same time!
Of course, drbd -> lvm doesn't work for qcow2 disks. Simply convert to raw and copy it onto the LVM storage (with dd).
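For example, a sketch of that conversion (file names and the LV path are assumptions):

```shell
# Convert the qcow2 image to raw (qemu-img ships with Proxmox VE):
qemu-img convert -O raw vm-101-disk-1.qcow2 vm-101-disk-1.raw
# Copy the raw image onto a logical volume of at least the same size
# (the LV path below is hypothetical):
dd if=vm-101-disk-1.raw of=/dev/drbdvg/vm-101-disk-1 bs=1M
```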

Udo
 
Thank you Dietmar and Udo for your responses.
Your suggestions are very important for my implementation.

Let me explain better with an example:

Common hardware setup:
2 identical Dell servers, each with 1 Intel Xeon quad-core processor and extra hard disks for DRBD,
with lots of RAM
4 NICs in bonding (2 NICs for the LAN switch and 2 NICs for DRBD over crossover cable)
Extra hardware:
power switch

Common software setup:
Proxmox VE 2.x in HA
DRBD
GFS2

Implementation:
Node 1: runs a virtualized Windows Server 2008 R2, using qcow2 on DRBD, 4 cores assigned
Node 2: runs a virtualized Windows Server 2003 R2, using qcow2 on DRBD, 4 cores assigned
The aim is to exploit the processor resources of both nodes while both are working.

If node 1 breaks down, Windows Server 2008 R2 will start and run automatically on node 2 without problems, though processor resources may be slower (which is what I suppose).

Questions:
1- Could you explain why it is preferable to use 3 Proxmox nodes instead of 2 (obviously without DRBD, which only works between 2 nodes)?
2- Is it possible to prepare a scenario of this type using only 2 nodes? Is it too complicated to install and configure Proxmox with its dependencies?
3- If fencing is a problem with only 2 nodes: with Proxmox without an HA configuration, in case of a breakdown of node 1, my idea is not to use a power switch, but to manually power off node 1 and then start the virtualized Windows Server 2008 R2 on node 2. Is this possible? Is it too complicated to install and configure?
4- If fencing and GFS2 are a problem and cannot be used with only 2 nodes, then I can obviously only use LVM + ext3, so a single node can run the two virtualized Windows Servers simultaneously. Then, after the active node breaks down, can I easily, with just a few mouse clicks, start the two virtual machines on the node that is still alive?

Thank you in advance for your attention, time, and information.
Best Regards
Cesar
 
Hi,
that's very dangerous!! You should never mount a non-cluster FS on several nodes at the same time!
Of course, drbd -> lvm doesn't work for qcow2 disks. Simply convert to raw and copy it onto the LVM storage (with dd).

Udo

Thank you Udo for your answer, but since Proxmox 1.7 I have been using LVM and qcow2 without DRBD and without problems. Could you explain better?
 
Thank you Udo for your answer, but since Proxmox 1.7 I have been using LVM and qcow2 without DRBD and without problems. Could you explain better?

Hi,
right, without DRBD you have disk -> lvm -> filesystem -> qcow VM file, but only on one node (all fine).
With DRBD (I mean, in this case, an active/active DRBD) you have disk -> drbd -> lvm ... on both nodes, and if you now take a normal filesystem mounted on both nodes at the same time, you have a good chance of destroying data, because both nodes change the filesystem and allocate blocks without knowing about each other.

There is also the possibility of using DRBD in active/passive mode: normally the VMs run on node-a; if node-a fails, you can make node-b primary, mount the filesystem, and start the VMs again. But this is done manually, not automatically.
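The manual failover could be sketched like this (the resource name, mount point, and VM ID are assumptions):

```shell
# On the surviving node (node-b), after making sure node-a is really powered off:
drbdadm primary r0                   # promote the DRBD resource to primary
mount /dev/drbd0 /var/lib/vz-drbd    # mount the (non-cluster) filesystem
qm start 101                         # start the VM that was running on node-a
```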

But again, why qcow2 on a filesystem (only snapshots and space savings are a plus)? Take plain LVM storage and you have a stable and fast solution.

About a 2-node cluster: if one node fails, the other node loses quorum, and you can't easily move the configs from the lost node to the running node, because /etc/pve is read-only (writable only if the node(s) have quorum). You can set the quorum by hand, but a third node is better!
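Setting the quorum by hand could look like this on Proxmox VE 2.x (node names and the VM ID below are hypothetical):

```shell
# On the surviving node of the 2-node cluster:
pvecm status        # shows that quorum was lost
pvecm expected 1    # lower the expected votes; /etc/pve becomes writable again
# now the configs of the dead node can be moved, e.g.:
mv /etc/pve/nodes/nodeA/qemu-server/101.conf /etc/pve/nodes/nodeB/qemu-server/
```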

Udo
 
Thank you Udo for answering me.
I appreciate your patience in explaining this to me.

Please let me make some inquiries:
1- With 2 nodes, is it not appropriate to configure a quorum?
2- With two nodes and DRBD, what is the recommended setup? (For example: DRBD active/passive or active/active, and other settings that must also be taken into account.) I want to understand what the ideal strategy would be,
since I want that, in case of the breakdown of a node, I can manually, easily, and with a minimum of steps start the virtual machines on the other node that is still alive.

Please explain it to me well. I also have language problems; English is not my first language.
Thanks in advance for your time, care, and for sharing your knowledge.

Best Regards
Cesar
 
Thank you Udo for answering me.
I appreciate your patience in explaining this to me.

Please let me make some inquiries:
1- With 2 nodes, is it not appropriate to configure a quorum?
Right. If the network connection between both nodes fails (and with it the quorum), both nodes think the other node is dead. Which node is right? Impossible to say.
With three nodes, the remaining two have quorum (on these nodes /etc/pve is writable), and in an HA cluster they fence the dead node, meaning they switch the node off and on again.
This is done to be sure that no VM is still running on that node (otherwise, it is very dangerous to start a VM on the remaining nodes if the original VM is still alive and writing to the VM disk on shared storage).
If you have only two nodes, you can get the quorum back on one node with "pvecm expected 1".
2- With two nodes and DRBD, what is the recommended setup? (For example: DRBD active/passive or active/active, and other settings that must also be taken into account.) I want to understand what the ideal strategy would be,
DRBD active/active with plain LVM storage.
since I want that, in case of the breakdown of a node, I can manually, easily, and with a minimum of steps start the virtual machines on the other node that is still alive.

Please explain it to me well. I also have language problems; English is not my first language.
Thanks in advance for your time, care, and for sharing your knowledge.
I'm also not a native speaker.

Udo
Best Regards
Cesar
 
Thanks again for your explanation; I understand now.

Please can you help me with this question:

I know how to configure a setup like this (I have read all the Proxmox VE tutorials on its website):
- Only two nodes with Proxmox
- 4 NICs in bonding (for the LAN switch and for DRBD)
- Extra hard disks for DRBD
- DRBD
- Without fencing devices

But I don't know how to achieve this:
in case of the breakdown of a node, manually, easily, and with a minimum of steps, start the virtual machines on the other node that is still alive.

How do I make this setup? And what are the minimum steps to start the virtual machines on the other node that is alive?

Best Regards
Cesar
 
Thanks again for your explanation, Dietmar.

Proxmox is just wonderful

Please help me with my question:

This is a possible scenario, and I know how to configure a setup like this (except for GFS2, but I could try it if necessary):
- Only two nodes with Proxmox VE 2.x in a cluster
- Extra disks for DRBD
- DRBD in active/active mode, LVM, etc. (or would GFS2 be more useful?)
- 2 NICs in bonding, 1 pair for the LAN switch and 1 pair for DRBD
- Without fencing devices
- Virtual machines with qcow2 (or raw if necessary)

But I do not know how to configure it so that,
in case of the breakdown of a node, I can manually, easily, and with a minimum of steps start the virtual machines on the other node that is still alive.

How do I make this setup, and what are the minimum steps?
I have read all the tutorials about clusters on the Proxmox website, but I have not seen any that explains whether what I want and need is possible.
If explaining it step by step is too long, I would appreciate the short version!

Best Regards
Cesar
 
Well, this is a thread from the past, but as I was interested in the same thing and didn't find a ready and usable answer, I did it myself and it was quite easy. Dear Proxmox developers: while what I did was an ugly hack, it is possible to add this to PVE as built-in functionality.
First, my setup: a DRBD device with LVM/CLVM on top of it and a GFS2 filesystem on a logical volume. I'm using a 2-node cluster, so I ended up using GFS2 and was looking for LVM-snapshot backups. The main problem was with GFS2 naming, and the answer was to patch OpenVZ.pm in /usr/share/perl5/PVE/VZDump/OpenVZ.pm.

In the function snapshot, after the line:

my $mopts = $di->{fstype} eq 'xfs' ? "-o nouuid" : '';

I added the following:

Code:
if ($di->{fstype} eq 'gfs2') {
    $mopts = "-o ignore_local_fs,lockproto=lock_nolock";
    $self->cmd("echo y | gfs2_tool sb $di->{snapdev} table gfstable-$di->{lvmlv}");
}

That's all; everything works fine now.
 
