Logic of a 2-server cluster without shared storage

AlexZ

New Member
Oct 4, 2013
Hello,
I need help understanding the correct way to configure two servers into a cluster using DRBD and/or LVM.

I am trying to build a failover cluster from two identically configured servers, but it does not work.

I have tested the fault-tolerant scheme in two variants:

Option 1:
There are two servers with identical configuration, each with an identical unallocated partition (is that how it should be?), from which I first create the device /dev/drbd0.
Then I create a volume group drbblvm on it,
and add a clustered LVM storage in Proxmox, selecting the group drbblvm.
But I cannot write anything to it (no images, no templates, no virtual machines).
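For reference, a minimal sketch of what Option 1 describes, assuming the DRBD resource is called r0 and the backing partition is /dev/sdb1; the node names and addresses are placeholders, none of these specifics come from the original post:

```shell
# Hypothetical DRBD resource file /etc/drbd.d/r0.res, identical on both nodes.
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;        # the unallocated partition
    meta-disk internal;
    on node1 { address 10.0.0.1:7788; }
    on node2 { address 10.0.0.2:7788; }
}
EOF

# On both nodes: initialize the metadata and bring the resource up.
drbdadm create-md r0
drbdadm up r0

# On one node only: force it primary for the initial sync.
drbdadm primary --force r0

# LVM on top of the replicated device, as in the post.
pvcreate /dev/drbd0
vgcreate drbblvm /dev/drbd0
```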

Option 2:
Both servers have the same partition, but this time the partition is set up as LVM first, and then I add it to the device /dev/drbd0. I create a filesystem on it and then mount it to a directory.

With this variant I can create virtual machines and save images on this storage,
but migrating a virtual server destroys it.
Is that how it should be?

Please tell me how to implement this scheme correctly.
 
Please tell me how to implement this scheme correctly.

You must use option 2, but do not mount it. In the PVE GUI, add the DRBD-backed storage to both PVE nodes and select the content options you want: images, ISO, templates, etc.
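What the GUI step produces is roughly an LVM storage entry in /etc/pve/storage.cfg on top of the replicated volume group; a sketch, assuming the volume group from the first post (drbblvm) and a hypothetical storage ID:

```
# Hypothetical /etc/pve/storage.cfg fragment; "drbd-lvm" is an example ID.
lvm: drbd-lvm
        vgname drbblvm
        content images
        shared 1
```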
 
I have tried to do so, but did not get resilience...
With a manual migration, the virtual machine is destroyed (its files are erased) and it will not start.
When the node1 server fails, the virtual machine does not start (even though it is listed in HA), and in some cases it is destroyed as well.

That is why I asked this question...
What am I doing wrong?
 

This subject has been covered in this forum hundreds of times; I suggest you check similar topics from the past.

But my suggestion is to use two DRBD volumes, each intended primarily for one PVE host and its corresponding VMs. That way it is easier to do online migration between the two PVE nodes, or to use HA if a node goes down.
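A sketch of the two-volume layout described above, with one DRBD resource per node's VMs; all resource names, disks, and addresses are placeholders, not taken from the post:

```
# Hypothetical /etc/drbd.d/ resources: r0 backs the storage for VMs that
# normally run on node1, r1 the storage for VMs that normally run on node2.
resource r0 {
    device /dev/drbd0; disk /dev/sdb1; meta-disk internal;
    on node1 { address 10.0.1.1:7788; }
    on node2 { address 10.0.1.2:7788; }
}
resource r1 {
    device /dev/drbd1; disk /dev/sdc1; meta-disk internal;
    on node1 { address 10.0.2.1:7789; }
    on node2 { address 10.0.2.2:7789; }
}
```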

On the other hand, if replication of one DRBD volume fails, with this setup you can resynchronize the failed DRBD volume without affecting the VMs running on the other PVE node, which use the other DRBD volume (easy repair and low risk).

This technique has worked for me for many years with excellent results.

Suggestions for best replication results:
1- Connect the NICs NIC-to-NIC, without a switch in the middle (this avoids another point of failure and also performs better).
2- Use 2 NICs of the same brand and model for each DRBD volume, and create a balance-rr bond for each pair of NICs; this way you get double the network replication speed for each DRBD volume.
So, if I have 2 DRBD volumes, I use 4 NICs exclusively for this function.
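The bonding suggestion could look roughly like this in /etc/network/interfaces (Debian/PVE ifupdown style); the interface names and address are examples only:

```
# Hypothetical balance-rr bond over the two NICs dedicated to one DRBD volume.
auto bond0
iface bond0 inet static
        address 10.0.1.1
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-rr
        bond-miimon 100
```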

General suggestion:
Never use Realtek NICs; they drop the network link several times per day, so they should not be used in production environments.

Best regards
Cesar
 
