2-node cluster with DRBD and 2 resources - possible?

pdrolet

New Member
Jan 13, 2013
Hi,

I have already configured a 2-node cluster (DRBD) with a quorum disk and it works flawlessly. But I am having difficulty with the following configuration...

I would like to optimize the hardware usage. Here is what I have in mind.

Hardware description:
2 servers, each with 32 GB RAM, 4 volumes and 32 cores.

I configured 2 DRBD resources on LVM (I installed v8.4.2 because it supports multiple volumes per resource). Here are my files r1.res and r2.res; r1 has 3 volumes and r2 has one.

resource r1 {
    protocol C;
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "my-secret";
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    volume 0 {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
    }
    volume 1 {
        device /dev/drbd1;
        disk /dev/sdc1;
        meta-disk internal;
    }
    volume 2 {
        device /dev/drbd2;
        disk /dev/sdd1;
        meta-disk internal;
    }
    on vmlidi1 {
        address 10.0.7.105:7788;
    }
    on vmlidi2 {
        address 10.0.7.106:7788;
    }
}

resource r2 {
    protocol C;
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    handlers {
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "my-secret";
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    volume 0 {
        device /dev/drbd3;
        disk /dev/sdb2;
        meta-disk internal;
    }
    on vmlidi1 {
        address 10.0.7.105:7789;
    }
    on vmlidi2 {
        address 10.0.7.106:7789;
    }
}
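On top of these DRBD devices I then put LVM, so that Proxmox can use the volume groups as storage for the VM disks. From memory it was roughly this (the VG names are only examples, not necessarily what I used):

# one VG per DRBD resource, created on the /dev/drbdX devices
pvcreate /dev/drbd0 /dev/drbd1 /dev/drbd2
vgcreate r1vg /dev/drbd0 /dev/drbd1 /dev/drbd2
pvcreate /dev/drbd3
vgcreate r2vg /dev/drbd3
# (plus an lvm.conf filter so LVM scans the drbd devices and not the backing sdb/sdc/sdd partitions directly)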



Is it possible to activate an HA cluster with this setup? Keep in mind that I would like to have VMs running on both servers (on the volumes managed by r1 on one, and on the volume managed by r2 on the other). If one server is down, then all the VMs would be running on the same server. Doing so would allow me to use more of the cores and RAM of those 2 servers.

Thanks for your help!

Patrice
 
Hello!

No answer to my initial post... I guess I need to make my post clearer.

So I would like to continue my evaluation of Proxmox. If it cannot do what I need it to do, I will have to look for another solution. But I would love Proxmox to do this - I would rather give my money to this open source project than to a commercial closed product.

Here is what I did yesterday.

On my HA cluster with 2 nodes, node 2 was the active server running 2 VMs (100 and 101, on the DRBD volumes of r1). I restored a VM (102) on node 1 with qmrestore to create a new VM. After the restore, it ran on node 1 while my 2 other VMs ran on node 2. Very nice. Exactly what I wanted.
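The restore itself was just qmrestore pointed at the DRBD-backed storage, something like this (the archive path and the storage ID are only examples, not my real ones):

qmrestore /mnt/backup/vzdump-qemu-100-2013_01_14-10_00_00.tar.lzo 102 --storage r2vg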

But then I stopped rgmanager on node 1. The VM on this node moved to node 2 automatically (all 3 VMs are configured in cluster.conf so the cluster restarts them). Now I have all 3 VMs running on node 2. So far so good.

But now there is no way I can reactivate only VM 102 on node 1. I restarted rgmanager on node 1; as expected, nothing happened. I can now only move all the VMs en bloc from one node to the other. I guess I would need another configuration of the cluster file.

Here is my cluster.conf file:
<?xml version="1.0"?>
<cluster config_version="5" name="mytestcluster">
    <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
    <clusternodes>
        <clusternode name="vmlidi1" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device action="off" name="fenceNodeA"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="vmlidi2" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device action="off" name="fenceNodeB"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice agent="fence_ipmilan" ipaddr="192.168.1.17" lanplus="1" login="XXXXX" name="fenceNodeA" passwd="XXXXXX"/>
        <fencedevice agent="fence_ipmilan" ipaddr="192.168.1.18" lanplus="1" login="XXXXX" name="fenceNodeB" passwd="XXXXXX"/>
    </fencedevices>
    <rm>
        <pvevm autostart="1" vmid="100"/>
        <pvevm autostart="1" vmid="101"/>
        <pvevm autostart="1" vmid="102"/>
    </rm>
</cluster>
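From what I have read so far, I suspect the missing piece is failover domains, so that each VM has a preferred node but can still run on the other one. A rough sketch of what the <rm> section might look like is below (the domain names are only examples, config_version would have to be bumped, and I am not sure whether the pvevm tag accepts a domain attribute the same way a <service> does):

<rm>
    <failoverdomains>
        <!-- nofailback="0" so a VM moves back once its preferred node rejoins -->
        <failoverdomain name="prefer_node1" ordered="1" restricted="0" nofailback="0">
            <failoverdomainnode name="vmlidi1" priority="1"/>
            <failoverdomainnode name="vmlidi2" priority="2"/>
        </failoverdomain>
        <failoverdomain name="prefer_node2" ordered="1" restricted="0" nofailback="0">
            <failoverdomainnode name="vmlidi1" priority="2"/>
            <failoverdomainnode name="vmlidi2" priority="1"/>
        </failoverdomain>
    </failoverdomains>
    <pvevm autostart="1" vmid="100" domain="prefer_node2"/>
    <pvevm autostart="1" vmid="101" domain="prefer_node2"/>
    <pvevm autostart="1" vmid="102" domain="prefer_node1"/>
</rm>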

I am planning to add a quorum disk next week (I ordered a tiny linux box that should be ok for the job). But I doubt this would make any difference.
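If I do add it, my understanding is that the quorum disk is just a small shared LUN (the little box would presumably export it over iSCSI), initialized with mkqdisk and declared in cluster.conf, roughly like this (device and label are only examples), together with dropping two_node="1" and raising expected_votes:

mkqdisk -c /dev/sde -l pmx_qdisk

<quorumd interval="1" tko="10" votes="1" label="pmx_qdisk"/>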

Thanks for your help,

Patrice
 
To complete this post, I removed VM 102 from the cluster.conf file. Now I have only
...
<rm>
    <pvevm autostart="1" vmid="100"/>
    <pvevm autostart="1" vmid="101"/>
</rm>

And I can manually migrate 102 to node 1. This takes only a few seconds since the VG is synchronized by DRBD. But then this is not HA... If, for instance, the RAID card breaks on one node, I cannot live migrate the VM manually, for I would not have access to the node at all!
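For the record, the manual move is just a live migration, something like:

qm migrate 102 vmlidi1 --online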

I found this excellent tutorial:
https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

Can I use this to make Proxmox work the way I need? There is no pvevm line in the example given, but it seems to me that what is demonstrated is exactly what I need.

In the wiki about Proxmox HA with DRBD, there is no mention of making the volume groups cluster-enabled (with the -c switch). I must admit that HA is a lot simpler as demonstrated in the wiki than in the tutorial mentioned above, but maybe that simplicity hides some more powerful clustering functions that are still completely compatible with Proxmox and DRBD. Is it completely compatible? Do I need to use a pvevm tag?
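What the tutorial does there, if I read it correctly, is mark the volume group as clustered and run clvmd on both nodes, for example (the VG name is only an example):

vgchange -c y r1vg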
 
