DRBD cluster creation problem

dendi

Renowned Member
Nov 17, 2011
Hi,

I have a working 3-node cluster; each node is updated from the enterprise repository.

I followed this howto: http://pve.proxmox.com/wiki/DRBD9

Code:
drbdmanage init 10.12.198.211

You are going to initalize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
  Failed to find logical volume "drbdpool/.drbdctrl_0"
  Failed to find logical volume "drbdpool/.drbdctrl_1"
  Logical volume ".drbdctrl_0" created.
  Logical volume ".drbdctrl_1" created.
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
empty drbdmanage control volume initialized.
empty drbdmanage control volume initialized.
Operation completed successfully
root@pve211:~# drbdmanage list-nodes
+------------------------------------------------------------------------------------------------------------+
| Name   | Pool Size | Pool Free |                                                                   | State |
|------------------------------------------------------------------------------------------------------------|
| pve211 |     10240 |     10176 |                                                                   |    ok |
+------------------------------------------------------------------------------------------------------------+
root@pve211:~# drbdmanage add-node pve212 10.12.198.212
Operation completed successfully
Operation completed successfully

Executing join command using ssh.
IMPORTANT: The output you see comes from pve212
IMPORTANT: Your input is executed on pve212
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
  Failed to find logical volume "drbdpool/.drbdctrl_0"
  Failed to find logical volume "drbdpool/.drbdctrl_1"
  Logical volume ".drbdctrl_0" created.
  Logical volume ".drbdctrl_1" created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
Error: Operation not allowed on satellite node

root@pve211:~# drbdmanage list-nodes
+------------------------------------------------------------------------------------------------------------+
| Name   | Pool Size | Pool Free |                                     |                               State |
|------------------------------------------------------------------------------------------------------------|
| pve211 |     10240 |     10176 |                                     |                                  ok |
| pve212 |   unknown |   unknown |                                     | pending actions: adjust connections |
+------------------------------------------------------------------------------------------------------------+

Any idea?

Thank you
 
Hi e100 and thank you for your reply!


Code:
root@pve211:~# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  pve212 connection:Connecting

Code:
root@pve212:~# drbdadm status
root@pve212:~#

There is no output from the second node; it has the LV metadata (created from node 1), but it isn't in the cluster.

I have no idea what actions I could perform manually; I only found documentation for drbdmanage :-(
 
drbdmanage simply 'manages' DRBD resources.
You should still be familiar with managing DRBD itself; otherwise, when things go wrong, you will be in a terrible position.

On pve212, try to bring up the drbdmanage control resource with this command:
Code:
drbdadm up .drbdctrl

Post the output of that command, and if it appears successful, I'd like to see the output of "drbdadm status" from both nodes again.

The drbdmanage from the pve-test repo is a little less buggy.
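For readers in the same spot, here are a few of the plain drbdadm subcommands worth knowing for manual intervention (a reference sketch; `.drbdctrl` is drbdmanage's control resource, but the same commands apply to any resource):

```shell
# Basic manual DRBD management (sketch)
drbdadm status                 # state of all locally defined resources
drbdadm up .drbdctrl           # attach disks and start connections for a resource
drbdadm down .drbdctrl         # tear the resource down again
drbdadm adjust .drbdctrl       # re-apply the on-disk config to the running resource
drbdadm connect .drbdctrl      # (re)establish the network connection
drbdadm disconnect .drbdctrl
```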
 
Thank you again, e100...

Code:
root@pve212:~# drbdadm up .drbdctrl

no resources defined!
root@pve212:~# drbdsetup status
root@pve212:~#

Is there a supported way to upgrade only DRBD from pve-test?
I currently have pve-enterprise on all nodes...
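I don't know of an officially supported way, but apt pinning should let you pull just the drbdmanage package from pvetest while keeping everything else on pve-enterprise. A hedged sketch (the repo line and component name are assumptions based on the standard Proxmox repository layout):

```shell
# /etc/apt/sources.list.d/pvetest.list (assumed, jessie-based PVE 4.x era):
#   deb http://download.proxmox.com/debian jessie pvetest

# /etc/apt/preferences.d/drbdmanage -- pin so only drbdmanage comes from pvetest:
#   Package: drbdmanage
#   Pin: release c=pvetest
#   Pin-Priority: 600
#
#   Package: *
#   Pin: release c=pvetest
#   Pin-Priority: 100

apt-get update
apt-get install drbdmanage    # the pin makes apt prefer the pvetest version
```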
 
Hi,

I had a similar problem when trying to create a DRBD9 cluster while wrong information (both addresses and aliases) was present in /etc/hosts (Proxmox on top of stock Debian). In my case, one node would be added but refuse to connect.

I "solved" it by uninitializing everything DRBD9-related after correcting /etc/hosts and recreating the cluster. The "Operation not allowed on satellite node" error was gone.
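In case it helps, the rough command sequence for that "uninit and recreate" (a sketch using the addresses from this thread; note that `uninit` discards the drbdmanage cluster state, so try it on a test setup first):

```shell
# Sketch of wiping and recreating the drbdmanage cluster (DESTROYS cluster metadata)
drbdmanage uninit            # run on every node that was part of the cluster

# fix /etc/hosts on all nodes, then on the first node:
drbdmanage init 10.12.198.211
drbdmanage add-node pve212 10.12.198.212
drbdmanage add-node pve213 10.12.198.213
```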
 
Found the problem...
In my hosts file I had:
Code:
public-ip-211    pve211
public-ip-212    pve212
public-ip-213    pve213
I used a private IP (on a different interface/VLAN) for DRBD; using only the public IP works correctly.

So do you think it is correct to set two hostnames per node?
For example:

Code:
public-ip-211    pve211
public-ip-212    pve212
public-ip-213    pve213
private-ip-211    drbd211
private-ip-212    drbd212
private-ip-213    drbd213
 
I'll answer myself: I think not, because the hostname you use for add-node must match the output of "uname -n".
So how can I use a separate network for DRBD?
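One approach I'd try (my assumption, not from the wiki): keep the node names exactly as `uname -n` reports them, but make those names resolve to the private/DRBD-network addresses in /etc/hosts on every node, e.g. with the addresses from this thread:

```shell
# /etc/hosts sketch: the pveNNN names resolve to the private DRBD network,
# so drbdmanage uses that link while the names still match `uname -n`
10.12.198.211    pve211
10.12.198.212    pve212
10.12.198.213    pve213
```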
 
....

Error: Operation not allowed on satellite node

Any idea?

Thank you

This is a known issue. You need to reboot all nodes after installing drbdmanage.

We have already added this hint to our wiki.
 
