Error configuring synchronization between nodes. DRBD

doknet

Guest
I want to configure the system to sync with DRBD.

I follow the steps set out in
http://pve.proxmox.com/wiki/DRBD#Network

On each machine (server1 and server2) I have two Ethernet cards; the eth1 interfaces are connected to each other with a crossover cable (network 10.0.7.0).
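(A quick sanity check that the crossover link itself works, using the eth1 addresses shown in the interfaces files below, would be something like this from server1; this is just a sketch, not output I have captured:)
Code:
# eth1 should be up with the 10.0.7.105 address
ifconfig eth1
# server2's eth1 should answer over the direct cable
ping -c 3 10.0.7.106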


server1
Code:
cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.105
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.201
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

Server2

And from the second node:
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.106
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.202
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
But in both cases the web interface tells me that the node has the IP of eth0 and not of eth1, and when I continue on this path the cluster creation fails; I can see it authenticates over the eth0 network (192.168.1.0) and not over the eth1 network, even though the connection between the machines goes through the eth1 network cards.


SERVER1
Code:
cat /etc/hosts
127.0.0.1       localhost
10.0.7.105 vm1
192.168.1.201 vm1.iuoglocal vm1 pvelocalhost
10.0.7.106      vm2
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

On server1 (vm1), web interface:

System
Code:
Interface	Active	Ports/Slaves	Autostart	IP Address	Subnet Mask	Gateway
eth1	yes			10.0.7.105	255.255.240.0	 
vmbr0	yes	eth0		192.168.1.201	255.255.255.0	192.168.1.1

Cluster
Code:
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	192.168.1.201	Master	active	00:10	0.00	0%	0%	4%	1%

Server2
Code:
 cat /etc/hosts
127.0.0.1       localhost
10.0.7.106 vm2
192.168.1.202 vm2.iuoglocal vm2 pvelocalhost
10.0.7.105      vm1
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


On server2, web interface:

Code:
Interface	Active	Ports/Slaves	Autostart	IP Address	Subnet Mask	Gateway
	eth1	yes			10.0.7.106	255.255.240.0	 
	vmbr0	yes	eth0		192.168.1.202	255.255.255.0	192.168.1.1

Cluster
Code:
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm2	192.168.1.202	-	active	00:36	0.00	0%	0%	3%	1%

Can I change the IP of the node so that it uses the IP on the eth1 network?

I ask this because if I continue down this path, when I create the cluster the master tries to synchronize over the 192.168.0.0 network and not through the 10.0.7.0 network cards.


Step 1

server1
Code:
pveca -c

pveca -l
CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---DISK
 1 : 192.168.1.201      M     A           00:14   0.06     4%     1%


On server2


Code:
vm2:/etc/pve# pveca -a -h 192.168.1.201
The authenticity of host '192.168.1.201 (192.168.1.201)' can't be established.
RSA key fingerprint is 96:3d:7c:94:b7:fe:29:6d:c8:29:a9:da:96:e6:86:80.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.201' (RSA) to the list of known hosts.
root@192.168.1.201's password:



And I get an error:


Server1
Code:
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	192.168.1.201	Master	nosync	00:52	0.08	0%	0%	4%	1%
vm2	192.168.1.202	Node	ERROR: 500 Can't connect to 127.0.0.1:50000 (connect: Connection refused)

server2
Hosts

Code:
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	192.168.1.201	Master	ERROR: Ticket authentication failed - invalid ticket 'root::root::1281214686::dfdf121dfee04bc947943653bde4c28be06e9a7f::e5ca3ad03b2c0b1dcab2dec527b279d6e948e25f'
vm2	192.168.1.202	Node	nosync	01:17	0.00	0%	0%	4%	1%


If on server2 I run

pveca -a -h 10.0.7.105

it also fails.

Any idea where my errors are?

Sorry for my English.
 
pve always uses the address of vmbr0 (or eth0). But I can't see how that relates to DRBD. What exactly is the problem?
 
pve always uses the address of vmbr0 (or eth0). But I can't see how that relates to DRBD. What exactly is the problem?

Thanks for the reply. I want to get synchronization with DRBD working,
but so far I have not been able to use it because I am stuck at the first stage.
Following the wiki: http://pve.proxmox.com/wiki/DRBD


Make sure you run at least Proxmox VE 1.4 (currently beta) on both servers and create the well known standard Proxmox VE Cluster.
(http://pve.proxmox.com/wiki/Proxmox_VE_Cluster)

1) First I install Proxmox on the two servers (server1 and server2), without loading any virtual machine. On both servers I have two network cards.


server1
Code:
cat /etc/hosts
127.0.0.1       localhost
192.168.1.201 vm1.iuoglocal vm1 pvelocalhost
10.0.7.106  vm2


cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.105
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.201
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


Server2
Code:
cat /etc/hosts
127.0.0.1       localhost
192.168.1.201 vm1.iuoglocal vm1 pvelocalhost
10.0.7.106  vm2

cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.106
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.202
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

Proxmox_VE_Cluster


a) pveca -l
server1
local node '192.168.1.201' not part of cluster
server2
local node '192.168.1.202' not part of cluster
2) Then on server1:
pveca -c

3) On server2:
pveca -a -h 192.168.1.201 (I use this address because it is the one Proxmox takes; as you say, it uses eth0... but in the wiki the DRBD servers synchronize over eth1)

4) Here I got this error:
server1
Hostname IP Address Role State Uptime Load CPU IODelay Memory Disk
vm1 192.168.1.201 Master nosync 17:36 0.00 0% 0% 4% 1%
vm2 192.168.1.202 Node ERROR: Ticket authentication failed - invalid ticket 'root::root::1281300469::b192a1f8e098b86418367b1ed04dd7bf263521df::d26170a048bd9f4f599a4c9a5b7883382d204bb5'

ERROR: Ticket authentication failed - invalid ticket

5) This error was because the two servers had different dates/times; after synchronizing the clocks it now works fine (a minimal time-check sketch follows the cluster output below).
Code:
server1
Cluster Nodes
Hosts	
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	192.168.1.201	Master	active	18:11	0.00	0%	0%	9%	1%
vm2	192.168.1.202	Node	active	18:31	0.26	0%	0%	4%	1%

server2
Cluster Nodes
 

Hosts	
	
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	192.168.1.201	Master	active	18:12	0.00	0%	0%	9%	1%
vm2	192.168.1.202	Node	active	18:32	0.17	0%	0%	4%	1%
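For reference, a minimal way the clocks could be checked and aligned on both nodes; a sketch, where ntpdate and pool.ntp.org are only example choices, any common time source works:
Code:
# run on server1 and server2 and compare the output
date
# one-off sync against a public NTP server (ntpdate package)
ntpdate pool.ntp.org
# optionally keep the clocks synchronized from now on
apt-get install ntp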



http://pve.proxmox.com/wiki/Proxmox_VE_Cluster




Now I want to move forward with DRBD, and I need the synchronization to be done over the crossover cable on eth1.
(http://pve.proxmox.com/wiki/Proxmox_VE_Cluster)


System requirements

You need 2 identical Proxmox VE servers with the following extra hardware:
Free network card (connected with a direct crossover cable)
Second raid volume (e.g. /dev/sdb)
Use a hardware raid controller with BBU to eliminate performance issues concerning internal metadata (see Florian's blog).
Preparations

If I follow the DRBD tutorial, the trust relationship is established over the eth0 network cards.
How can I change that?

a) Change the IPs 192.168.1.x to 10.0.7.x in /etc/pve/cluster.cfg, authorized_keys and known_hosts?

Thanks for the reply, and sorry for my English.
 
Thanks, dietmar.

I continued with the configuration according to the wiki, but I made this change in some files, and then followed the rest of the steps. In the end I have a problem with DRBD; could you tell me where the root of the problem might be?

1) On both servers I change
Code:
/etc/pve/cluster.cfg and authorized_keys ; known_hosts:
change 192.168.1.201 > 10.0.7.105 and 192.168.1.202 > 10.0.7.106

and it seems to work right (a sketch of scripting this substitution follows the cluster listing below):
server1
Hostname	IP Address	Role	State	Uptime	Load	CPU	IODelay	Memory	Disk
vm1	10.0.7.105	Master	active	20:40	0.00	0%	0%	9%	1%
vm2	10.0.7.106	Node	active	21:01	0.00	0%	0%	4%	1%
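As mentioned in step 1 above, a minimal sketch of how that substitution could be scripted on each node; the paths /root/.ssh/authorized_keys and /root/.ssh/known_hosts are my assumption for where those files live, and editing them by hand works just as well:
Code:
# on server1 and server2: back up, then swap the cluster addresses
cp /etc/pve/cluster.cfg /etc/pve/cluster.cfg.bak
sed -i -e 's/192\.168\.1\.201/10.0.7.105/g' \
       -e 's/192\.168\.1\.202/10.0.7.106/g' \
       /etc/pve/cluster.cfg /root/.ssh/authorized_keys /root/.ssh/known_hosts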


2) I have two disks as shown in the wiki
Code:
Proxmox is installed on /dev/sda
on /dev/sdb there is one partition (as shown in the wiki, type 8e)
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       60801   488384001   8e  Linux LVM
and I formatted it on both servers: mkfs.ext3 /dev/sdb1
3) On both servers I set /etc/drbd.conf
Code:
global { usage-count no; }
common { syncer { rate 30M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;     # wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "my-secret";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on vm1 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.7.105:7788;
                meta-disk internal;
        }
        on vm2 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.7.106:7788;
                meta-disk internal;
        }
}
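(Note: DRBD matches the "on vm1" / "on vm2" section names against each node's hostname, and /etc/drbd.conf must be identical on both servers. A quick check, assuming the drbd8-utils package provides drbdadm dump:)
Code:
# the hostname must match the "on <name>" section for this node
uname -n
# parse /etc/drbd.conf and print resource r0; errors point to a config problem
drbdadm dump r0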

4) When I try to start DRBD it gives me a similar error on both servers:

/etc/init.d/drbd start
Code:
Starting DRBD resources:[ d(r0) 0: Failure: (119) No valid meta-data signature found.

        ==> Use 'drbdadm create-md res' to initialize meta-data area. <==


[r0] cmd /sbin/drbdsetup 0 disk /dev/sdb1 /dev/sdb1 internal --set-defaults --create-device  failed - continuing!

s(r0) n(r0) ]..........
***************************************************************
 DRBD's startup script waits for the peer node(s) to appear.
 - In case this node was already a degraded cluster before the
   reboot the timeout is 60 seconds. [degr-wfc-timeout]
 - If the peer was available before the reboot the timeout will
   expire after 15 seconds. [wfc-timeout]
   (These values are for resource 'r0'; 0 sec -> wait forever)
 To abort waiting enter 'yes' [  14]:
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
.
vm1:~/.ssh# ps -ef |grep drbd
root     22627 17425  0 20:26 pts/1    00:00:00 grep drbd

6) Status check: /etc/init.d/drbd status
Code:
drbd driver loaded OK; device status:
version: 8.3.4 (api:88/proto:86-91)
GIT-hash: 70a645ae080411c87b4482a135847d69dc90a6a2 build by root@oahu, 2010-04-15 10:24:43
m:res  cs            ro  ds  p  mounted  fstype
0:r0   Unconfigured

7) drbdadm create-md r0
Code:
md_offset 500105211904
al_offset 500105179136
bm_offset 500089913344

Found ext3 filesystem
   488384000 kB data area apparently used
   488369056 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
   * use external meta data (recommended)
   * shrink that filesystem first
   * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v08 /dev/sdb1 internal create-md' terminated with exit code 40
drbdadm create-md r0: exited with code 40
vm1:~/.ssh# drbdadm up r0
0: Failure: (119) No valid meta-data signature found.

        ==> Use 'drbdadm create-md res' to initialize meta-data area. <==

Command 'drbdsetup 0 disk /dev/sdb1 /dev/sdb1 internal --set-defaults --create-device' terminated with exit code 10
How do I correct this error?

8) drbdadm up r0
Code:
vm1:~/.ssh# drbdadm up r0
0: Failure: (119) No valid meta-data signature found.

        ==> Use 'drbdadm create-md res' to initialize meta-data area. <==

Command 'drbdsetup 0 disk /dev/sdb1 /dev/sdb1 internal --set-defaults --create-device' terminated with exit code 10


Sorry, any idea where my error is? At each step the commands offered me several options, but I did not apply any of them hastily, to avoid introducing new problems.

Could someone point me to where my error is?
 
Is there already an ext3 filesystem on /dev/sdb1?

OK, thanks for answering, I'll try to do it.

Is this correct?

dd if=/dev/zero bs=1M count=1 of=/dev/sdb1; sync
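(For anyone hitting the same error: after wiping the old signature, the remaining bring-up, taken from the error messages and the DRBD wiki linked above, would be roughly the following; a sketch, with the forced sync run on one node only since it discards the peer's copy:)
Code:
# on both nodes: wipe the old ext3 signature so create-md can proceed
dd if=/dev/zero bs=1M count=1 of=/dev/sdb1; sync
# on both nodes: create the internal metadata and bring the resource up
drbdadm create-md r0
drbdadm up r0
# on ONE node only: force the initial full sync (overwrites the peer's data)
drbdadm -- --overwrite-data-of-peer primary r0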

Thanks for your help, I got the configuration working.

Code:
Every 2.0s: cat /proc/drbd    Mon Aug 9 18:39:29 2010

version: 8.3.4 (api:88/proto:86-91)
GIT-hash: 70a645ae080411c87b4482a135847d69dc90a6a2 build by root@oahu, 2010-04-15 10:24:43
0: cs:SyncSource ro:primary/Secondary ds:UpToDate/Inconsistent C r----
ns:244171776 nr:0 dw:0 dr:244172432 al:0 bm:14903 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:244197280
[=========>..........] sync'ed: 50.0% (238472/476920)M
finish: 1:43:49 speed: 39,188 (30,712) K/sec

One last question:
does DRBD support synchronization of VMs, both fully installed ones and those created from templates?

Thanks
 
Sorry, Dietmar, if I am overstepping my limit for assistance with this request, but the situation has left me with a doubt I do not know how to resolve.
I have only recently started using Proxmox and was not familiar with LVM and DRBD.
Now I have doubts about whether or not I should mount the DRBD device.
I added my question to the post http://forum.proxmox.com/threads/3644-drbd-mount (which appears first in Google on the subject).
If you, or someone on the team, could answer it, that would be a great help.
Thank you very much in advance.
 
