DRBD Assistance

adamb

I have a 2 node cluster using DRBD in a primary/primary setup.

I am trying to migrate my DRBD connection from a 1GB adapter to a 10GB adapter. Here is what I am doing.

Migrate all VMs to node1.

Change the following files on each node:
/etc/drbd.d/r0.res
/etc/drbd.d/r1.res

Then issue "drbdadm adjust all", which makes the network changes take effect.

I end up with this.

Node1
root@fiosprox1:~# cat /proc/drbd
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:29750800 dr:20928336 al:54373 bm:2960 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:100920
1: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:7332996 dr:12354055 al:343 bm:285 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:7932

Node2
root@fiosprox2:~# cat /proc/drbd
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:StandAlone ro:Secondary/Unknown ds:Outdated/DUnknown r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0


No biggie, as all my VMs are on Node1. Here is what I am doing:
Node2 - "drbdadm -- --discard-my-data connect r0"
Node1 - "drbdadm connect r0"

Once I issue the connect commands, node2 kernel panics and locks up. I have to reset the server using IPMI.

I can set /etc/drbd.d/r0.res and /etc/drbd.d/r1.res back to the old addresses and DRBD will sync back up.

Not sure what the issue is. The network on the 10GB adapters looks good; I can ping and SSH between the two machines. I must be missing something very simple.
 
Ended up doing a complete reinstall on both nodes. Same issue with DRBD: as soon as the sync starts, the target node panics and dies. The node which is still up reports a network failure in /proc/drbd. Networking seems fine; ping, SSH, everything works.

I also found out that I have an issue with cluster communication over this link. When attempting a live migration over the 10GB link, the target node will kernel panic and die. Has anyone seen such an issue?

I installed ethtool and confirmed that the NIC is connecting at 10GB, with no errors or dropped packets. Nothing in the logs, and the card seems to function properly. This has me stumped. I appreciate any and all feedback! I am currently waiting on our hardware department to bring me two fresh nodes to start testing this issue on.
 
Why don't you test with the latest version?

DRBD 8.3.10 is quite outdated.
 

It should be the latest, as this was a fresh install. That version was from before I did the complete reinstall. Here is what I am on now.

root@fiosprox1:~# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.13 (api:88/proto:86-96)
 
No, our latest kernel already uses 8.3.13, but that is also not the latest from the 8.3.x branch. If you use DRBD, you should use the latest version of both the kernel module and the userspace tools (compile them yourself).

Best would be to go for a DRBD support subscription (linbit.com). AFAIK, the 8.3.x branch goes out of support by the end of this year (http://www.drbd.org/home/releases/).
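If someone wants to try that, the rough outline on a Proxmox node looks something like this (only a sketch; the exact configure flags and tarball name depend on the DRBD release, so check LinBit's documentation for the version you pick):

# build prerequisites; the pve-headers package must match the running kernel
apt-get install build-essential flex pve-headers-$(uname -r)

# unpack the latest 8.3.x tarball from LinBit (see the releases page linked above)
cd /usr/local/src
tar xzf drbd-8.3.x.tar.gz && cd drbd-8.3.x     # replace 8.3.x with the actual version

# build and install both the kernel module and the userland tools
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km
make && make install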
 
Sorry for jumping into this thread, but I'm curious about DRBD; I'm collecting info and am going to test it in a simulated virtual environment.
I thought that Proxmox provided a working, up-to-date DRBD solution, am I wrong? The wiki does not mention that the version that comes with Proxmox VE is not working well and that you have to find a suitable updated kernel module and compile the userspace tools yourself.
DRBD with 2 nodes seems very interesting to me, but if it is not fully supported by Proxmox I will stop investigating it.
Thanks a lot
 
Sounds good, I will try the latest.

I don't feel this is the issue though, because of the problems we also see with cluster communication over 10GB. During live migration the target node will kernel panic and drop. I can go back to a 1GB connection for cluster communication and all is well. It just doesn't add up. Can anyone who has dealt with 10GB comment on this issue?
 
Hi Adam,
is something wrong with your 10G connection (driver/network)?

I have DRBD running over 10G and it is very, very stable.
How fast is your 10GB connection (in both directions) when you measure it with iperf? Perhaps you will see a problem there.
Can you post your DRBD configs?
Which 10G NIC do you use? Can you post the output of "lspci -v" for the 10G NIC?

Udo
 
I will admit I did not compile the userland tools to match, but in the past with 1GB adapters I did not need to.

Here is the output of "lspci -v" for the 10GB NIC. It is a dual-port Broadcom; we currently use these cards in the field with DRBD on CentOS. This is a server-proven card specifically supported for our IBM hardware.

20:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57712 10 Gigabit Ethernet (rev 01)
Subsystem: Broadcom Corporation Device 1202
Flags: bus master, fast devsel, latency 0, IRQ 42
Memory at fb000000 (64-bit, prefetchable) [size=8M]
Memory at fa800000 (64-bit, prefetchable) [size=8M]
Memory at fb870000 (64-bit, prefetchable) [size=64K]
Expansion ROM at f3800000 [disabled] [size=256K]
Capabilities: [48] Power Management version 3
Capabilities: [50] Vital Product Data
Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
Capabilities: [a0] MSI-X: Enable+ Count=17 Masked-
Capabilities: [ac] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [13c] Device Serial Number 00-10-18-ff-fe-d6-13-b0
Capabilities: [150] Power Budgeting <?>
Capabilities: [160] Virtual Channel
Capabilities: [1b8] Alternative Routing-ID Interpretation (ARI)
Capabilities: [220] #15
Kernel driver in use: bnx2x

I am starting to think it could possibly have been the Cat 7 cable I was using, even though networking seemed 100% (SSH, ping, everything looked good). Still waiting on test hardware to figure this out. I will update this thread with my findings.
 
Looks like it's the 10GB network. No switch, just a direct connection between the 10GB NICs. All seems well for ping, SSH and so on, but I attempted a heavy transfer using scp and the target node crashed pretty hard. At least this all adds up as to why I was seeing issues with cluster communication as well.
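In case anyone wants to reproduce it, a quick way to hammer the link without DRBD involved at all (the IP is a placeholder for the peer's 10GB address):

# push roughly 10GB of zeros across the direct 10GB link over ssh
dd if=/dev/zero bs=1M count=10000 | ssh root@10.99.99.2 'cat > /dev/null'

Light traffic (ping, interactive ssh) is fine; the crash only shows up under this kind of sustained load.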

We currently use these cards in a DRBD scenario on CentOS. Looks like we are using these drivers.

Should I look to get the most up-to-date drivers, or try to go back to what I know works?

[root@supportHA1 ~]# uname -r
2.6.32-220.7.1.el6.x86_64


[root@supportHA1 ~]# ethtool -i eth0
driver: bnx2x
version: 1.70.00-0
firmware-version: bc 6.2.15 phy 4.f
bus-info: 0000:1b:00.0


Proxmox is using a slightly newer one:
root@medprox1:/tmp/drbd/drbd-8.3# ethtool -i eth0
driver: bnx2x
version: 1.72.00-0
firmware-version: bc 6.2.15 phy 4.f
bus-info: 0000:20:00.0
 
Looking to compile the latest drivers, I guess. Any tips or suggestions on this? I have compiled a ton of code on CentOS/Red Hat but not much of anything on Debian/Ubuntu.
 
apt-get install build-essential pve-headers-2.6.32-16-pve

After that simply follow the instructions in the driver source.
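In rough outline it is the usual out-of-tree driver build (version numbers below are only examples; always check the README shipped in the tarball):

# toolchain plus headers matching the running kernel
apt-get install build-essential pve-headers-$(uname -r)

# unpack the Broadcom netxtreme2 source and build against the running kernel
cd /usr/local/src
tar xzf netxtreme2-7.2.20.tar.gz && cd netxtreme2-7.2.20
make && make install

# reload the driver to pick up the new module (this drops the link; unload cnic first if it is loaded)
modprobe -r bnx2x && modprobe bnx2x
ethtool -i eth2 | grep version      # eth2 is a placeholder for the 10GB interface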

I appreciate the input! I have made it this far, but I get a number of errors when compiling.

root@medprox1:/usr/local/src/netxtreme2-7.2.20# make
make -C bnx2/src KVER=2.6.32-16-pve PREFIX=
make[1]: Entering directory `/usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src'
make -C /lib/modules/2.6.32-16-pve/build SUBDIRS=/usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src modules
make[2]: Entering directory `/usr/src/linux-headers-2.6.32-16-pve'
CC [M] /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/bnx2.o
CC [M] /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/cnic.o
Building modules, stage 2.
MODPOST 2 modules
CC /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/bnx2.mod.o
LD [M] /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/bnx2.ko
CC /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/cnic.mod.o
LD [M] /usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src/cnic.ko
make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-16-pve'
make[1]: Leaving directory `/usr/local/src/netxtreme2-7.2.20/bnx2-2.72.13/src'
make -C bnx2x/src KVER=2.6.32-16-pve PREFIX=
make[1]: Entering directory `/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src'
make -C /lib/modules/2.6.32-16-pve/build M=`pwd` modules
make[2]: Entering directory `/usr/src/linux-headers-2.6.32-16-pve'
CC [M] /usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_main.o
In file included from /usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x.h:46,
from /usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_main.c:93:
/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_compat.h:828: error: static declaration of ‘usleep_range’ follows non-static declaration
include/linux/delay.h:48: note: previous declaration of ‘usleep_range’ was here
/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_compat.h:1543: error: redefinition of ‘skb_frag_page’
include/linux/skbuff.h:1637: note: previous definition of ‘skb_frag_page’ was here
/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_compat.h:1548: error: redefinition of ‘skb_frag_dma_map’
include/linux/skbuff.h:1754: note: previous definition of ‘skb_frag_dma_map’ was here
make[3]: *** [/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src/bnx2x_main.o] Error 1
make[2]: *** [_module_/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src] Error 2
make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-16-pve'
make[1]: *** [bnx2x.o] Error 2
make[1]: Leaving directory `/usr/local/src/netxtreme2-7.2.20/bnx2x-1.72.18/src'
make: *** [l2build] Error 2
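From what I can tell, those redefinition errors mean the driver's bnx2x_compat.h provides backported helpers (usleep_range, skb_frag_page, skb_frag_dma_map) that the pve 2.6.32 kernel headers already ship, since that kernel carries a lot of newer backports. A quick-and-dirty workaround would be to disable the duplicate definitions in bnx2x_compat.h; purely an illustration, I have not verified it against this exact source:

/* bnx2x_compat.h: the pve kernel headers already define skb_frag_page(),
 * so drop the compat copy (and repeat for the other conflicting helpers) */
#if 0
static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
        return frag->page;
}
#endif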
 
Directly from Broadcom:

http://www.broadcom.com/support/ethernet_nic/netxtremeii10.php

I have definitely never had a NIC issue like this. Typically they either work or they don't, and if anything they have performance issues.

I can't even get iperf to run, but I sure can ping and ssh.

root@medprox1:~# iperf -c 10.211.46.2
connect failed: Connection refused
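(Side note: "connect failed: Connection refused" just means nothing is listening on the far end; iperf has to be started in server mode on the peer first. Roughly:)

# on medprox2 (the peer), start a server:
iperf -s

# on medprox1, run the client; -r repeats the test in the reverse direction:
iperf -c 10.211.46.2 -t 30 -r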


root@medprox1:~# ping 10.211.46.2
PING 10.211.46.2 (10.211.46.2) 56(84) bytes of data.
64 bytes from 10.211.46.2: icmp_req=1 ttl=64 time=0.171 ms
64 bytes from 10.211.46.2: icmp_req=2 ttl=64 time=0.234 ms
^C
--- 10.211.46.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.171/0.202/0.234/0.034 ms


root@medprox1:~# ssh root@10.211.46.2
Linux medprox2 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64


The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Dec 7 12:25:29 2012 from 10.211.46.1
root@medprox2:~# exit
logout
Connection to 10.211.46.2 closed.
root@medprox1:~#
 
