DRBD Diskless after first reboot

fwf

Jan 4, 2018
Hi

Created two clean nodes.

Configured DRBD (https://pve.proxmox.com/wiki/DRBD).

It worked! Live migration worked!

Rebooted the first node:

Code:
version: 8.4.7 (api:1/proto:86-101)
srcversion: 2DCC561E7F1E3D63526E90D
 0: cs:Connected ro:Primary/Primary ds:UpToDate/Diskless C r-----
    ns:4356 nr:20 dw:52 dr:25337 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Rebooted the second node:

Code:
version: 8.4.7 (api:1/proto:86-101)
srcversion: 2DCC561E7F1E3D63526E90D
 0: cs:Connected ro:Secondary/Primary ds:Diskless/Diskless C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0



Diskless only appears after a VM is created. Before creating a VM, reboots do not break DRBD.
 
Hi,

this wiki article is marked as outdated:
"This article is about the previous Proxmox VE 3.x releases"

DRBD is not supported and also not tested with current Proxmox VE.
 
I've got DRBD set up on some 5.x servers using a configuration similar to the old wiki article.

@fwf DRBD will end up diskless on reboot when it cannot find the disk you specified in the configuration.
How did you reference the disks in your DRBD config?

I've found that using /dev/sdX is a bad idea because the letters can change between reboots; sda might become sdb.
Instead, use the stable symlinks in /dev/disk/by-id/.
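
For example, a resource stanza can point the disk at the by-id symlink. The hostname, IP, and device ID below are made up; run ls -l /dev/disk/by-id/ to find the right symlink for your drive:
Code:
resource r0 {
    on node1 {
        device    /dev/drbd0;
        # stable path that survives sdX re-lettering (example ID only)
        disk      /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL123-part1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
}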

The filter in /etc/lvm/lvm.conf can be an issue too, especially if you use LVM inside the VMs.
This is what I am using on 5.x:
Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/.*/vm-[0-9].*-disk-.*|", "r|/dev/disk/|", "r|/dev/block/|", "r|/dev/drbd[0-9]*-|", "r|/dev/dm-|", "r|/dev/mapper/|", "a/.*/" ]
WARNING: Do not blindly copy/paste that filter; it might break your system or make it unbootable. Be sure you understand what it is doing, and test it first.

The lvm.conf filter is especially important if you use LVM on top of DRBD.
With an improper LVM filter you can end up reading/writing the backing physical disk instead of the DRBD device.
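
A quick way to see which device LVM actually chose as the physical volume for each volume group (device names in the comment are just examples):
Code:
# With LVM on top of DRBD you want the PV to show up as /dev/drbd0 here,
# not the backing partition such as /dev/sdb1.
pvs -o pv_name,vg_name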

Also make sure the lvm.conf has:
Code:
use_lvmetad = 0
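
With lvmetad disabled in the config, you may also want to stop and disable the daemon itself so it cannot serve stale metadata (unit names as shipped on Debian 9; verify them on your system):
Code:
systemctl disable --now lvm2-lvmetad.service lvm2-lvmetad.socket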

I've had problems with the drbd startup order causing various issues; it's easy to fix.
Create this file: /etc/insserv/overrides/drbd
Code:
### BEGIN INIT INFO
# Provides: drbd
# Required-Start: $local_fs $network $syslog
# Required-Stop:  $local_fs $network $syslog
# Should-Start:   sshd multipathd
# Should-Stop:    sshd multipathd
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# X-Start-Before: heartbeat corosync pve-cluster
# X-Stop-After:   heartbeat corosync pve-cluster
# Short-Description:    Control drbd resources.
### END INIT INFO

Then run: update-rc.d drbd defaults
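
To confirm the ordering took effect, you can check that the drbd start symlink now sorts before pve-cluster in the runlevel directory (sequence numbers will differ per system):
Code:
ls /etc/rc2.d/ | grep -E 'drbd|corosync|pve-cluster'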
 
Hi

I have a similar issue with the old wiki configuration and DRBD.
Everything was running well; the issue also appeared after a reboot.
When I run lsblk I get:

Code:
sda                              8:0    0   100G  0 disk
├─sda1                           8:1    0     1M  0 part
├─sda2                           8:2    0   256M  0 part
└─sda3                           8:3    0  99.8G  0 part
  ├─pve-swap                   253:1    0     4G  0 lvm  [SWAP]
  ├─pve-root                   253:2    0  24.8G  0 lvm  /
  ├─pve-data_tmeta             253:3    0    60M  0 lvm
  │ └─pve-data                 253:5    0  58.6G  0 lvm
  └─pve-data_tdata             253:4    0  58.6G  0 lvm
    └─pve-data                 253:5    0  58.6G  0 lvm
sdb                              8:16   0    50G  0 disk
└─sdb1                           8:17   0    50G  0 part
  └─drbd0vg-vm--100--disk--1   253:0    0    16G  0 lvm
sdc                              8:32   0    50G  0 disk
└─sdc1                           8:33   0    50G  0 part
  └─drbd1                      147:1    0    50G  0 disk
sr0                             11:0    1  1024M  0 rom

I don't know why, but it seems that when I created a VM, the LVM device under sdb1 changed from "drbd0" to "drbd0vg-vm--100--disk--1".
When I do a pvscan it only finds drbd1; I did not create any VM there, and in lsblk its name is unchanged.

Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/.*/vm-[0-9].*-disk-.*|", "r|/dev/disk/|", "r|/dev/block/|", "r|/dev/drbd[0-9]*-|", "r|/dev/dm-|", "r|/dev/mapper/|", "a/.*/" ]
I put your suggested LVM filter into lvm.conf without any changes.

After that, pvscan gives me:

Code:
PV /dev/sdb1    VG drbd0vg   lvm2 [50.00 GiB / 34.00 GiB free]
PV /dev/sda3    VG pve       lvm2 [99.75 GiB / 12.25 GiB free]
PV /dev/drbd1   VG drbd1vg   lvm2 [50.00 GiB / 50.00 GiB free]
Total: 3 [199.74 GiB] / in use: 3 [199.74 GiB] / in no VG: 0 [0 ]

It now finds my volume groups again, but it uses /dev/sdb1 as the PV for my r0.res instead of /dev/drbd0.
How do I get back to drbd0?

I hope someone can help me with this problem.
If more information is needed, just ask.

Best regards

sycoriorz
 
Update:
I found the cause of the problem described above.
It was the LVM filter.
I used the following filter, and now everything works well again:
Code:
filter = [ "a|^/dev/drbd0|", "a|^/dev/drbd1|", "a|^/dev/sda3|", "r/.*/" ]

But I have a further problem: DRBD doesn't start automatically.
Every time I first have to run /etc/init.d/drbd start;
then it works.

I tried this from the post above:
Code:
### BEGIN INIT INFO
# Provides: drbd
# Required-Start: $local_fs $network $syslog
# Required-Stop: $local_fs $network $syslog
# Should-Start: sshd multipathd
# Should-Stop: sshd multipathd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# X-Start-Before: heartbeat corosync pve-cluster
# X-Stop-After: heartbeat corosync pve-cluster
# Short-Description: Control drbd resources.
### END INIT INFO
Then run: update-rc.d drbd defaults

But it does not work in my case.

What info do you need to help me?

regards

sycoriorz
 
