Kernel panic with Proxmox 4.1-13 + DRBD 9.0.0

I'm not sure if it works as I've not tried it, but I think that with DRBD9 and drbdmanage it can be done online.
There's limited info, but see:
man drbdmanage-resize-volume
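As a rough sketch of what that man page describes (the resource name, volume ID, and size below are placeholders, and the exact syntax may differ between drbdmanage versions, so check the installed man page first):

```shell
# Hypothetical example: grow volume 0 of the resource "vm-100-disk-1"
# to 20 GiB while the resource stays online.
drbdmanage resize-volume vm-100-disk-1 0 20GiB

# The guest still has to grow its own filesystem afterwards,
# e.g. resize2fs inside the VM for ext4.
```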

Do you mind if I ask how you are currently using DRBD9? If I'm gathering things correctly, you have created the DRBD9 volumes outside of all of the Proxmox integration, correct? If that's the case, what source did you use to install the DRBD9 packages? I'm trying desperately to reinstall my two-node cluster with Proxmox 4, and I'm scared to death that if I install DRBD9 from Proxmox's repositories, an update could change something that renders my installation unusable. They've made it obvious in other posts that they give no thought to those of us who only want a two-node setup.

In my old setup on Proxmox 3 I had multiple DRBD volumes, each spanning a pair of disks with one disk in each node. For me, manual intervention is OK as long as there is a path to quickly bring a VM back up when its node has a hardware failure. All of this worked great on my old installation, and everything I'm reading on Linbit's site indicates that DRBD9 still works perfectly in a two-node cluster, despite what Proxmox seems to suggest. So my hope is that I can install DRBD9 independently of all of the Proxmox stuff and just let Proxmox handle the storage as LVM volumes like it did in Proxmox 3... is this possible?
 
It worked great for about 3 weeks, then one of the nodes crashed with a kernel panic.

Hi nicorac.

Maybe the kernel panic is because "LVM thin provisioning" isn't stable (if that's what you are using), but I'm not sure about this.

Have you tested "LVM2" instead of "LVM thin provisioning"?
"LVM2" has been around for many years and is very mature (in contrast to "LVM thin provisioning").

It's also worth mentioning: DRBD9 itself seems to work just fine. The problems I've had have been with drbdmanage and known kernel bugs with Infiniband.

Hi e100

Please let me ask a few questions:
1- Are you using LVM thin provisioning with DRBD9?
2- Do you have DRBD9 in a production environment?
3- Do you have write-intensive workloads on your DRBD resource?
 
The main advantage to drbd9 with Proxmox is each VM disk you create is an independent DRBD resource.

You can still manually configure DRBD9 just like DRBD8 if you want. Drbdmanage is just a new tool that makes managing DRBD resources easier (well, easier once they get all the bugs shaken out of it).
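A manual DRBD8-style two-node setup might look roughly like this (hostnames, IPs, and backing devices below are placeholders, not values from this thread; `node-id` and `connection-mesh` are the DRBD9 additions to the familiar DRBD8 resource file):

```shell
# Hypothetical resource file for a plain two-node DRBD9 resource,
# managed without drbdmanage. Adapt names and addresses to your setup.
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    device      /dev/drbd0;
    disk        /dev/sdb1;
    meta-disk   internal;
    on nodeA { address 10.0.0.1:7789; node-id 0; }
    on nodeB { address 10.0.0.2:7789; node-id 1; }
    connection-mesh { hosts nodeA nodeB; }
}
EOF

# Run on both nodes:
drbdadm create-md r0
drbdadm up r0
# On one node only, to kick off the initial sync:
drbdadm primary --force r0
```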

A better solution, IMHO, would be to set up multiple drbdmanage clusters within your Proxmox cluster. For example, if you had four servers A, B, C and D, you could make A and B one drbdmanage cluster and C and D another. That way you can still take advantage of the Proxmox integration.
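The four-server idea above could be sketched like this (the hostnames and IPs are placeholders; `drbdmanage init` and `drbdmanage add-node` are the standard bootstrap commands, but verify the options against your installed version):

```shell
# Hypothetical sketch: two independent drbdmanage clusters
# inside one Proxmox cluster.

# On node A (first drbdmanage cluster: A + B):
drbdmanage init 10.0.0.1          # local node's replication IP
drbdmanage add-node nodeB 10.0.0.2

# On node C (second drbdmanage cluster: C + D):
drbdmanage init 10.0.0.3
drbdmanage add-node nodeD 10.0.0.4
```

Each pair then replicates only between its two members, while Proxmox still sees both as DRBD storage.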

Hi e100

Some comments:
a) Below is a copy of an answer Dietmar gave me:
"Each created resource has the size you specify for the virtual disk - that is how it already works."

b) I have no hands-on experience with PVE 4.x or DRBD 9.x.

c) Soon I will need to install some PVE nodes with DRBD for a production environment, so I need to settle on a strategy.

d) The PVE wiki on DRBD9 (https://pve.proxmox.com/wiki/DRBD9) does not fit my needs, because I believe that:
1- The DRBD resource takes up the entire hard disk (an inconvenience, because of how long verification of the DRBD resources would take).
2- I would like each DRBD resource to be only the size of the virtual disk.

So please let me ask some questions:

What I would like to know:
1) The three questions from my previous post (https://forum.proxmox.com/threads/k...ox-4-1-13-drbd-9-0-0.26194/page-2#post-135504)

2- With DRBD 8.x, I'm used to creating two partitions, one for each PVE node (in case of "oos" - out of sync - problems, to allow an easy recovery without shutting anything down). But if with DRBD 9.x each resource has the same size as the virtual disk, that practice no longer makes sense, right? Or am I missing something?

3- Can you tell me the steps I need to follow to achieve my objectives?
Notes about this question:
a) So far, I don't know whether LVM should be on top of DRBD, or vice versa.
b) So far, as part of my configuration work, I don't know when I should work from the CLI and when from the PVE GUI.
c) My goal is to have several PVE nodes, with only two pairs configured with DRBD9 in the style of DRBD 8.x, i.e., each node replicating only to its peer, with NIC-to-NIC network connections.

4- For performance reasons, is it possible to avoid LVM thin provisioning in the configuration?

5- For performance reasons (disk access speed), with DRBD9 is it possible for each PVE node to use a different hard disk for all of its virtual disks? (The PVE DRBD9 wiki says "DRBD will search for the LVM Volume Group drbdpool", and I'm not sure whether this will be a problem, since I'll be using two DRBD-enabled hard drives for each pair of PVE nodes.)
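On question 3a, the two layerings can be sketched as follows (device and VG names are placeholders; the first half is the classic Proxmox 3 / DRBD 8.x approach, the second is what the drbdmanage-based wiki setup expects):

```shell
# Classic DRBD 8.x-style layering: LVM on TOP of DRBD.
# DRBD replicates a raw partition; the replicated /dev/drbd0
# device then becomes an LVM physical volume.
pvcreate /dev/drbd0
vgcreate drbd0-vg /dev/drbd0
# Proxmox is then pointed at "drbd0-vg" as shared LVM storage.

# drbdmanage reverses this: LVM sits UNDER DRBD, and drbdmanage
# carves each resource out of a local volume group named "drbdpool".
pvcreate /dev/sdb1
vgcreate drbdpool /dev/sdb1
```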

I hope so many questions are not a hassle for you.

Best regards

Note: This post has been re-edited.
 
