Not really, it never happened to me. Does it also happen if you try to uncomment other defaults, such as max-peers? (It won't affect cluster operation if you haven't previously changed the defaults, so it's safe to try.)
Obviously this will be overwritten at every drbdmanage upgrade, and then DRBD won't be able to operate on drbdpool because it won't find any LVM thinpool.
I did the substitution prior to cluster creation and then changed the global config with
drbdmanage modify-config
# and uncommenting the...
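To illustrate the kind of change being discussed, here is a rough sketch of what the relevant part of the drbdmanage server configuration might look like. The file path, section name, and plugin class name are assumptions based on common drbdmanage setups and may differ between versions; check your own installation.

```
# /etc/drbdmanaged.cfg -- path and option names are assumptions, verify on your version
[GLOBAL]
# Switch from the default thin-pool storage plugin to plain LVM.
# This line is normally shipped commented out with the default value.
storage-plugin = drbdmanage.storage.lvm.Lvm
```

Changing this via `drbdmanage modify-config` (rather than editing the file directly) should keep the setting in the cluster-wide configuration, which is the point of doing the substitution before upgrades can overwrite the local file.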
Since drbdmanage 0.94, and after manually switching to the LVM storage plugin, I haven't had any problems or crashes (related to DRBD, that is) on a 3-node DRBD9 cluster. Obviously I'm keeping an eye on it.
Hi,
I had a similar problem when trying to create a DRBD9 cluster while wrong information (both addresses and aliases) was present in /etc/hosts (Proxmox on top of stock Debian). In my case one node would be added but would refuse to connect.
I "solved" it by uninit everything DRDB9 after...