No
You would create the drbdmanage clusters, then everything else you would do in the GUI.
Proxmox creates a drbdmanage resource with a single volume for each individual virtual disk you create.
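You can see this from drbdmanage itself; each virtual disk shows up as its own resource/volume pair:

    # list the resources and volumes drbdmanage is managing
    drbdmanage list-resources
    drbdmanage list-volumes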
Attached is the GFP_NOIO patch I wrote for the mthca driver.
I wrote it against the latest pve-kernel source, which uses the 4.4 kernel from ubuntu-xenial.
I'll let it bake in my test cluster for a few days and report back if it resolved the issues or not.
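For anyone wanting to test something similar, this is roughly how I build it; treat it as a sketch, since the repo layout and make targets change between pve-kernel versions:

    # fetch the Proxmox kernel source (assuming it is still pve-kernel.git on git.proxmox.com)
    git clone git://git.proxmox.com/git/pve-kernel.git
    cd pve-kernel
    # drop the patch into the source tree and hook it into the build (how depends on the Makefile)
    make
    # install the resulting kernel package on the test node
    dpkg -i pve-kernel-*.deb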
The main advantage to drbd9 with Proxmox is each VM disk you create is an independent DRBD resource.
You can still manually configure drbd9 just like drbd8 if you want. Drbdmanage is just a new tool that makes managing drbd resources easier (well, easier once they get all the bugs shaken out of...
I think I might be suffering from a different known bug that has not been fixed for my IB driver.
Back in 2014 the mlx4 driver was updated to use GFP_NOIO for QP creation when using connected mode and the IPoIB driver was updated to request GFP_NOIO from the hardware drivers.
This was to...
When I was using only CEPH I never had an issue; once I started setting up DRBD I've had nothing but network problems on the Infiniband.
I'd be happy to test a patched kernel once it's available.
I've seen those bugs, but the problem I am having does not result in just poor performance, it results in a complete network outage.
It could be the same bug, but my symptoms seem to be a little different.
I downgraded the kernel to 4.2.8-37, which Andrey reported as not having problems, but it made no difference for me.
I'll not argue with that; as much as I hate to say it, 4.x is not production ready yet.
I had another incident of the IB network interface failing. Again I found processes making a connection from the Ethernet IP to another server's IPoIB IP address. Instead of DRBD it looked like it...
I think I discovered my issue.
Some of the DRBD resources were listening on the Infiniband and others on the Ethernet.
After getting everything listening on Infiniband, all seems to work just fine.
Apparently I messed up when adding one of the nodes and forgot to specify the IP address of the IB...
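For anyone hitting the same thing, it is easy to check which address each node was registered with; the IP below is just a placeholder for your IPoIB address:

    # show the address drbdmanage has on file for each node
    drbdmanage list-nodes
    # when adding a node, give drbdmanage the IPoIB address explicitly
    drbdmanage add-node vm3 10.10.10.3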
I can confirm that this is a real problem.
Here is what I know:
1. There are no kernel messages when the IPoIB network stops working
2. If I ifdown then ifup the IPoIB interface, the IB network starts working again (see the snippet after this list)
3. Only seems to happen if the server is under heavy load from (lots of IO and/or...
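The workaround from point 2, assuming the IPoIB interface is named ib0 (yours may differ):

    # bounce the IPoIB interface to get the IB network passing traffic again
    ifdown ib0 && ifup ib0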
I was surprised how much slower DRBD9 is compared to 8.x without tuning.
I've only been using DRBD9 for about a week and applying some of these settings did manage to cause drbdmanage to stop working temporarily due to some hung 'drbdadm adjust' processes. I had to reboot some nodes to get...
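For reference, these are the kind of settings I was experimenting with in /etc/drbd.d/global_common.conf; the values are only examples pulled from tuning guides, not recommendations, so test them on one resource before pushing them cluster-wide:

    common {
        net {
            max-buffers     8000;
            max-epoch-size  8000;
            sndbuf-size     0;      # 0 lets DRBD size the send buffer itself
        }
        disk {
            al-extents      3389;
        }
    }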
So is it perfectly acceptable to create multiple drbdmanage clusters within a single Proxmox cluster, provided one also sets up the proper node restrictions in storage.cfg?
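Something like this in /etc/pve/storage.cfg is what I have in mind, assuming the drbd storage type honours the nodes option like the other storage types do (names are made up):

    drbd: drbd-a
            redundancy 2
            content images,rootdir
            nodes vm1,vm2,vm3

    drbd: drbd-b
            redundancy 2
            content images,rootdir
            nodes vm4,vm5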
I did have a chance to look at the Proxmox code and it seems proper.
The only thought I have is dd opening the device for...
Running in diskless mode could negatively impact read performance.
An example might be a read-IO-heavy database VM; that's the sort of VM I would not want to accidentally live migrate to a diskless node.
If I specify a DRBD storage limited to only two nodes of a 5-node DRBD cluster, will Proxmox...
Seems like there is a race condition between DRBD making the volume available and KVM trying to use it.
I've not had time to look at the Proxmox code yet so I'm not really sure what Proxmox is trying to do.
Is Proxmox asking DRBD to do anything or does it simply assume that the volume is there...
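As a crude way to see whether it really is a timing issue, I can wait for the resource to report a good disk state before starting the VM by hand; the resource name and VM ID below are just examples:

    # wait until the DRBD resource reports UpToDate, then start the VM
    until drbdadm status vm-110-disk-1 2>/dev/null | grep -q UpToDate; do
        sleep 1
    done
    qm start 110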
I've set up a 3-node DRBD cluster with server names vm1, vm2 and vm3.
I created DRBD storage with replication set to 2:
drbd: drbd2
        redundancy 2
        content images,rootdir
I created a DRBD disk for VM 110. The disk is created and is using servers vm1 and vm2...
I believe that Proxmox simply passes the redundancy value to drbdmanage. So what nodes get used is up to drbdmanage and how it's configured.
http://drbd.linbit.com/en/doc/users-guide-90/s-dm-new-volume
The default is to use the nodes with the most free space.
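Outside of Proxmox, the equivalent manual step from that page looks roughly like this (name and size are just examples):

    # create a 10GiB volume and let drbdmanage pick the 2 nodes to deploy it on
    drbdmanage new-volume testvol 10 --deploy 2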
As best I can tell the problem is with the kvm-qemu live backup code.
The live backup reads very small amounts of data at a time, something like 64k I think.
And each object in CEPH is 4M if I remember correctly.
So what happens is the backup code reads the same 4M CEPH object 64 times (4M / 64k = 64 reads per object), with each...
I've just started using DRBD9 and Proxmox 4.1 this week.
AFAIK drbdmanage does not support multiple storage pools.
Each node can have a single storage plugin that points to a single pool, and currently there are only three storage plugins...
See: https://forum.proxmox.com/threads/drbd9-multiple-volume-groups.24019/
Also, I've not tested the idea and it's not for a novice, but this might work:
Setup DRBD9 according to: https://pve.proxmox.com/wiki/DRBD9 but with two differences:
1. Put both SAS and SATA arrays in the same drbdpool...
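Roughly what I mean by point 1, with placeholder device names for the SAS and SATA arrays:

    # put both arrays into the single volume group drbdmanage uses
    pvcreate /dev/sdb /dev/sdc
    vgcreate drbdpool /dev/sdb /dev/sdc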
I have a few dual socket servers and want to know how best to configure VMs.
Should VMs always have NUMA enabled and have CPU sockets set to the number of physical sockets?
Some VMs only need a single socket and a single core; should these have NUMA enabled too?
Some VMs only need two cores, is...
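To make the question concrete, this is the sort of setting I mean (VM ID and numbers are just examples):

    # dual socket host: 2 virtual sockets, 4 cores each, NUMA enabled
    qm set 100 --numa 1 --sockets 2 --cores 4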