Search results

  1. Kernel panic with Proxmox 4.1-13 + DRBD 9.0.0

    No. You would create the drbdmanage clusters; everything else you would do in the GUI. Proxmox creates a drbdmanage resource with a single volume for each individual virtual disk you create.
  2. Proxmox VE 4.1 Infiniband problem

    Attached is the GFP_NOIO patch I wrote for the mthca driver. I wrote it against the latest pve-kernel source, which uses the 4.4 kernel from ubuntu-xenial. I'll let it bake in my test cluster for a few days and report back on whether or not it resolves the issues.
  3. Kernel panic with Proxmox 4.1-13 + DRBD 9.0.0

    The main advantage of drbd9 with Proxmox is that each VM disk you create is an independent DRBD resource. You can still manually configure drbd9 just like drbd8 if you want. Drbdmanage is just a new tool that makes managing drbd resources easier (well, easier once they get all the bugs shaken out of...
  4. Proxmox VE 4.1 Infiniband problem

    The expense of replacing them is the only thing stopping me. If it's not a huge amount of money, I could likely pay someone to patch mthca.
  5. Proxmox VE 4.1 Infiniband problem

    I think I might be suffering from a different known bug that has not been fixed for my IB driver. Back in 2014 the mlx4 driver was updated to use GFP_NOIO for QP creation when using connected mode and the IPoIB driver was updated to request GFP_NOIO from the hardware drivers. This was to...
  6. Proxmox VE 4.1 Infiniband problem

    When I was using only CEPH I never had an issue; since setting up DRBD I have had nothing but network problems on the Infiniband. I'd be happy to test a patched kernel once it's available.
  7. Proxmox VE 4.1 Infiniband problem

    I've seen those bugs; the problem I am having does not result in just poor performance but in a complete network outage. It could be the same bug, but my symptoms seem to be a little different. I downgraded to the 4.2.8-37 kernel that Andrey reported as not having problems, but it made no difference for me.
  8. Proxmox VE 4.1 Infiniband problem

    I'll not argue with that; as much as I hate to say it, 4.x is not production ready yet. I had another incident of the IB network interface failing. Again I found processes that were making a connection from the Ethernet IP to another server's IPoIB IP address. Instead of DRBD it looked like it...
  9. Proxmox VE 4.1 Infiniband problem

    I think I discovered my issue. Some of the DRBD resources were listening on the Infiniband and others on the Ethernet. After getting everything listening on Infiniband, all seems to work just fine. Apparently I messed up when adding one of the nodes and forgot to specify the IP address of the IB...
  10. Proxmox VE 4.1 Infiniband problem

    I can confirm that this is a real problem. Here is what I know: 1. There are no kernel messages when the IPoIB network stops working. 2. If I ifdown then ifup the IPoIB interface, the IB network starts working again (see the shell sketch after this list). 3. It only seems to happen if the server is under heavy load from (lots of IO and/or...
  11. DRBD9 live migration problems

    I was surprised how much slower DRBD9 is compared to 8.x without tuning. I've only been using DRBD9 for about a week, and applying some of these settings did manage to cause drbdmanage to stop working temporarily due to some hung 'drbdadm adjust' processes. I had to reboot some nodes to get...
  12. DRBD9 live migration problems

    So it is perfectly acceptable to create multiple drbdmanage clusters within a single Proxmox cluster, provided one also sets up the proper node restrictions in storage.cfg? I did have a chance to look at the Proxmox code and it seems proper. The only thought I have is dd opening the device for...
  13. DRBD9 live migration problems

    Running in diskless mode could negatively impact read performance. An example might be a read-IO-heavy database VM; that's the sort of VM I would not want to accidentally live migrate to a diskless node. If I specify a DRBD storage limited to only two nodes of a 5-node DRBD cluster, will Proxmox...
  14. DRBD9 live migration problems

    Seems like there is a race condition between DRBD making the volume available and KVM trying to use it. I've not had time to look at the Proxmox code yet so I'm not really sure what Proxmox is trying to do. Is Proxmox asking DRBD to do anything or does it simply assume that the volume is there...
  15. DRBD9 live migration problems

    I've set up a 3-node DRBD cluster with server names vm1, vm2 and vm3. I created DRBD storage with replication set to 2 (drbd: drbd2, redundancy 2, content images,rootdir; see the storage.cfg sketch after this list). I created a DRBD disk for VM 110; the disk is created and is using servers VM1 and VM2...
  16. PVE 4.1, DRBD9 - multiple vgs/lvs support

    I believe that Proxmox simply passes the redundancy value to drbdmanage, so which nodes get used is up to drbdmanage and how it's configured. http://drbd.linbit.com/en/doc/users-guide-90/s-dm-new-volume The default is to use the nodes with the most free space.
  17. Backup from CEPH storage to NFS storage very slow

    As best I can tell the problem is with the kvm-qemu live backup code. The live backup reads very small amounts of data at a time, something like 64k I think. Then each object in CEPH is 4M if I remember correctly. So what happens is the backup code reads the same CEPH object 64 times, with each...
  18. PVE 4.1, DRBD9 - multiple vgs/lvs support

    I've just started using DRBD9 and Proxmox 4.1 this week. AFAIK drbdmanage does not support multiple storage pools. Each node can have a single storage plugin that points to a single pool and currently there are only three storage plugins...
  19. [SOLVED] DRBD9: how to configure multiple volumes?

    See: https://forum.proxmox.com/threads/drbd9-multiple-volume-groups.24019/ Also, I've not tested the idea and it's not for a novice, but this might work: set up DRBD9 according to https://pve.proxmox.com/wiki/DRBD9 but with two differences: 1. Put both SAS and SATA arrays in the same drbdpool...
  20. Best Practice for NUMA?

    I have a few dual-socket servers and want to know how best to configure VMs. Should VMs always have NUMA enabled and have CPU sockets set to the number of physical sockets (see the qm sketch after this list)? Some VMs only need a single socket and a single core; should these have NUMA enabled too? Some VMs only need two cores, is...
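
    A few sketches expanding on the results above. First, the interface bounce described in result 10: a minimal shell sketch of that workaround, assuming the IPoIB interface is named ib0 and is defined in /etc/network/interfaces (the interface name is an assumption, not taken from the thread).

        # Bounce the IPoIB interface when the IB network stops passing traffic,
        # as described in result 10. "ib0" is an assumed name; substitute the
        # node's actual IPoIB interface.
        ifdown ib0 && ifup ib0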
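
    Second, the storage definition quoted in result 15, laid out as it would appear in /etc/pve/storage.cfg. The nodes line is the optional per-node restriction mentioned in results 12 and 13; it is an assumption added for illustration and is not part of the configuration quoted in result 15.

        drbd: drbd2
                redundancy 2
                content images,rootdir
                nodes vm1,vm2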
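
    Finally, for the NUMA question in result 20: a hedged example of how the settings under discussion are applied to an existing VM with qm. The VMID, socket count, and core count are illustrative placeholders; whether NUMA should be enabled for small VMs is exactly what the thread asks, so this shows only the mechanics, not a recommendation.

        # Enable NUMA on VM 100 and give it two virtual sockets to mirror a
        # dual-socket host. VMID 100 and the core count are placeholder values.
        qm set 100 --numa 1 --sockets 2 --cores 2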
