Search results for query: affinity

  1. dakralex

    HA non-strict negative resource affinity

    Right, that could definitely be improved by using "Keep Together (positive)" and "Keep Separate (negative)" in the web interface or using only the "positive" and "negative" names there too. I'm not sure about including the rule name/description here, because the former isn't shown in the web...
  2. dakralex

    HA affinity problem (?)

    From the old resources.cfg from Nov 12 it seems that the HA groups were never fully migrated. Was the ` at the end an artifact of embedding it as code in the forum, or was that part of the file? Either way, great to hear that your problem has been solved!
  3. F

    HA affinity problem (?)

    Daniel, I removed the "resources.cfg" and reconfigured HA (no rules change). Now there is no affinity problem; the VMs with affinity "positive" are on the same node, pve3. Thank you for your help/time. Best regards. Francis
  4. F

    HA affinity problem (?)

    Hello Daniel, Normally the file resources.cfg is updated (Nov 12)? The file resources.cfg is an old file??? With the 9.x I do not have the pve*, server1 and server2 groups. The package "pve-ha-manager" was reinstalled on all nodes, same problem and no resources.cfg updates for all nodes same...
  5. F

    HA affinity problem (?)

    Daniel, the command "ha status" after the test, sorry.
  6. dakralex

    HA affinity problem (?)

    ...next-fit fashion with the basic scheduler), and as vm:110 will follow suit with vm:104 to pve3 as these are in a positive resource affinity rule. As the node affinity rule is non-strict, it will fall back to {pve2, pve3} as the possible nodes for all three. If it were strict, all HA resources...
  7. F

    HA affinity problem (?)

    Hi Marcus, Probably you have the "chance" that vm101+102 migrate to node3??? Is there a way to debug HA "affinity"? Best regards. Francis
  8. S

    Anyone managed to passthrough the onboard audio of a WRX 80 Creator to a windows vm?

    qm config 107 affinity: 0-7,16-23 agent: 1 args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off' balloon: 0 bios: ovmf boot: order=scsi0;net0 cores: 16 cpu: EPYC-Milan-v2,hidden=1 cpuunits: 2000 efidisk0: b-zssd-disks:vm-107-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M...
  9. W

    HA affinity problem (?)

    Hi Francis, it is still working as expected with 3 guests here on my side. I have a node affinity rule "node1" for ids 100, 101, 102, and 2 of them are in a positive affinity rule, ids 101,102. The config looks like yours but with different ids and hostnames. All 3 guests running on node1 node1 maintenance...
  10. F

    HA affinity problem (?)

    Hello Marcus, Thank you, I have this. Best regards. Francis
  11. W

    HA affinity problem (?)

    ...are migrated together to node2 when node1 is set to maintenance. Here are my rules: root@node1:~# cat /etc/pve/ha/rules.cfg resource-affinity: ha-rule-91edaaa7-807b affinity positive resources vm:100,vm:102 node-affinity: ha-rule-e9b7994a-ecf1 nodes node1...
  12. F

    HA affinity problem (?)

    Hello, I have a problem with the HA affinity with PVE 9.1.2: I have two VMs with the resource affinity "Keep Together" and the node affinity "pve1". When I put "pve1" in maintenance, one VM migrates to "pve2" and the other to "pve3", not on the same node. Best regards. Francis [see the rules.cfg sketch after the results list]
  13. M

    if reboot is triggered pve node goes away too fast before ha migration is finished

    ...state to "started" and the VM starts. To avoid the problem, this preparation before host reboot does the trick: Change "Datacenter -> HA -> Affinity rules" to force the migration of the VMs to another host. When migration has completed, shutdown/restart of the host works fine. I hope this...
  14. B

    pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    I manually assigned my LXCs to certain CCDs. Gemini wrote this code. (Have a 4 CCD TR now, upgraded from when I originally posted this)
  15. S

    How to fail a looping HA VM migration?

    For future reference, probably better/easier to enable maintenance mode: ha-manager crm-command node-maintenance enable pve1 (wait, then update+reboot) ha-manager crm-command node-maintenance disable pve1
  16. M

    Problem with GPU Passthrough (AMD RX 9060 XT)

    ...map it, I thought that maybe the IOMMU stuff was interfering with something, no idea that is happening, here is my VM configuration file: affinity: 0-7,16-23 agent: 1 balloon: 0 bios: ovmf boot: order=scsi0 cores: 16 cpu: host efidisk0...
  17. M

    [SOLVED] Broken CEPH after upgrade to PROXMOX 9.1.2

    ...0 osd fsid c908f817-89da-4f73-b58e-25a03b8a064b osd id 0 osdspec affinity type block vdo 0 devices /dev/nvme0n1 ceph-volume lvm activate --all Running...
  18. S

    Proper OSD replacement procedure

    Also the Weight is reset.
  19. S

    Proper OSD replacement procedure

    ...number starting at 0. I haven't changed the CRUSH map so haven't been concerned about change retention. I did check that "primary affinity" is (by default) enabled for the new OSD. I think the question is, does Proxmox use ceph osd destroy or ceph osd purge when doing a Destroy? Without...
  20. aj@root

    How to fail a looping HA VM migration?

    ...I'm not certain the vmbr0 networking came up as expected, nor am I certain that it could be accessed from the webui) I switched the affinity of VMs to pve2 (in preparation for updating pve1), and they began to migrate. The first 4 appeared on pve2 as if everything was going fine. However...
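
A minimal sketch of the HA rules.cfg format that results 11 and 12 revolve around, assuming illustrative rule IDs and reusing the VM/node names mentioned there; a real cluster generates its own rule identifiers (like the ones shown in result 11), and the resource list of the node-affinity rule is assumed here:

    resource-affinity: keep-together-example
            affinity positive
            resources vm:100,vm:102

    node-affinity: prefer-node1-example
            nodes node1
            resources vm:100,vm:101,vm:102

If the node-affinity rule is non-strict (as described in result 6), the HA manager prefers node1 but may fall back to other nodes, while the positive resource-affinity rule keeps vm:100 and vm:102 on the same node wherever they land.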