Recent content by cliffpercent

  1. Long-time offline of a node

    I have a node with a hardware failure; its resurrection has been deferred repeatedly (it will soon be a year). Its HA votes are set to 0, and quorum and capacity are OK with the remaining odd number of nodes. The rest of the nodes have been receiving updates as usual. Are there any gotchas for keeping a node...
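For reference, the majority rule behind the "quorum is OK" remark can be sketched as below (the vote counts are illustrative, not from the post; corosync requires a strict majority of the configured votes):

```python
def quorum_votes(total_votes: int) -> int:
    """Minimum number of votes corosync needs for quorum: a strict majority."""
    return total_votes // 2 + 1

# With the failed node's vote set to 0, e.g. 5 voting nodes remain:
print(quorum_votes(5))  # 3
```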
  2. HA non-strict negative resource affinity

    Distributed anti-affinity: https://bugzilla.proxmox.com/show_bug.cgi?id=7115
  3. HA non-strict negative resource affinity

    I can no longer reproduce the manual migration request being ignored; it now clearly states a conflict with strict negative affinity, as you detailed. Maybe it was fixed by a PVE update, or the fix propagated to the browser UI. I think it could be clearer: - The UI affinity rule builder has 'keep...
  4. HA non-strict negative resource affinity

    HA node affinity offers the strict option to specify whether a resource requires the condition or (when not strict) merely prefers it. The new HA resource affinities, especially the negative ones, should offer it as well. With an example load of 3 VMs running the same application, there is preference (not...
  5. Assigning cores to CEPH

    The PVE wiki states that you should assign CPU cores to Ceph: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster It doesn't detail how this (excluding 25% of the cores from VMs) should be done. How do you run your systems?
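For what it's worth, one possible way to do that carve-out (a sketch under assumed values: a 16-core node with cores 12-15 reserved for Ceph, and an illustrative VM ID; the wiki itself does not prescribe this) is a systemd drop-in pinning the OSD units, combined with the VM `affinity` option:

```shell
# Assumed layout: 16-core node, cores 12-15 (25%) reserved for Ceph OSDs.

# Pin all OSD instances via a systemd drop-in:
mkdir -p /etc/systemd/system/ceph-osd@.service.d
cat > /etc/systemd/system/ceph-osd@.service.d/cpuaffinity.conf <<'EOF'
[Service]
CPUAffinity=12-15
EOF
systemctl daemon-reload
# Then restart the OSDs one at a time, e.g.:
# systemctl restart ceph-osd@0

# Keep guests off the reserved cores with the VM CPU affinity option
# (VM ID 100 is illustrative):
# qm set 100 --affinity 0-11
```

This only pins the OSD daemons; monitors, MDS, and other Ceph services would need their own drop-ins if you want them covered too.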
  6. Proxmox VE 8.3 released!

    It seems like the upgrade broke OIDC login. GUI: `OpenID redirect failed. Connection error - server offline?`; the browser console gives 596 for `https://<instance>/api2/extjs/access/openid/auth-url`. There are no logs on the OIDC provider side, and provider reachability looks fine from the terminal. No (journal)...
  7. Live migrations failing seemingly randomly over many months and versions. Device or resource busy.

    The usual failure looks like:
    ...
    2021-08-03 15:24:47 migration active, transferred 32.6 GiB of 32.0 GiB VM-state, 120.2 MiB/s
    2021-08-03 15:24:48 migration active, transferred 32.7 GiB of 32.0 GiB VM-state, 117.0 MiB/s
    2021-08-03 15:24:49 migration active, transferred 32.8 GiB of 32.0 GiB...
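The "transferred 32.8 GiB of 32.0 GiB" lines mean pre-copy migration is re-sending pages the guest dirtied during earlier passes. A rough convergence model (the dirty-rate numbers below are illustrative, not derived from the logs):

```python
def estimated_transfer_gib(ram_gib: float, dirty_mib_s: float, bw_mib_s: float) -> float:
    """Rough total data a pre-copy live migration sends before converging.

    Each pass must re-send the pages dirtied during the previous pass, so
    the pass sizes form a geometric series with ratio r = dirty_rate / bandwidth.
    For r >= 1 the migration never catches up without throttling the guest.
    """
    r = dirty_mib_s / bw_mib_s
    if r >= 1:
        return float("inf")
    return ram_gib / (1 - r)

# 32 GiB guest on a ~120 MiB/s link: even a modest 60 MiB/s dirty rate
# doubles the data that has to move.
print(estimated_transfer_gib(32.0, 60.0, 120.0))  # 64.0
```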
  8. WARNING: unable to connect to VM 100 socket - timeout after 31 retries

    Necro, here as well. 3-node cluster, all nodes on the same versions. VM storage is an NFS mount from another server, backed by ZFS. The issue occurred only on node 1, and only an elevated stop worked. When restarting, a VM hit a state of no reaction (CPU and mem usage stats load longer than...