Search results for query: affinity

  1. G

    Proxmox Virtual Environment 9.0 released!

    Amazing news... Just ahead of the Debian 13 release... Go go Proxmox...
  2. t.lamprecht

    Proxmox Virtual Environment 9.0 released!

    ...storage. This includes iSCSI and Fibre Channel-attached SANs.
    - High-Availability (HA) rules for resource-to-node and resource-to-resource affinity
    - Fabrics for the Software-Defined Networking (SDN) stack
    - Modernized mobile web interface written in the Rust programming language using the Yew web...
  3. D

    VM Migration fails with "only root can set 'affinity' config"

    After the Proxmox upgrade all of my changes were overwritten, so now I know that a `pvedaemon` restart is required.
  4. S

    Ceph PG quantity - calculator vs autoscaler vs docs

    ...be replaced eventually. It's probably easier to leave them, otherwise I'd want more storage in those nodes anyway. The HDDs have primary affinity set off/zero. Unless wear is dramatically higher in Ceph than Virtuozzo I'm not worried about that based on the last 10 years of usage and the...
  5. leesteken

    LXC CPU pinning

    ...give the VM 12 cores and set the CPU Limit to 2? Alternatively, you can give it 12 cores but have only 2 enabled (and 10 disabled) by setting VCPUs to 2. Either way you can set the affinity for the 12 cores. I don't really see a use case for this. Can you explain what you want to achieve?
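    For reference, both variants map to standard VM CPU options; a minimal sketch using `qm set` (VM ID 100 is a placeholder):

    ```
    # Give the VM 12 cores but cap total CPU time at the equivalent of 2 cores:
    qm set 100 --cores 12 --cpulimit 2

    # Or expose 12 cores with only 2 of them enabled:
    qm set 100 --cores 12 --vcpus 2
    ```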
  6. X

    LXC CPU pinning

    Sorry, I think you misunderstood me or I expressed it incorrectly. I know that there is exactly this setting for cpu-affinity in KVM, but it works like this:
    - KVM has 2 cores
    - affinity is set to 0-11 => KVM selects 2 cores out of the set 0-11

    lxc.cgroup2.cpuset.cpus works this way:
    - lxc...
  7. leesteken

    LXC CPU pinning

    You can set this for VMs too, but in a different way or via the Proxmox web GUI. See the affinity section here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_cpu_resource_limits
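    The linked section documents the `affinity` VM option; a minimal sketch of setting it from the CLI (VM ID 100 is a placeholder):

    ```
    # Pin all of the VM's threads to host cores 0-11.
    # Note: per the migration thread in these results, only root@pam
    # may set 'affinity'.
    qm set 100 --affinity 0-11
    ```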
  8. X

    LXC CPU pinning

    Is there a way to set this as "affinity" (like for KVM/QEMU)? So if I set cores: 2 and also set lxc.cgroup2.cpuset.cpus: 0-11 (because of the big.LITTLE core architecture), the LXC container sees all 12 cores instead of only two (from the pool of 0-11 / 1-12).
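    A minimal sketch of that container config, assuming container ID 101; the raw LXC key goes into `/etc/pve/lxc/101.conf`:

    ```
    # /etc/pve/lxc/101.conf
    cores: 2
    # Raw LXC override: restrict the container's cpuset to host cores 0-11.
    # As described above, the container then sees all 12 cores, rather than
    # 2 cores chosen from that pool as QEMU's 'affinity' would behave.
    lxc.cgroup2.cpuset.cpus: 0-11
    ```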
  9. S

    Ceph on 3-node full mesh can not add OSD's

    ...mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
    2025-06-24T15:55:21.829+0200 713c7802d6c0 1 osd.1 216 set_numa_affinity storage numa node 0
    2025-06-24T15:55:21.829+0200 713c7802d6c0 -1 osd.1 216 set_numa_affinity unable to identify public interface '' numa node: (2) No...
  10. Q

    Immich High CPU and Swap Usage, Cannot Reach Web UI

    ...of 1.8 in my Dockerfile, and finding an odd OpenVINO issue where I had to add a manual ENV variable to avoid an error being thrown for pset affinity. But I'm still losing web UI access after a minute, and CPU and swap go right back up to 100% with plenty of RAM left over. None of the...
  11. C

    FSID CLUSTER CEPH

    ...0
    osd fsid              a2b1d2a1-5a30-4675-83ef-1bdcd9a98f72
    osd id                0
    osdspec affinity
    type                  block
    vdo                   0
    devices               /dev/sdh
    ====== osd.14 ======
    [block]...
  12. S

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    > for read, it's really Russian roulette

    For read, by default it's random, but affinity is set to 0 on the HDDs per my post above, so now all reads come from SSDs. I'm not saying you're wrong about the rest, just that I haven't seen this thread's warning in a few weeks (both before and after that change...
  13. spirit

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    You have mixed SSD and HDD in the same pool???
  14. S

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    One possibly related note, especially for those with multiple OSD classes: we set our few remaining HDDs to primary-affinity 0, so the primary read would always come from an SSD.
    View: `ceph osd tree`
    Set: `ceph osd primary-affinity osd.12 0`
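    A minimal sketch of applying the same change to every HDD-class OSD at once, assuming device classes are assigned (`osd.12` above was a single example):

    ```
    # Inspect OSDs, their device classes and weights:
    ceph osd tree

    # Set primary affinity to 0 on each HDD-class OSD so that primary
    # reads are served by SSD OSDs instead:
    for id in $(ceph osd crush class ls-osd hdd); do
        ceph osd primary-affinity "osd.${id}" 0
    done
    ```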
  15. D

    VM Migration fails with "only root can set 'affinity' config"

    ...Just edit the file /usr/share/perl5/PVE/API2/Qemu.pm on your node and add `'affinity' => 1,` in the `my $cpuoptions` section. I don't know which PVE daemon should be restarted after this (not only pveproxy, that's all I know), but this worked for me after a reboot. I also know that the same issue...
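    Taken together with the `pvedaemon` note from the migration thread above, the workaround presumably boils down to the following (a sketch, not an officially supported change; it will be overwritten on upgrade):

    ```
    # 1. Add the line  'affinity' => 1,  to the 'my $cpuoptions' hash in
    #    /usr/share/perl5/PVE/API2/Qemu.pm (as described above).
    # 2. Restart the daemons; per the post above, pvedaemon is the one
    #    that matters (pveproxy alone is not enough):
    systemctl restart pvedaemon pveproxy
    ```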
  16. U

    VM Migration fails with "only root can set 'affinity' config"

    I ran into a somewhat similar issue when migrating a VM, albeit not with the affinity setting but with a very special set of low-level QEMU args. I guess it boils down to the same privilege issue?
    2025-05-29 11:00:32 ERROR: error - tunnel command...
  17. G

    Any update on Drs solution!

    Not yet, but maybe the ProxLB project is worth a look in the meantime: https://github.com/gyptazy/ProxLB It also supports affinity and anti-affinity rules, ignore options at guest and node level, and several other features.
  18. A

    IO load interference between different storage pools

    ...see two potential workarounds. Throttle the HDD VMs by limiting IOPS and bandwidth to a level that prevents noticeable IO delay. Or use CPU affinity to isolate SSD-only and HDD-using VMs, hoping that separating their CPU resources reduces contention. That said, I’m not happy with either...
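    Both workarounds map to standard VM options; a minimal sketch with placeholder VM IDs, disk specs and limit values:

    ```
    # Throttle an HDD-backed VM's disk (the volume spec and limits are
    # illustrative; the spec must match the existing disk):
    qm set 101 --scsi0 local-lvm:vm-101-disk-0,iops_rd=200,iops_wr=200,mbps_rd=50,mbps_wr=50

    # Or pin SSD-only and HDD-using VMs to disjoint host cores:
    qm set 102 --affinity 0-7     # SSD-only VM
    qm set 101 --affinity 8-15    # HDD-using VM
    ```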
  19. L

    INTEL ALDER LAKE I5-12500 / UHD 770 passthrough - the dreaded code 43 error

    affinity: 14-17
    agent: 1
    args: -set device.hostpci0.addr=02.0 -set device.hostpci0.x-igd-gms=0x2 -set device.hostpci0.x-igd-opregion=on
    bios: ovmf
    boot: order=scsi0;ide0
    cores: 4
    cpu: host
    efidisk0: local-zfs:vm-101-disk-0,efitype=4m,size=1M
    hostpci0: 0000:00:02,romfile=gen12_igd.rom
    hostpci1...