Search results

  1. [TUTORIAL] Fix always high CPU frequency in proxmox host.

    @spirit But how could frequency increase if we disable P-states and pin the frequency of all cores to the maximum within the given TDP?
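    A rough sketch of what "pinning all cores to maximum" typically looks like via the generic cpufreq sysfs interface; this is an assumption about the tutorial's approach, and the exact knobs depend on whether amd-pstate, intel_pstate or acpi-cpufreq is driving the CPU:

      # keep every core in the performance governor (cpufreq sysfs assumed present)
      echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      # optionally raise the minimum frequency to the hardware maximum
      cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
      echo <max_khz_from_above> | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq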
  2. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    I would say that KSM is out of scope. I'm facing this issue even with 1 VM per node (2-socket board) with almost all cores assigned to the VM. On such nodes I disable KSM as a service (it's useless with such a setup).
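    For reference, disabling KSM as a service on a PVE node usually means the ksmtuned daemon; a minimal sketch, assuming the stock packaging (not necessarily the poster's exact commands):

      # stop and disable the KSM tuning daemon
      systemctl disable --now ksmtuned
      # turn off page merging itself
      echo 0 > /sys/kernel/mm/ksm/run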
  3. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    The difference is that ICMP echo reply spikes become packet drops and micro freezes become RDS session disconnects/reconnects. In other words, with mitigations=on the problem is more pronounced.
  4. [TUTORIAL] Fix always high CPU frequency in proxmox host.

    Many thanks for the clarification. Is there anything similar for Intel?
  5. [TUTORIAL] Fix always high CPU frequency in proxmox host.

    @spirit Thanks for sharing! Have you performed any tests, and why did you decide to disable pstate and fix the frequency at max (compared to relying on power management and frequency boost)?
  6. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Correct. Some data from that node below.
    mitigations=on, numa_balancing=1, KSM active, RDP disconnects / ICMP packet drops
    root@pve-node-04348:~# numactl -H
    available: 2 nodes (0-1)
    node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53...
  7. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    I was able to rerun the test with mitigations=on and can confirm it has a huge impact. With mitigations=on I see ICMP packet loss and RDP disconnects rather than ICMP reply time increases and micro freezes. However, disabling NUMA balancing helps even with mitigations=on.
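    Disabling automatic NUMA balancing can be done at runtime and then made persistent; a minimal sketch, assuming the kernel.numa_balancing sysctl used later in the thread (the drop-in file name is just an example):

      # runtime only, lost on reboot
      echo 0 > /proc/sys/kernel/numa_balancing
      # persistent via a sysctl drop-in
      echo 'kernel.numa_balancing = 0' > /etc/sysctl.d/99-numa-balancing.conf
      sysctl --system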
  8. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Unfortunately, there is no difference with the 6.5 kernel, just tested. Still staying with mitigations=off and numa_balancing disabled. P.S. I will try to rerun the latest tests with mitigations enabled (but it's going to be very painful if it gets worse).
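    For context, mitigations=off/on is a kernel command-line parameter; a minimal sketch of toggling it on a GRUB-booted PVE node (ZFS/systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

      # in /etc/default/grub, e.g.:
      GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
      # then apply and reboot
      update-grub
      reboot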
  9. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Same node, same load, but numa_balancing disabled.
    mitigations=off, numa_balancing=0, KSM active, no RDP freezes
    root@pve-node-03486:~# echo 0 > /proc/sys/kernel/numa_balancing
    root@pve-node-03486:~# ./strace.sh 901
    strace: Process 2896 attached
    strace: Process 2896 detached
    % time seconds...
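    The strace.sh helper itself is not shown in these snippets; the "% time / seconds / usecs/call" columns match strace's -c summary, so a hypothetical equivalent (the 10-second window is an assumption, and the real script may first resolve a VMID to a PID) could be:

      #!/bin/sh
      # attach to the given PID, count syscalls for a short window, then print the -c summary
      timeout 10 strace -c -f -p "$1"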
  10. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    mitigations=off, numa_balancing=1, KSM active, RDP freezes
    root@pve-node-03486:~# cat /proc/sys/kernel/numa_balancing
    1
    root@pve-node-03486:~# ./strace.sh 901
    strace: Process 2896 attached
    strace: Process 2896 detached
    % time seconds usecs/call calls errors syscall ------...
  11. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Here we go. Node:
    CPU(s): 48 x Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (2 Sockets)
    Kernel Version: Linux 6.5.11-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-6 (2023-11-29T08:32Z)
    RAM: 8*32 = 256 GB DDR3 (distributed equally, 4*32 per socket)
    PVE Manager Version: pve-manager/8.1.3/b46aac3b42da5d15 10...
  12. [SOLVED] PVE on SATADOM

    Totally agreed. SATADOMs on Supermicro are a nightmare. Try to avoid them.
  13. 8.1 Update Hung/Failed

    This didn't help. What I have done:
    1. Fixed a line in code (removed the pvescheduler restart), thanks to the advice from the previous page
    2. Fixed the corosync.conf desynchronization (by copying the new version of corosync.conf to the buggy node)
    3. Reconfigured with dpkg
    4. Restarted daemons or rebooted
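    Steps 2-4 above map roughly onto the following commands; a minimal sketch under the assumption that the healthy node's local corosync.conf is copied over the broken one (the hostname is a placeholder):

      # copy a known-good corosync.conf from a healthy cluster member
      scp <healthy-node>:/etc/corosync/corosync.conf /etc/corosync/corosync.conf
      # finish the interrupted package configuration
      dpkg --configure -a
      # restart the cluster stack (or simply reboot the node)
      systemctl restart corosync pve-cluster pvedaemon pveproxy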
  14. 8.1 Update Hung/Failed

    Well, I figured it out: somehow corosync.conf on this node became different (versions differ) compared to the other nodes (100% sure the cluster was healthy and this node had been a member when I started the upgrade). After copying the corosync.conf file from another node and restarting pvedaemon -...
  15. 8.1 Update Hung/Failed

    The upgrade process gets stuck on setting up pve-manager 8.1.3. If I break it (CTRL+C) I see:
    root@pve2:~# pvecm nodes
    Cannot initialize CMAP service
    root@pve2:~# service pve-cluster status
    ● pve-cluster.service - The Proxmox VE cluster filesystem
    Loaded: loaded...
  16. 8.1 Update Hung/Failed

    I'm facing the same issue on one of my nodes.
    Last login: Wed Dec 6 18:21:35 2023
    root@pve2:~# fuser -v /var/run/pvescheduler.pid.lock
    Specified filename /var/run/pvescheduler.pid.lock does not exist.
    root@pve2:~# /usr/bin/pvescheduler stop
    root@pve2:~# fuser -v /var/run/pvescheduler.pid.lock...
  17. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    You can follow this thread: https://forum.proxmox.com/threads/proxmox-8-0-kernel-6-2-x-100-cpu-issue-with-windows-server-2019-vms.130727/post-601617