Search results

  1. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    @fweber Could you please clarify whether there is any relation between kernel/numa_balancing and "Enable NUMA" in the VM config? Thanks in advance
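
    The two settings asked about here are distinct: kernel.numa_balancing is a host-wide sysctl, while "Enable NUMA" sets the per-VM numa flag in the guest config. A minimal way to inspect both; VMID 901 is borrowed from later in this thread purely as an example:

        # Host-wide automatic NUMA balancing (1 = enabled, 0 = disabled)
        cat /proc/sys/kernel/numa_balancing
        # Per-VM flag behind "Enable NUMA" in the GUI (example VMID)
        qm config 901 | grep -i numa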
  2. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    I can confirm that this setting solves the ICMP echo reply time increase and the RDP freezes almost immediately, even with KSM enabled and the 6.2 kernel (mitigations still off)
    root@046-pve-04315:~# uname -a
    Linux 046-pve-04315 6.2.16-15-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-15 (2023-09-28T13:53Z) x86_64...
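
    The setting being confirmed is presumably the workaround discussed earlier in the thread, i.e. disabling automatic NUMA balancing on the host. A sketch of applying it at runtime and persistently (the sysctl.d file name below is an arbitrary example):

        # Disable automatic NUMA balancing immediately
        echo 0 > /proc/sys/kernel/numa_balancing
        # Keep it disabled across reboots
        echo 'kernel.numa_balancing = 0' > /etc/sysctl.d/80-numa-balancing.conf
        sysctl --system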
  3. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    from my perspective it's much more correct to test it on the devs' side (if I'm not mistaken, they managed to reproduce the issue) rather than update my clients' server to the test kernel and be punished afterwards
  4. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    @fweber any updates? p.s. just want to clarify: these ICMP ping response time values are not just numbers in the shell. They are RDP session freezes. Yes, very short, but quite annoying. It's enough for the end user's mouse cursor to jerk, for them to miss a button (for example) and start complaining when...
  5. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    @fweber @fiona To get the data you requested I had to ask my users to put up with the freezes and degraded performance. So any feedback would be very much appreciated!
  6. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    The interesting thing is that the problem occurs more often when >24 vCPUs and more than 96 GB of RAM are assigned to the VM. VMs with a small number of vCPUs and little memory (like 4 vCPUs + 16 GB vRAM) are not affected
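
    For anyone trying to reproduce that size threshold, a hedged sketch of provisioning a test guest in the affected range; the VMID 902 and the exact sizes are placeholders, not values from the thread:

        # Large guest similar to the affected ones: 32 vCPUs, 128 GiB RAM, NUMA enabled
        qm set 902 --sockets 2 --cores 16 --memory 131072 --numa 1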
  7. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Our largest cluster is built on PVE 7.x with the 5.15 kernel. It works smoothly and well. I just updated one of its nodes to PVE 8 and kernel 6.2 just to check - and got the same issue with KSM and CPU spikes (which slightly froze the VM - I can see it as an ICMP echo reply time increase). All my debug data...
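
    To rule KSM in or out on a single node, a minimal sketch; ksmtuned is the daemon that drives KSM on PVE hosts, and writing 2 to the run file unmerges already-shared pages:

        # Stop KSM tuning so no new merging is triggered
        systemctl disable --now ksmtuned
        # Unmerge all currently shared pages
        echo 2 > /sys/kernel/mm/ksm/run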
  8. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Seen on different clusters built on hardware from different vendors: HP Gen8/9, Dell R730xd, Supermicro. The only thing they have in common: intel-microcode is installed on every host)
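
    For comparing affected and unaffected hosts, a quick check of whether the microcode package is present and which revision the CPUs actually booted with:

        # Installed package (if any)
        dpkg -l intel-microcode
        # Microcode revision reported by the kernel at boot
        dmesg | grep -i microcode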
  9. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    12 hours later
    Reply from 172.16.9.242: bytes=32 time=1ms TTL=124
    Reply from 172.16.9.242: bytes=32 time=1ms TTL=124
    Reply from 172.16.9.242: bytes=32 time=672ms TTL=124
    Reply from 172.16.9.242: bytes=32 time=1ms TTL=124
    Reply from 172.16.9.242: bytes=32 time=1ms TTL=124...
  10. CPU soft lockup: Watchdog: Bug: soft lockup - CPU#0 stuck for 24s!

    Take a look at this thread: https://forum.proxmox.com/threads/proxmox-8-0-kernel-6-2-x-100-cpu-issue-with-windows-server-2019-vms.130727/page-6 There are two posts from the dev team on what to do to help them investigate the problem
  11. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Here we go: mitigations=off, KSM enabled but not active so far
    root@pve-node-03486:~# ./strace.sh 901
    strace: Process 5349 attached
    strace: Process 5349 detached
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     97.81...
  12. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Yes, this is the correct output of the command you provided earlier
    root@pve-node-03486:~# cat ./ctrace.sh
    #!/bin/bash
    VMID=$1
    PID=$(cat /var/run/qemu-server/$VMID.pid)
    timeout 5 strace -c -p $PID
    grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/*.pressure
    for _ in {1..5}; do grep ''...
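
    The snippet cuts off the final loop. A self-contained sketch of such a tracing helper, assuming the truncated loop simply re-samples the cgroup pressure files once per second (that loop body is an assumption, not shown in the original post):

        #!/bin/bash
        # Usage: ./ctrace.sh <VMID>
        VMID=$1
        PID=$(cat /var/run/qemu-server/$VMID.pid)
        # Summarize the QEMU process's syscalls for 5 seconds
        timeout 5 strace -c -p "$PID"
        # Dump CPU/memory/IO pressure for the VM's cgroup
        grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/*.pressure
        # Assumed completion of the truncated loop: re-sample CPU pressure 5 times
        for _ in {1..5}; do
            grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/cpu.pressure
            sleep 1
        done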
  13. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Here we go...
    root@pve-node-03486:~# uname -a
    Linux pve-node-03486 6.2.16-16-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-16 (2023-10-03T05:42Z) x86_64 GNU/Linux
    mitigations=off, KSM disabled (booted without KSM at all)
    root@pve-node-03486:~# ./ctrace.sh 901
    strace: Process 5349 attached
    strace...
  14. Proxmox VE 8.0 released!

    First of all: it's not supported and we use it at our own risk. However, you would really have to try hard to get a split brain) a) the PVE cluster ensures that a VM config file can be assigned to only one node of the cluster b) there is no talk of HA in such a setup c) don't set VMs to auto-start
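
    As context for point a), a quick way to see the cluster state and which node currently owns a given VM config; the VMID 901 below is just an example, and the path layout is the standard pmxcfs one:

        # Cluster membership and quorum state
        pvecm status
        # Each VM config lives under exactly one node's directory in /etc/pve
        ls -l /etc/pve/nodes/*/qemu-server/901.conf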
  15. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Two small notes here: 1) CPU spikes and ICMP loss appear once a number of RDS 2019 users log in (5-10 and more). If I just boot up a fresh Windows Server 2019 without RDS and without load, neither CPU spikes nor ICMP reply loss is observed. 2) I always install intel-microcode on all my nodes
  16. Proxmox VE 8.0 released!

    Just set the number of votes to 2 for the main node in the cluster config file. We use this in setups with main-backup nodes and ZFS replication. If the main node fails, you will have to modify the cluster config manually to increase the number of votes for the second node
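
    A minimal sketch of what that looks like in the nodelist section of /etc/pve/corosync.conf, assuming two nodes named pve-main and pve-backup (names and addresses are placeholders; remember to bump config_version when editing the file):

        nodelist {
          node {
            name: pve-main
            nodeid: 1
            quorum_votes: 2
            ring0_addr: 10.0.0.1
          }
          node {
            name: pve-backup
            nodeid: 2
            quorum_votes: 1
            ring0_addr: 10.0.0.2
          }
        }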
  17. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Unfortunately, the PVE devs keep rejecting requests to release a 5.15 opt-in kernel for PVE 8, even though the amount of negative feedback on the 6.2 kernel keeps growing
  18. KSM Memory sharing not working as expected on 6.2.x kernel

    Why? Quite simple... I rely on KSM in my environment, and there are lots of improvements in ZFS since version 2.1.11 (the version you mentioned). As was mentioned above, KSM is one of the key features, and from my perspective 8.x cannot be production ready without it fully working...
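
    For reference, the standard sysfs counters show whether KSM is actually merging anything on a node; a nonzero pages_sharing value means pages are being shared:

        # 0 = stopped, 1 = running, 2 = unmerge everything
        cat /sys/kernel/mm/ksm/run
        # Current sharing statistics
        grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing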
  19. KSM Memory sharing not working as expected on 6.2.x kernel

    Would you mind compiling it with the latest ZFS and releasing it as a separate deb package?