Search results

  1. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Here we go: mitigations=off, KSM enabled but not active so far.
    root@pve-node-03486:~# ./strace.sh 901
    strace: Process 5349 attached
    strace: Process 5349 detached
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     97.81...
  2. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Yes, this is the correct output of the command you provided earlier.
    root@pve-node-03486:~# cat ./ctrace.sh
    #!/bin/bash
    VMID=$1
    PID=$(cat /var/run/qemu-server/$VMID.pid)
    timeout 5 strace -c -p $PID
    grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/*.pressure
    for _ in {1..5}; do grep ''...
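The script quoted above combines a strace syscall summary with the cgroup pressure-stall (PSI) files for the VM's systemd scope. A hedged reconstruction for reference; the sampling loop after the forum's ellipsis is an assumption, and it must run as root on a PVE host with the given VMID running:

```shell
#!/bin/bash
# Hedged reconstruction of the thread's ctrace.sh; usage: ./ctrace.sh <VMID>
VMID=$1
PID=$(cat /var/run/qemu-server/$VMID.pid)

# Summarise which syscalls the QEMU process spends its time in (5 s sample)
timeout 5 strace -c -p "$PID"

# Print every PSI pressure file (cpu/io/memory) for the VM's cgroup;
# grep '' prefixes each line with its filename so the files are labelled
grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/*.pressure

# Assumed tail: re-sample the pressure files a few times to watch trends
for _ in {1..5}; do
  sleep 1
  grep '' /sys/fs/cgroup/qemu.slice/$VMID.scope/*.pressure
done
```

High `avg10` values in `cpu.pressure` would indicate tasks in the scope stalling on CPU, which matches the 100% CPU symptom discussed in the thread.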
  3. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Here we go...
    root@pve-node-03486:~# uname -a
    Linux pve-node-03486 6.2.16-16-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-16 (2023-10-03T05:42Z) x86_64 GNU/Linux
    mitigations=off, KSM disabled (booted without KSM at all)
    root@pve-node-03486:~# ./ctrace.sh 901
    strace: Process 5349 attached
    strace...
  4. Proxmox VE 8.0 released!

    First of all: it's not supported and we use it at our own risk. However, you should really do your best to avoid split brain: a) the PVE cluster ensures a VM config file can be assigned to only one node of the cluster; b) forget about HA in such a setup; c) don't set VMs to auto-start.
  5. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Two small notes here: 1) CPU spikes and ICMP loss appear when a number of RDS 2019 users log in (5-10 and more). If I just boot up a fresh Windows Server 2019 without RDS and workload, neither CPU spikes nor ICMP reply loss are observed. 2) I always install intel-microcode on all my nodes.
  6. Proxmox VE 8.0 released!

    Just set number of votes = 2 for the main node in the cluster config file. We use this in setups with main/backup nodes and ZFS replication. If the main node fails, you will have to manually edit the cluster config to increase the number of votes for the second node.
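For illustration, that vote change lives in the `nodelist` section of /etc/pve/corosync.conf. Node names and addresses below are made up, and `config_version` in the `totem` section must be bumped on every edit (a sketch, not official guidance):

```
nodelist {
  node {
    # hypothetical main node: two votes keeps quorum if the backup dies
    name: pve-main
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 192.0.2.10
  }
  node {
    # hypothetical backup node
    name: pve-backup
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.0.2.11
  }
}
```

The trade-off the poster mentions follows directly: if the *main* node is the one that fails, the backup's single vote is below quorum, so the config must be edited by hand before its guests can start.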
  7. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Unfortunately, the PVE devs keep rejecting requests to release a 5.15 opt-in kernel for PVE 8, even though the number of negative reports on the 6.2 kernel keeps growing.
  8. KSM Memory sharing not working as expected on 6.2.x kernel

    Why? Quite simple... I have always used KSM in my environment, and there are lots of improvements in ZFS since version 2.1.11 (the version you mentioned). As was mentioned above, KSM is one of the key features, and from my perspective 8.x cannot be production ready without it fully working...
  9. KSM Memory sharing not working as expected on 6.2.x kernel

    Would you mind compiling it with the latest ZFS and releasing it as a separate deb package?
  10. KSM Memory sharing not working as expected on 6.2.x kernel

    @aaron any news? By the way, is there any chance to build a 5.15 kernel for PVE 8 for testing, or to build 6.2 with the patch applied?
  11. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    One more observation: Disabling KSM requires rebooting the host, because stopping the KSM service (and/or disabling it) does not really stop it. ksmd becomes a ghost and cannot be killed (I didn't manage to kill it). This screen shows: kernel with mitigations=off and ksmtuned stopped and disabled (but...
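For what it's worth, ksmd is a kernel thread, so it can never be killed like a normal process. On a stock kernel the usual way to stop it and undo the page merging is the KSM sysfs switch; a sketch, to be run as root (whether this works on the affected 6.2 kernels is exactly what the thread disputes):

```shell
# Keep ksmtuned from re-enabling KSM behind your back
systemctl disable --now ksmtuned

# 2 = stop ksmd and un-merge all currently shared pages
echo 2 > /sys/kernel/mm/ksm/run

# Verify: run should read 2 and pages_shared should fall back to 0
cat /sys/kernel/mm/ksm/run /sys/kernel/mm/ksm/pages_shared
```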
  12. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Host1 - problems. CPU(s): 56 x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (2 sockets). Kernel: Linux 6.2.16-14-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-14 (2023-09-19T08:17Z). Intel microcode installed. VM: W2019 (96GB RAM + 42 cores, NUMA). Host2 - problems. CPU(s): 48 x Intel(R)...
  13. global mail backup in PMG

    Are there any plans to integrate full mail backup/restore in PMG? Something like: https://www.mailpiler.org/
  14. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Could you try continuously pinging your VM from some other device on the network, like the gateway? Huge ping delay spikes are typical for this issue. If you don't see any, then probably something else is interfering. P.S. Do not ping from another VM on the same host. P.P.S. In my case before...
  15. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Well, now it looks reasonable. Next, what about KSM? Have you disabled it?
  16. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    If I am not mistaken, mitigations=off is not applied. What filesystem is behind your root partition? In my case (ZFS) I had to edit the kernel cmdline file. Check this post: https://forum.proxmox.com/threads/disable-spectre-meltdown-mitigations.112553/post-485863
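The ZFS-root case differs because such installs boot via proxmox-boot-tool/systemd-boot, so kernel options go in /etc/kernel/cmdline rather than /etc/default/grub. A sketch of the steps, assuming that boot setup (back up the file first; run as root):

```shell
# /etc/kernel/cmdline holds a single line of kernel options;
# append mitigations=off to it
sed -i 's/$/ mitigations=off/' /etc/kernel/cmdline

# Rewrite the boot entries on the ESPs, then reboot
proxmox-boot-tool refresh
reboot

# After reboot, confirm the flag actually took effect:
cat /proc/cmdline
```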
  17. [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    If KSM is in use on your systems, then try to disable it with "systemctl disable --now ksmtuned", and on top of that cross-check that mitigations are really off with lscpu.
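For the cross-check, lscpu prints one "Vulnerability" line per known CPU issue; with mitigations=off most of them should read "Vulnerable" instead of "Mitigation: ...":

```shell
# List the per-CPU-bug mitigation status reported by the kernel
lscpu | grep -i 'vulnerability'
```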
  18. Proxmox VE 8.0 released!

    @t.lamprecht Is there any progress on sorting out the KSM degradation and the huge performance loss (100% CPU spikes) in Windows guests with the 6.2 kernels?
