That article was updated yesterday to include that information, as you can see in the history: https://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_6.x_to_7.0&action=history
And did you follow the guide?

> After 2 weeks, my Windows Server 2019 VM was shut down again
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-4
pve-kernel-helper: 7.2-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-10
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
@Stoiko Ivanov Giving it a go... Unpinned kernel 5.13.19-6-pve and set kvm.tdp_mmu=N.
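In case it helps anyone else trying the same workaround, here is one way to set the parameter on a host that boots via GRUB (a sketch, not official guidance; hosts that boot via systemd-boot keep their command line in /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

```shell
# Disable KVM's TDP MMU via the kernel command line (GRUB-booted host).
# 1. Edit /etc/default/grub and append the parameter, e.g.:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet kvm.tdp_mmu=N"
# 2. Regenerate the GRUB config and reboot:
update-grub
reboot

# 3. After the reboot, verify the parameter took effect (should print N):
cat /sys/module/kvm/parameters/tdp_mmu
```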
I don't have a reliable way to recreate the issue, but my Windows 11 VM would usually dump on me within a few days' time. I'll report back!
Can confirm newest Kernel and kvm.tdp_mmu=N for older CPUs finally helps....
I think the statement about older CPUs is misleading. I have an 11th gen Intel host with this issue, yet another system with a 9th gen Intel is fine.
kvm.tdp_mmu=N seems to have helped though. It's been 2 days now.
Somehow CPU usage went up a little after applying kvm.tdp_mmu=N, though. Not sure if it's related, but it happened right when I went back to a 5.15.x kernel from 5.13 and applied the kvm.tdp_mmu=N workaround.
[Attachment 38790]
What's considered an older CPU? Also out of curiosity what does that setting do?
What a weird setting, surely you wouldn't want your Windows VMs to crash

> This setting seems to prevent Windows Server 2022 from randomly crashing within Proxmox 7.0
I checked... and also see a slight bump in CPU usage. Nothing too drastic, just a few percent higher on average.

> Somehow CPU Usage went up a little though, after applying kvm.tdp_mmu=N
With kvm.tdp_mmu=N? Only the new kernel, or something else?

> From kernel PVE 5.15.35-6 (Fri, 17 Jun 2022 13:42:35 +0200) no more crashes happened
Only with the new kernel.

> With kvm.tdp_mmu=N? Only the new kernel, or something else?
Still stable... so it looks like this, or PVE 5.15.35-6 (Fri, 17 Jun 2022 13:42:35 +0200), finally fixes this.

> Can confirm newest Kernel and kvm.tdp_mmu=N for older CPUs finally helps....
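For anyone reporting back here: a quick way to confirm what your host is actually running (standard Proxmox/Linux commands, shown as a sketch):

```shell
# Running kernel build, e.g. 5.15.35-2-pve:
uname -r

# Full package/kernel versions, as posted earlier in this thread:
pveversion -v

# Current state of the workaround: Y = TDP MMU enabled, N = disabled
cat /sys/module/kvm/parameters/tdp_mmu
```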