Win11 VM opening many tabs at once crashes proxmox host

Snuupy

Member
Edits:

Edit 1: I tried installing a new Windows VM following https://pve.proxmox.com/wiki/Windows_10_guest_best_practices and loaded a bunch of tabs in LibreWolf. No problem, no crashing at first, but it now crashes too (see Edit 3). I also tried removing the network driver and reinstalling it in my old (bare metal -> Proxmox) VM, loaded all the tabs in the browser, and it crashed the host again. This is so weird.

Edit 2: I ran memtest86+; no RAM issues. The screen shows "Pass".

Edit 3: The newly created VM with the old librewolf profile crashes. Looks like it's reproducible!

Hi,

Proxmox newbie here. This is a brand-new Proxmox install (latest). I migrated a Win11 VM from bare metal (vhdx) and imported it into Proxmox. My existing LibreWolf installation has many (400+) tabs open, and when I open LibreWolf in the VM and the browser tries to load all the tabs, the Proxmox HOST (not the VM) crashes and instantly reboots. This happens every single time I open the browser and is fully reproducible. You can speed up the crash by cycling through the tabs it is trying to load (Ctrl-Tab repeatedly). I tried to grab some logs from both journalctl and dmesg, but no (useful) logs show up. I have attached them regardless:

Code:
Jan 30 01:53:42 SnuUM780 dhcpcd[823]: fwln100i0: probing address 192.168.1.138/24
Jan 30 01:53:43 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu0, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0 # This shows up on boot of the VM, and does not crash the system.
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu1, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu2, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 dhcpcd[823]: fwpr100p0: probing for an IPv4LL address
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu3, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu4, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu5, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu6, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu7, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
Jan 30 01:53:47 SnuUM780 dhcpcd[823]: fwln100i0: leased 192.168.1.138 for 43200 seconds
Jan 30 01:53:47 SnuUM780 dhcpcd[823]: fwln100i0: adding route to 192.168.1.0/24
Jan 30 01:53:47 SnuUM780 dhcpcd[823]: fwln100i0: adding default route via 192.168.1.1
Jan 30 01:53:49 SnuUM780 dhcpcd[823]: fwpr100p0: using IPv4LL address 169.254.231.250
Jan 30 01:53:49 SnuUM780 dhcpcd[823]: fwpr100p0: adding route to 169.254.0.0/16
###### CRASH HAPPENS HERE when trying to load browser #####
-- Boot 0771c78d5d284cd9a97fcb75db813d07 --
Jan 30 01:54:47 SnuUM780 kernel: Linux version 6.5.11-7-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils f>
Jan 30 01:54:47 SnuUM780 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.5.11-7-pve root=/dev/mapper/pve-root ro quiet

In Proxmox console I can see:

Status: stopped: unable to read tail (got 0 bytes)

I tried both the VirtIO and the Intel E1000 network adapter; both crash. I also tried disabling the QEMU Guest Agent, which did not change anything either. I then tried to buy more time to read any kernel panic messages before the reboot by setting kernel.panic:

Code:
root@SnuUM780:~# nano /etc/sysctl.conf
root@SnuUM780:~# sysctl -p
kernel.panic = 20
root@SnuUM780:~# cat /proc/sys/kernel/panic
20
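
(The line added to /etc/sysctl.conf is presumably just the following, matching the sysctl -p output above:)

Code:
kernel.panic = 20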

but after all that, the Proxmox host still INSTANTLY reboots when I open my tabs in LibreWolf in the Win11 VM.

It smells to me like a network driver stack overflow type of thing, but I'm not sure where to start digging. As far as I understand, even if a VM crashes, it should not bring down the host. However, the host resets, consistently, every single time I try to load a bunch of browser tabs at once. Could I get some help please?

Code:
root@SnuUM780:/etc/pve# cat /etc/pve/local/qemu-server/100.conf
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: local:iso/virtio-win.iso,media=cdrom,size=612812K
machine: pc-i440fx-8.1
memory: 24576
meta: creation-qemu=8.1.2,ctime=1706436817
name: Win11
net0: virtio=BC:24:11:49:1E:63,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-lvm:vm-100-disk-2,discard=on,iothread=1,size=262164M,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=xxxxxxxxxxxxxxxxxxxxx
sockets: 1
tpmstate0: local-lvm:vm-100-disk-1,size=4M,version=v2.0
vga: virtio
vmgenid: xxxxxxxxxxxxxxxx

The host is a Minisforum UM780 (7840HS) with an RTL8125 Ethernet adapter. It is connected to a 1 GbE switch running OpenWrt. lspci -vvvv shows:

01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)

Code:
root@SnuUM780:~/# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp1s0 inet manual
        metric 20

auto vmbr0
iface vmbr0 inet dhcp
        # address 192.168.1.198/24
        # gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        metric 10
# iface wlp2s0 inet manual

allow-hotplug wlp2s0
auto wlp2s0
iface wlp2s0 inet dhcp
        wireless-essid xxxxxxxxxxxxxxx
        wpa-psk xxxxxxxxxxxxxxxxxxxxxxxxxxx
        metric 100

I have disabled PSS in the BIOS, and I also enabled the https://github.com/joakimkistowski/amd-disable-c6 service to disable the C6 power-saving state, because I heard it was an issue for these models.

Code:
root@SnuUM780:~# pveversion -v                                                                                          
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)                                                                        
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)                                                            
proxmox-kernel-helper: 8.1.0                                                                                            
proxmox-kernel-6.5: 6.5.11-7                                                                                            
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7                                                                            
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4                                                                            
ceph-fuse: 17.2.7-pve2                                                                                                  
corosync: 3.1.7-pve3                                                                                                    
criu: 3.17.1-2                                                                                                          
glusterfs-client: 10.3-5                                                                                                
ifupdown2: 3.2.0-1+pmx8                                                                                                
ksm-control-daemon: 1.4-1                                                                                              
libjs-extjs: 7.0.0-4                                                                                                    
libknet1: 1.28-pve1                                                                                                    
libproxmox-acme-perl: 1.5.0                                                                                            
libproxmox-backup-qemu0: 1.4.1                                                                                          
libproxmox-rs-perl: 0.3.3                                                                                              
libpve-access-control: 8.0.7                                                                                            
libpve-apiclient-perl: 3.3.1                                                                                            
libpve-common-perl: 8.1.0                                                                                              
libpve-guest-common-perl: 5.0.6                                                                                        
libpve-http-server-perl: 5.0.5                                                                                          
libpve-network-perl: 0.9.5                                                                                              
libpve-rs-perl: 0.8.8                                                                                                  
libpve-storage-perl: 8.0.5                                                                                              
libspice-server1: 0.15.1-1                                                                                              
lvm2: 2.03.16-2                                                                                                        
lxc-pve: 5.0.2-4                                                                                                        
lxcfs: 5.0.3-pve4                                                                                                      
novnc-pve: 1.4.0-3                                                                                                      
proxmox-backup-client: 3.1.2-1                                                                                          
proxmox-backup-file-restore: 3.1.2-1                                                                                    
proxmox-kernel-helper: 8.1.0                                                                                            
proxmox-mail-forward: 0.2.3                                                                                            
proxmox-mini-journalreader: 1.4.0                                                                                      
proxmox-offline-mirror-helper: 0.6.4                                                                                    
proxmox-widget-toolkit: 4.1.3                                                                                          
pve-cluster: 8.0.5                                                                                                      
pve-container: 5.0.8                                                                                                    
pve-docs: 8.1.3                                                                                                        
pve-edk2-firmware: 4.2023.08-3                                                                                          
pve-firewall: 5.0.3                                                                                                    
pve-firmware: 3.9-1                                                                                                    
pve-ha-manager: 4.0.3                                                                                                  
pve-i18n: 3.2.0                                                                                                        
pve-qemu-kvm: 8.1.2-6                                                                                                  
pve-xtermjs: 5.3.0-3                                                                                                    
qemu-server: 8.0.10                                                                                                    
smartmontools: 7.3-pve1                                                                                                
spiceterm: 3.3.0                                                                                                        
swtpm: 0.8.0+pve1                                                                                                      
vncterm: 1.8.0                                                                                                          
zfsutils-linux: 2.2.2-pve1

and dmesg: https://0bin.xyz/AA3NXGFQFDN44SKXXULKJOYSDE

I did try to install all of the VirtIO guest drivers in the Windows guest, but some of them didn't install because of certificate issues from Red Hat(?). The VirtIO storage and network drivers both seem to be installed properly, though.
 
A 2nd journalctl after a crash on the new VM:

Code:
root@SnuUM780:~# journalctl -xeb-1
Jan 30 05:30:32 SnuUM780 kernel: fwln101i0: entered allmulticast mode
Jan 30 05:30:32 SnuUM780 kernel: fwln101i0: entered promiscuous mode
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jan 30 05:30:32 SnuUM780 kernel: tap101i0: entered allmulticast mode
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jan 30 05:30:32 SnuUM780 dhcpcd[848]: fwpr101p0: soliciting an IPv6 router
Jan 30 05:30:32 SnuUM780 pvedaemon[1336]: <root@pam> end task UPID:SnuUM780:000010E1:00018DB0:65B8F9F7:qmstart:101:root@pam: OK
Jan 30 05:30:32 SnuUM780 dhcpcd[848]: fwpr101p0: soliciting a DHCP lease
Jan 30 05:30:33 SnuUM780 pvedaemon[1335]: <root@pam> starting task UPID:SnuUM780:0000115B:00018E2A:65B8F9F9:vncproxy:101:root@pam:
Jan 30 05:30:33 SnuUM780 pvedaemon[4443]: starting vnc proxy UPID:SnuUM780:0000115B:00018E2A:65B8F9F9:vncproxy:101:root@pam:
Jan 30 05:30:33 SnuUM780 dhcpcd[848]: fwln101i0: soliciting an IPv6 router
Jan 30 05:30:33 SnuUM780 dhcpcd[848]: fwln101i0: rebinding lease of 192.168.1.233
Jan 30 05:30:34 SnuUM780 dhcpcd[848]: fwln101i0: NAK: address in use from 192.168.1.1
Jan 30 05:30:34 SnuUM780 dhcpcd[848]: fwln101i0: message: address in use
Jan 30 05:30:34 SnuUM780 dhcpcd[848]: fwln101i0: soliciting a DHCP lease
Jan 30 05:30:37 SnuUM780 dhcpcd[848]: fwpr101p0: probing for an IPv4LL address
Jan 30 05:30:38 SnuUM780 dhcpcd[848]: fwln101i0: offered 192.168.1.125 from 192.168.1.1
Jan 30 05:30:38 SnuUM780 dhcpcd[848]: fwln101i0: ignoring offer of 192.168.1.125 from 192.168.1.1
Jan 30 05:30:38 SnuUM780 dhcpcd[848]: fwln101i0: ignoring offer of 192.168.1.125 from 192.168.1.1
Jan 30 05:30:38 SnuUM780 dhcpcd[848]: fwln101i0: ignoring offer of 192.168.1.125 from 192.168.1.1
Jan 30 05:30:38 SnuUM780 dhcpcd[848]: fwln101i0: probing address 192.168.1.125/24
Jan 30 05:30:43 SnuUM780 dhcpcd[848]: fwln101i0: leased 192.168.1.125 for 43200 seconds
Jan 30 05:30:43 SnuUM780 dhcpcd[848]: fwln101i0: adding route to 192.168.1.0/24
Jan 30 05:30:43 SnuUM780 dhcpcd[848]: fwln101i0: adding default route via 192.168.1.1
Jan 30 05:30:43 SnuUM780 dhcpcd[848]: fwpr101p0: using IPv4LL address 169.254.140.84
Jan 30 05:30:43 SnuUM780 dhcpcd[848]: fwpr101p0: adding route to 169.254.0.0/16
Jan 30 05:31:26 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 8
Jan 30 05:31:34 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 14
Jan 30 05:31:48 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 15
Jan 30 05:32:03 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 20
Jan 30 05:32:23 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 4
Jan 30 05:32:27 SnuUM780 dhclient[1025]: No DHCPOFFERS received.
Jan 30 05:32:27 SnuUM780 dhclient[1025]: No working leases in persistent database - sleeping.
Jan 30 05:34:17 SnuUM780 pvedaemon[1335]: <root@pam> starting task UPID:SnuUM780:00001458:0001E608:65B8FAD9:vncproxy:100:root@pam:
Jan 30 05:34:17 SnuUM780 pvedaemon[5208]: starting vnc proxy UPID:SnuUM780:00001458:0001E608:65B8FAD9:vncproxy:100:root@pam:
Jan 30 05:35:18 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 3
Jan 30 05:35:21 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 3
Jan 30 05:35:24 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 5
Jan 30 05:35:29 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 10
Jan 30 05:35:39 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 10
Jan 30 05:35:49 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 15
Jan 30 05:36:04 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 14
Jan 30 05:36:15 SnuUM780 pveproxy[1340]: worker 1343 finished
Jan 30 05:36:15 SnuUM780 pveproxy[1340]: starting 1 worker(s)
Jan 30 05:36:15 SnuUM780 pveproxy[1340]: worker 5559 started
Jan 30 05:36:17 SnuUM780 pveproxy[5558]: got inotify poll request in wrong process - disabling inotify
Jan 30 05:36:18 SnuUM780 dhclient[1025]: DHCPDISCOVER on wlp2s0 to 255.255.255.255 port 67 interval 1
Jan 30 05:36:19 SnuUM780 dhclient[1025]: No DHCPOFFERS received.
Jan 30 05:36:19 SnuUM780 dhclient[1025]: No working leases in persistent database - sleeping.
lines 2234-2286/2286 (END)

but the crash was at 05:37:xx, so the logs don't even show the crash; there's no trace of it at all.
 
Hi, my guess is that this is either a CPU (hardware) issue or a KVM bug (kernel), but in either case CPU-related.

Could you try setting ignore_msrs to 1/Y in sysfs? Like this:
Code:
echo Y > /sys/module/kvm/parameters/ignore_msrs
(This only sets it temporarily; to make it persistent you have to use a file in /etc/modprobe.d, see 'man modprobe.d' and 'modinfo kvm' for details on how to do that.)
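
(For reference, a minimal sketch of the persistent variant; the file name below is just an example:)

Code:
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=Y

then reload the kvm/kvm_amd modules or reboot so the option takes effect.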

Or could you try a CPU model other than 'host', e.g. kvm64?
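
(If you prefer the CLI to the web UI, the same change can be made with qm, e.g. for VMID 100 as in the config above:)

Code:
qm set 100 --cpu kvm64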
 
Hi, it works! Thank you for the reply!

I switched the CPU model from host to kvm64 as you suggested, and I am now able to load all 400 of my tabs on session resume.

Do you happen to know why? I know the 7840HS chipset is very new.

Maybe kernels are not yet ready for the 7840/7940H/S/X chipset?
What is the optimal CPU version to use? I will try a few more like x86-64-v2-AES or v3/v4, but I thought "host" was the safest option.

How did you know? What was your thought process? Because you were right on the ball.
 
How did you know? What was your thought process? Because you were right on the ball.
It was more of an educated guess. The symptom (spontaneous crashes of the host without any logs) indicated a CPU/hardware problem to me (memory issues are a bit unpredictable and normally cause varying symptoms; disk-related things are often more obvious).
Also, a line like
Jan 30 01:53:44 SnuUM780 kernel: kvm_amd: kvm [2449]: vcpu3, guest rIP: 0xfffff85ea54bad89 Unhandled WRMSR(0xc0010115) = 0x0
tells me that the VM might be doing some weird/unusual/unsupported things with the CPU, so I leaned in that direction.

It still might be something different. For example, it could be a hardware issue in e.g. the power supply, and a 'lesser' CPU type than host might simply not draw as much power.

Edit: sorry, I did not answer all the questions:

Do you happen to know why? I know the 7840HS chipset is very new.
No, but of course it could be that there is some bug in the kernel.
What is the optimal CPU version to use? I will try a few more like x86-64-v2-AES or v3/v4, but I thought "host" was the safest option.
The new default is x86-64-v2-AES, but I'd test different models (the one aligning with your physical host should be a good start) and see which brings the most performance.
 
It was more of an educated guess. The symptom (spontaneous crashes of the host without any logs) indicated a CPU/hardware problem to me (memory issues are a bit unpredictable and normally cause varying symptoms; disk-related things are often more obvious). Also, the "Unhandled WRMSR" line tells me that the VM might be doing some weird/unusual/unsupported things with the CPU, so I leaned in that direction.
I did search for "Unhandled WRMSR" errors, the forums told me to ignore it! ;_;
It still might be something different. For example, it could be a hardware issue in e.g. the power supply, and a 'lesser' CPU type than host might simply not draw as much power.

I don't think it's a PSU issue; I stress-tested with s-tui for 10-15 minutes at 60-65 W TDP and there were no stability issues, although I can see how that wouldn't necessarily rule it out.

The new default is x86-64-v2-AES, but I'd test different models (the one aligning with your physical host should be a good start) and see which brings the most performance.
I ran it with x86-64-v2-AES, no issues again. Then I wondered what x86-64 level a 7840HS is, and this script says it should be v4: https://github.com/HenrikBengtsson/x86-64-level

Code:
root@SnuUM780:~# ./x86-64-level --verbose
4

Is there any way to confirm what Proxmox sets if I click "Host" in the UI, and whether it is consistent with this script? If both are setting x86-64-v4... then I'm a bit lost as to why it stopped crashing now.
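
(One way to see exactly what QEMU is given when 'host' is selected is to dump the generated command line and look at the -cpu argument, e.g. for VMID 100:)

Code:
# print the QEMU command line Proxmox generates for VMID 100
# and pull out the -cpu argument that "host" expands to
qm showcmd 100 | tr ' ' '\n' | grep -A1 '^-cpu$'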
 
'host' is a special setting that passes through most info and simply uses the host's capabilities; it is not an explicit x86-64 level per se.
You could try the various EPYC models, they should roughly match the consumer models too.
Alternatively you can also try 'max', which is similar to host but not completely identical.
Quote from the QEMU help:

Code:
x86 host                  processor with all supported host features                            
x86 max                   Enables all features supported by the accelerator in the current host

Or you can try to manually create a CPU model:

https://pve.proxmox.com/wiki/Manual:_cpu-models.conf
 
I haven't had a chance to test everything yet, but 'max' also crashes the host. It looks like anything that dynamically detects the CPU's capabilities/features breaks it for some reason. Is this something I should report upstream (i.e. to QEMU)?
 
Maybe, but it does sound like a hardware issue, where some kind of load/stress triggers a fault (memory, CPU, motherboard or lack of power).
 
That's possible, but when I loaded all the tabs on bare-metal Windows 11, there was no problem. There have previously been reports of UM790 models crashing either at low power states or under stress, but a new motherboard revision was released to fix those issues, and I supposedly already have the new mobo revision (v1 xx-A, the old one was v1.00). It could be another mobo fault, but I don't know how to test for that. Is there a specific stress-ng test that exercises this type of workload?

I could also open a warranty/RMA request to exchange the unit if I am absolutely sure it's a hardware issue, but I would have to pay shipping to China and possibly back to Canada as well, so the total would be either $100 (round trip) or $50 (one way).

Are there any other tests I should run?
 

How do I generate a cpu-models.conf that is equivalent, or as close as possible, to what setting cputype: host in the UI would give me? That way I can test each flag one by one (or binary-search for the flag that is causing the problem). For example, svm, among other flags (supported by host), is missing if I select EPYC-v3, but I do not have a diff between cputype: host and EPYC-v3. I would use EPYC-v4 (Zen 4), but that is not listed in Proxmox (though it is available in QEMU).

I tried lscpu on the host and tried to add all of those flags to cpu-models.conf, but that obviously did not work; it adds a bunch of flags that are unsupported (by QEMU?).

According to this (outdated) doc, https://wiki.qemu.org/Features/CPUModels#-cpu_host_vs_-cpu_best

  • -cpu host will be the "all-you-can-enable" mode, that will enable every bit from GET_SUPPORTED_CPUID on the VCPU
One would think host means match the host, not enable everything optimistically and hope it works.

so maybe one of the emulated CPU feature sets/flags, which is not necessarily enabled on bare metal, is being enabled and is causing an issue in VMs.

On the host itself:

Code:
root@SnuUM780:~# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  BIOS Vendor ID:        Advanced Micro Devices, Inc.
  Model name:            AMD Ryzen 7 7840HS w/ Radeon 780M Graphics
    BIOS Model name:     AMD Ryzen 7 7840HS w/ Radeon 780M Graphics      Unknown CPU @ 3.8GHz
    BIOS CPU family:     107
    CPU family:          25
    Model:               116
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            1
    CPU(s) scaling MHz:  14%
    CPU max MHz:         6080.0000
    CPU min MHz:         400.0000
    BogoMIPS:            7585.58
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht sysca
                         ll nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apici
                         d aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16
                         c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt
                          tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon
                         _v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx5
                         12dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1
                         xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoin
                         vd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthre
                         shold v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmu
                         lqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d

In a VM with "cputype: host":

Code:
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities

Which means in a VM, the following flags have been removed (compared to the host): constant_tsc amd_lbr_v2 nonstop_tsc aperfmperf rapl monitor extapic ibs skinit wdttce topoext perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate mba perfmon_v2 cqm rdt_a cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local irperf rdpru cppc svm_lock decodeassists x2avic v_spec_ctrl overflow_recov succor smca

and in a VM, the following flags have been added (compared to host): tsc_known_freq tsc_deadline_timer hypervisor ssbd tsc_adjust wbnoinvd arch_capabilities
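
(For reference, a diff like the above can be produced with a small shell one-liner; a sketch, assuming the VM's lscpu output has been saved to /tmp/lscpu-vm.txt:)

Code:
# compare CPU flags: lines starting with '<' are host-only, '>' are VM-only
diff <(lscpu | sed -n 's/^ *Flags: *//p' | tr ' ' '\n' | sort) \
     <(sed -n 's/^ *Flags: *//p' /tmp/lscpu-vm.txt | tr ' ' '\n' | sort)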

so maybe one of the emulated CPU feature sets/flags, which is not necessarily enabled on bare metal, is being enabled and is causing an issue in VMs? Once I add those 7 CPU flags to cpu-models.conf, QEMU complains that they are not valid flags (such as kvm: Property 'host-x86_64-cpu.arch_capabilities' not found). I removed each of the "invalid flags" until the VM was willing to start up, so I end up with this:

Edit: I am testing the following config:

Code:
cpu-model: 7840HS
    flags -hypervisor;-ssbd;-tsc_adjust;-wbnoinvd
    phys-bits host
    hv-vendor-id proxmox
    reported-model host

Please tell me if there is a better way to do this than a minus sign for each flag (especially if software in the future adds more flags that I would have to manually diff and disable).

Edit 2: Upon testing, the VM no longer crashes when loading the 400 session-resume tabs. To confirm I am not insane, I changed the config back to cputype: host, tried to load the browser again in the Win11 VM (resuming the 400 tabs), and the host immediately crashed. So it turns out I am not insane :)

What are the best practices for this type of thing? What is the expected behavior when an emulated flag is called but the CPU doesn't natively support it? I assume it's not supposed to crash (the host!). Do you want me to go through and test which flag specifically caused the issue? Or is there something else that should be done?
 
Hello there,
just here to say that I also have a UM790Pro and also have the entire Proxmox host crash when I do some stuff in a VM with CPU type = host.

When I say "crash", I mean it restarts.
There are absolutely no logs. I see the usual log entries, and then I have a "-- boot" line right where it crashes.
So I guess, the machine simply restarts.

If I change the CPU type to x86-64-v4 I have no crashes. I tested that for 1 week and had no crashes at all. Super stable.
Then today I tried again to set CPU to host and had in barely 10 minutes a Proxmox crash.
Then found this thread.

There's something else too.
When I set to host, the VM feels a bit more sluggish to me. Some things take longer.
Windows starts slower.
Tested as well on a MacOS VM and with CPU type = host I also had crashes.

One more thing, for the UM790Pro we can change the power profile of the machine in the BIOS.
I set everything the Performance, and put the max watts to 95W or something.
But this didn't do anything regarding the crashes.
I also stress tested the machine outside of the VM, and had no crashes.

It is very specific to CPU type = host

My CPU
AMD Ryzen 9 7940HS

EDIT:
Right now I'm trying CPU type = host + `echo Y > /sys/module/kvm/parameters/ignore_msrs`
And will report back later if that changes anything
 
Hi, in your /etc/pve/virtual-guest/cpu-models.conf try:

Code:
cpu-model: 7840HS
    flags -hypervisor;-ssbd;-tsc_adjust;-wbnoinvd
    phys-bits host
    hv-vendor-id proxmox
    reported-model host

You can replace 7840HS with 7940HS; they are the same apart from binning differences.

Also, in the VM, set the CPU type to custom-7840HS (or custom-7940HS).
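
(A sketch of the corresponding CLI command, assuming VMID 100; custom models from cpu-models.conf are referenced with a custom- prefix:)

Code:
qm set 100 --cpu custom-7840HS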
 
I also have a UM790Pro and also have the entire Proxmox host crash when I do some stuff in a VM with CPU type = host.
UM790 = UM780; the only difference is that the 790 has a 7940HS and the 780 has a 7840HS. Drivers, BIOS, almost everything is the same. The 7940HS has an unlocked CPU multiplier so you can overclock it, but because these mini PCs are limited by thermals/power (TDP) anyway, they perform almost identically, to the degree that even the same supporting software is used. I bought a UM780 because it was so much cheaper than a UM790. I wrote a thread on it here: https://www.reddit.com/r/MiniPCs/comments/189gm7p/um780_initial_impressions/

When I say "crash", I mean it restarts.
Yes, that is my experience exactly.
There are absolutely no logs. I see the usual log entries, and then I have a "-- boot" line right where it crashes.
So I guess, the machine simply restarts.
Yes.
If I change the CPU type to x86-64-v4 I have no crashes. I tested that for 1 week and had no crashes at all. Super stable.
I also had an uptime of 20+ days on Windows 11 running on bare metal, which is what gave me the confidence to say I highly doubt it is a hardware issue. The chipset is so new that a lot of its features are still being added to the kernel, drivers, etc., even on the Windows side, let alone the Linux side. For example, the IPU (AI chip) drivers have only just been released, and ROCm 6.0 was released just a month ago to support the 780M (the iGPU in the UM780/790), so I am not surprised we are running into issues like this. It's the cost of being on bleeding-edge hardware :)

Then today I tried again to set CPU to host and had in barely 10 minutes a Proxmox crash.
My Win11 VM now has an uptime of 3+ hours (previously it crashed within 5 minutes), so I believe the above config solves the crashing issue. I would also disable PSS and C6 until you have everything stable, then re-enable PSS/C6 one by one. Currently I have PSS enabled and still no crashing (yet, fingers crossed). I expect I may not be able to re-enable C6 at the end anyway.

Then found this thread.

There's something else too.
When I set to host, the VM feels a bit more sluggish to me. Some things take longer.
Windows starts slower.
Tested as well on a MacOS VM and with CPU type = host I also had crashes.
I have not tried macOS yet. I know macOS doesn't have hardware acceleration, so I was going to put it on a slower device so it wouldn't use up my 7840HS CPU cycles.

One more thing, for the UM790Pro we can change the power profile of the machine in the BIOS.
I set everything the Performance, and put the max watts to 95W or something.
Setting the TDP above 60-65 W does not make any difference to performance because the unit is thermally limited; it will thermal-throttle at around 90 C. I recommend you lower the TDP back down so performance can be sustained instead of hitting thermal limits and then throttling. On your motherboard, next to the RAM, is the mobo revision v1.00 or v1.00xxxxxA? Mine is the latter. There are many reports saying v1.00 crashes a lot due to RAM instability from a missing capacitor (poor design), and owners had to RMA the device. Minisforum exchanged it for them, but from what I've heard they had to pay to ship it back to Hong Kong (Minisforum covered return shipping if you bought it from a local retailer like Amazon).

But this didn't do anything regarding the crashes.
I also stress tested the machine outside of the VM, and had no crashes.
Are you on BIOS 1.09? It has been very stable for me. I even enabled PSS again (while still running disable-c6)

It is very specific to CPU type = host
Try the config in my previous post, let me know
if it doesn't work give me the output of lscpu from host and lscpu from a VM but with cputype: host
My CPU
AMD Ryzen 9 7940HS

EDIT:
Right now I'm trying CPU type = host + `echo Y > /sys/module/kvm/parameters/ignore_msrs`
AFAIK this is cosmetic: it only stops the messages from showing up in journalctl, it doesn't change anything else AFAIK.
And will report back later if that changes anything
Try the config in my previous post, let me know
if it doesn't work give me the output of lscpu from host and lscpu from a VM but with cputype: host

I don't think these are hardware issues; I think it's just new hardware that the software hasn't been updated to support yet.
 

I set the TDP up just in case there was not enough juice to run the CPU under intensive tasks, as I thought maybe the reboots were due to insufficient power. But that doesn't seem to be the case. I also want to revert back to balanced mode/auto, as idling costs me about an extra 5 W right now for no apparent benefit.

My version is V1.0-SKU16-A, and the BIOS is 1.09.
I do have C6 disabled, but that's fairly recent. I used the machine for a month or so with C6 enabled and had no issues.
It's when I started to play with VMs and used CPU type = host that I had crashes, and I was told to disable C6.
I do not think I have changed any PSS settings, though. I don't recall.

Try the config in my previous post, let me know

I'm not sure which config you are talking about. You mean using CPU type = `x86-64-v4`? Yeah, that works fine; it has been running for a week without a crash.
 

No. It looks like the previous message is showing "This message is awaiting moderator approval, and is invisible to normal visitors." for some reason, so let me post it here again:

in /etc/pve/virtual-guest/cpu-models.conf

Code:
cpu-model: 7840HS
    flags -hypervisor;-ssbd;-tsc_adjust;-wbnoinvd
    phys-bits host
    hv-vendor-id proxmox
    reported-model host

Edit: nope, that just killed it. The above config doesn't work.
Edit 2: the above doesn't fix all the crashing, but it's definitely more stable. Before, it was a guaranteed crash every time the browser loaded; now it only crashes sometimes. I don't know why that is. Maybe someone will have a suggestion, but there is definitely something weird going on with the flags that leads to the host crashing.
 
lscpu from VM (Ubuntu guest)

Code:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   48 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          1
On-line CPU(s) list:             0
Vendor ID:                       AuthenticAMD

Model name:                      AMD Ryzen 9 7940HS w/ Radeon 780M Graphics


CPU family:                      25
Model:                           116
Thread(s) per core:              1
Core(s) per socket:              1
Socket(s):                       1
Stepping:                        1



BogoMIPS:                        7984.88
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization:                  AMD-V
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       64 KiB (1 instance)
L1i cache:                       64 KiB (1 instance)
L2 cache:                        512 KiB (1 instance)
L3 cache:                        16 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0

Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected

Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

lscpu from pve host
Code:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             16
On-line CPU(s) list:                0-15
Vendor ID:                          AuthenticAMD
BIOS Vendor ID:                     Advanced Micro Devices, Inc.
Model name:                         AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
BIOS Model name:                    AMD Ryzen 9 7940HS w/ Radeon 780M Graphics      Unknown CPU @ 4.0GHz
BIOS CPU family:                    107
CPU family:                         25
Model:                              116
Thread(s) per core:                 2
Core(s) per socket:                 8
Socket(s):                          1
Stepping:                           1
CPU(s) scaling MHz:                 26%
CPU max MHz:                        6228.0000
CPU min MHz:                        400.0000
BogoMIPS:                           7985.13
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                     AMD-V


L1d cache:                          256 KiB (8 instances)
L1i cache:                          256 KiB (8 instances)
L2 cache:                           8 MiB (8 instances)
L3 cache:                           16 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

To recap, here are the flags on the VM which are not on the host:
Code:
arch_capabilities
hypervisor
tsc_adjust
tsc_deadline_timer
tsc_known_freq

Here are the flags on the host which are not on the VM:
Code:
amd_lbr_v2
aperfmperf
bpext
cat_l3
cdp_l3
constant_tsc
cpb
cppc
cqm
cqm_llc
cqm_mbm_local
cqm_mbm_total
cqm_occup_llc
decodeassists
extd_apicid
ht
hw_pstate
ibrs_enhanced
ibs
irperf
mce
monitor
mwaitx
nonstop_tsc
overflow_recov
perfctr_llc
perfctr_nb
perfmon_v2
rapl
rdpru
rdt_a
skinit
smca
succor
svm_lock
tce
topoext
v_spec_ctrl
vnmi
wdt
x2avic
 

So in /etc/pve/virtual-guest/cpu-models.conf did you try:

Code:
cpu-model: 7840HS
    flags -arch_capabilities;-hypervisor;-tsc_adjust;-tsc_deadline_timer;-tsc_known_freq
    phys-bits host
    hv-vendor-id proxmox
    reported-model host

It should complain that some options aren't found, and then you can remove those from the flags (e.g. kvm: Property 'host-x86_64-cpu.arch_capabilities' not found means removing arch_capabilities from the flags; repeat until the VM is able to boot).
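
(In other words, the trial-and-error loop is roughly the following; a sketch assuming the custom model is already attached to VMID 100:)

Code:
qm start 100
# example failure: kvm: Property 'host-x86_64-cpu.arch_capabilities' not found
#  -> delete "-arch_capabilities" from the flags line in cpu-models.conf
qm start 100   # repeat until the VM boots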
 

So far I have been running with cputype host and ignore_msrs = Y and have had no crashes yet.
I haven't tried the custom CPU with custom flags yet. I'll let it run a while longer and see what happens before attempting that.

EDIT
Indeed, I do have MSR errors; dmesg -wH shows:
Code:
[Feb 2 08:28] kvm_msr_ignored_check: 30 callbacks suppressed
[  +0.000002] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000010] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000007] kvm: kvm [16483]: ignored rdmsr: 0xc0010064 data 0x0
[  +0.000004] kvm: kvm [16483]: ignored rdmsr: 0xc0010065 data 0x0
[  +0.000003] kvm: kvm [16483]: ignored rdmsr: 0xc0010066 data 0x0
[  +0.000004] kvm: kvm [16483]: ignored rdmsr: 0xc0010067 data 0x0
[  +0.000003] kvm: kvm [16483]: ignored rdmsr: 0xc0010068 data 0x0
[  +0.000004] kvm: kvm [16483]: ignored rdmsr: 0xc0010069 data 0x0
[  +0.000004] kvm: kvm [16483]: ignored rdmsr: 0xc001006a data 0x0
[  +0.000003] kvm: kvm [16483]: ignored rdmsr: 0xc001006b data 0x0
[  +5.014793] kvm_msr_ignored_check: 324 callbacks suppressed
[  +0.000002] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000024] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000086] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000066] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016800] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000025] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000063] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000079] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016807] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000022] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +4.972356] kvm_msr_ignored_check: 426 callbacks suppressed
[  +0.000002] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000026] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000098] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000062] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016805] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000036] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000139] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000067] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016800] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000027] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +5.257669] kvm_msr_ignored_check: 396 callbacks suppressed
[  +0.000002] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000089] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016807] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000032] kvm: kvm [16483]: ignored rdmsr: 0xc001029b data 0x0
[  +0.000011] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.001619] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000071] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016772] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000025] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.029114] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +4.938296] kvm_msr_ignored_check: 429 callbacks suppressed
[  +0.000003] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000035] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000115] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000064] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016781] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000034] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
[  +0.000091] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000063] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.016774] kvm: kvm [16483]: ignored rdmsr: 0xc0010293 data 0x0
[  +0.000030] kvm: kvm [16483]: ignored rdmsr: 0xc001029a data 0x0
 
So I have been testing a little with `echo Y > /sys/module/kvm/parameters/ignore_msrs` and had no crashes for a while.
Then I reverted with `echo 0 > /sys/module/kvm/parameters/ignore_msrs` and confirmed that ignore_msrs = N.
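
(Presumably verified with something like:)

Code:
cat /sys/module/kvm/parameters/ignore_msrs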

And I created a new entry in `/etc/pve/virtual-guest/cpu-models.conf ` with the following inside:
Code:
cpu-model: 7940HS
    flags -hypervisor;-tsc_adjust;
    phys-bits host
    hv-vendor-id proxmox
    reported-model host

As @Snuupy said, I had to remove some flags because they were reported as unknown, namely arch_capabilities, tsc_deadline_timer, and tsc_known_freq.
I'm using this custom CPU now, and so far have had no crashes...
 
