Windows Server 2022 VM hangs on reboot after installing the Hyper-V role

milfb0ii

New Member
Jul 25, 2024
Hi,
___
My machine:
Minisforum NAB9
Processor
Intel® Core™ i9-12900HK Processor, 14 Cores/20 Threads
(24M Cache, up to 5.0 GHz)
Graphics
Intel® Iris® Xe Graphics
Memory
DDR4 8GB×2 Dual channel
Storage
M.2 2280 512GB
___
My Windows Server 2022 VM hangs on reboot after installing the Hyper-V role.

(screenshot: 1721935742809.png)

To shut the VM down I have to kill its process:


Code:
kill -9 <PID>
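A slightly cleaner escalation is to go through the Proxmox CLI first and only fall back to SIGKILL if the stop request times out (a sketch; VM ID 101 is taken from the logs below, and the pidfile path is the standard qemu-server location):

```shell
# Ask Proxmox to stop the VM cleanly first; only force-kill the KVM
# process if the stop request itself times out.
qm stop 101 --timeout 30 || {
    # qemu-server writes the KVM process ID here when the VM starts
    kill -9 "$(cat /var/run/qemu-server/101.pid)"
}
```

This keeps Proxmox's own state tracking consistent instead of leaving a stale "running" status behind.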

Nested virtualization is enabled:



Code:
root@pve:~# cat /sys/module/kvm_intel/parameters/nested
Y

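If that flag ever reads `N`, it can be enabled persistently via a modprobe option (a sketch for Intel hosts; all VMs must be stopped before the module can be reloaded):

```shell
# Persist nested virtualization for kvm_intel and reload the module.
echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel   # fails while any VM is still running
modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should now print Y
```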

CPU type is set to host.

All VirtIO drivers are installed correctly, and Windows updates have been applied.
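For reference, the relevant line in the VM config (an assumed excerpt of `/etc/pve/qemu-server/101.conf`; VM ID 101 taken from the logs below) would look like:

```
# excerpt from /etc/pve/qemu-server/101.conf (VM ID assumed from logs)
cpu: host
```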

___

The latest logs:

Code:
Jul 25 21:25:43 pve pvedaemon[12813]: <root@pam> end task UPID:pve:000043D6:00093AEE:66A2A6B7:qmstart:101:root@pam: OK
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 9/KVM/17472 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 4/KVM/17467 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 6/KVM/17469 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 10/KVM/17473 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 5/KVM/17468 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 8/KVM/17471 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 2/KVM/17465 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 3/KVM/17466 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 7/KVM/17470 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:43 pve kernel: x86/split lock detection: #AC: CPU 11/KVM/17474 took a split_lock trap at address: 0x7ee9d050
Jul 25 21:25:47 pve pvedaemon[17479]: starting vnc proxy UPID:pve:00004447:00093C8A:66A2A6BB:vncproxy:101:root@pam:
Jul 25 21:25:47 pve pvedaemon[1228]: <root@pam> starting task UPID:pve:00004447:00093C8A:66A2A6BB:vncproxy:101:root@pam:
Jul 25 21:27:50 pve pvedaemon[12813]: <root@pam> successful auth for user 'root@pam'
Jul 25 21:30:14 pve pvedaemon[1228]: <root@pam> end task UPID:pve:00004447:00093C8A:66A2A6BB:vncproxy:101:root@pam: OK
Jul 25 21:30:20 pve pvedaemon[1227]: <root@pam> starting task UPID:pve:000046EE:0009A733:66A2A7CC:vncshell::root@pam:
Jul 25 21:30:20 pve pvedaemon[18158]: starting termproxy UPID:pve:000046EE:0009A733:66A2A7CC:vncshell::root@pam:
Jul 25 21:30:20 pve pvedaemon[1228]: <root@pam> successful auth for user 'root@pam'
Jul 25 21:30:20 pve login[18161]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 21:30:20 pve systemd-logind[862]: New session 8 of user root.
Jul 25 21:30:20 pve systemd[1]: Started session-8.scope - Session 8 of User root.
Jul 25 21:30:20 pve login[18166]: ROOT LOGIN  on '/dev/pts/1'
Jul 25 21:30:57 pve systemd[1]: session-8.scope: Deactivated successfully.
Jul 25 21:30:57 pve systemd-logind[862]: Session 8 logged out. Waiting for processes to exit.
Jul 25 21:30:57 pve systemd-logind[862]: Removed session 8.
Jul 25 21:30:57 pve pvedaemon[1227]: <root@pam> end task UPID:pve:000046EE:0009A733:66A2A7CC:vncshell::root@pam: OK
Jul 25 21:31:09 pve pvedaemon[1228]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Jul 25 21:31:28 pve pvedaemon[1227]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Jul 25 21:37:02 pve pvedaemon[1227]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
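One detail in the log stands out: the repeated split_lock traps. On 12th-gen (Alder Lake) hosts the kernel's split-lock detection is known to stall KVM guests hard, since each trap briefly parks the offending vCPU, and Windows guests trigger it frequently. A possible workaround (untested here) is to relax the detection on the PVE host via the kernel command line:

```shell
# Append split_lock_detect=off to the kernel command line (GRUB-based
# PVE install; for a systemd-boot/ZFS install edit /etc/kernel/cmdline
# and run proxmox-boot-tool refresh instead).
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&split_lock_detect=off /' /etc/default/grub
update-grub
# The host must be rebooted for the change to take effect.
```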


Any idea how I can approach this problem?
 
I suspect the client CPU can't do nested virtualization. As a rule, that only works with server CPUs.
 
I suspect the client CPU can't do nested virtualization. As a rule, that only works with server CPUs.
No, I already have an inception setup running, with another Proxmox inside Proxmox and Debian on top of that. It works perfectly with this hardware. Once again it's bloody Windows...
 
PVE in PVE only works when the qemu CPU type is used. If you try to give your nested PVE the CPU type host, that VM won't start either.
 
PVE in PVE only works when the qemu CPU type is used. If you try to give your nested PVE the CPU type host, that VM won't start either.

I set the CPU type to host there as well, and it works flawlessly. Most consumer hardware can do nested virtualization; I even got it working on my old ThinkPad.

I've now tried the whole thing with WinSrv2019, and there it works without any problems. It seems to be some issue specific to the 2022 version of Windows Server.
I'll have to work with that for now.

Thanks anyway, bb
 
