[SOLVED] Nested Virtualization fails: KVM: entry failed, hardware error 0x7

GabrieleV

May 20, 2013
Hello,
I have a host running PVE 7.2 (I'll call it PHYSICAL) where I configured a guest running PVE 6.4 (NESTED).
Inside this nested PVE I configured a guest (GUESTNESTED).
Everything worked fine months ago (I use this nested virtualization only for testing upgrades).
I started GUESTNESTED yesterday and the VM doesn't start; it goes into the "internal-error" state.
On the NESTED PVE 6.4 host I got this in syslog:

Jul 22 12:11:10 crovirt01 pvedaemon[1392]: <root@pam> starting task UPID:crovirt01:00000802:00002571:62DA77BE:qmstart:901:root@pam:
Jul 22 12:11:10 crovirt01 pvedaemon[2050]: start VM 901: UPID:crovirt01:00000802:00002571:62DA77BE:qmstart:901:root@pam:
Jul 22 12:11:11 crovirt01 systemd[1]: Created slice qemu.slice.
Jul 22 12:11:11 crovirt01 systemd[1]: Started 901.scope.
Jul 22 12:11:11 crovirt01 systemd-udevd[2066]: Using default interface naming scheme 'v240'.
Jul 22 12:11:11 crovirt01 systemd-udevd[2066]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 22 12:11:11 crovirt01 systemd-udevd[2066]: Could not generate persistent MAC address for tap901i0: No such file or directory
Jul 22 12:11:11 crovirt01 kernel: [   96.833256] device tap901i0 entered promiscuous mode
Jul 22 12:11:11 crovirt01 kernel: [   96.845830] vmbr0: port 2(tap901i0) entered blocking state
Jul 22 12:11:11 crovirt01 kernel: [   96.845832] vmbr0: port 2(tap901i0) entered disabled state
Jul 22 12:11:11 crovirt01 kernel: [   96.846064] vmbr0: port 2(tap901i0) entered blocking state
Jul 22 12:11:11 crovirt01 kernel: [   96.846066] vmbr0: port 2(tap901i0) entered forwarding state
Jul 22 12:11:12 crovirt01 QEMU[2074]: KVM: entry failed, hardware error 0x7
Jul 22 12:11:12 crovirt01 QEMU[2074]: EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000f61
Jul 22 12:11:12 crovirt01 QEMU[2074]: ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
Jul 22 12:11:12 crovirt01 QEMU[2074]: EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
Jul 22 12:11:12 crovirt01 QEMU[2074]: ES =0000 00000000 0000ffff 00009300
Jul 22 12:11:12 crovirt01 QEMU[2074]: CS =f000 ffff0000 0000ffff 00009b00
Jul 22 12:11:12 crovirt01 QEMU[2074]: SS =0000 00000000 0000ffff 00009300
Jul 22 12:11:12 crovirt01 QEMU[2074]: DS =0000 00000000 0000ffff 00009300
Jul 22 12:11:12 crovirt01 QEMU[2074]: FS =0000 00000000 0000ffff 00009300
Jul 22 12:11:12 crovirt01 QEMU[2074]: GS =0000 00000000 0000ffff 00009300
Jul 22 12:11:12 crovirt01 QEMU[2074]: LDT=0000 00000000 0000ffff 00008200
Jul 22 12:11:12 crovirt01 QEMU[2074]: TR =0000 00000000 0000ffff 00008b00
Jul 22 12:11:12 crovirt01 QEMU[2074]: GDT=     00000000 0000ffff
Jul 22 12:11:12 crovirt01 QEMU[2074]: IDT=     00000000 0000ffff
Jul 22 12:11:12 crovirt01 QEMU[2074]: CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
Jul 22 12:11:12 crovirt01 QEMU[2074]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Jul 22 12:11:12 crovirt01 QEMU[2074]: DR6=00000000ffff0ff0 DR7=0000000000000400
Jul 22 12:11:12 crovirt01 QEMU[2074]: EFER=0000000000000000
Jul 22 12:11:12 crovirt01 QEMU[2074]: Code=00 66 89 d8 66 e8 e1 a3 ff ff 66 83 c4 0c 66 5b 66 5e 66 c3 <ea> 5b e0 00 f0 30 36 2f 32 33 2f 39 39 00 fc 00 ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
Jul 22 12:11:12 crovirt01 kernel: [   96.965907] set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.
Jul 22 12:11:12 crovirt01 pvedaemon[1392]: <root@pam> end task UPID:crovirt01:00000802:00002571:62DA77BE:qmstart:901:root@pam: OK
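Since the nested hypervisor is the failing layer, it may help to first confirm that nested virtualization is actually enabled and exposed at each level. A rough checklist (assuming an Intel CPU; on AMD the module is kvm_amd and the CPU flag is svm):

```shell
# On the PHYSICAL PVE host: nested virtualization must be enabled
cat /sys/module/kvm_intel/parameters/nested    # expect Y (or 1)

# Inside the NESTED PVE guest: the vmx flag must be visible, which
# requires the VM's CPU type to be "host" (or another type exposing vmx)
grep -m1 -o vmx /proc/cpuinfo

# As the last kernel line in the log suggests, more detail can be
# dumped on the NESTED host:
echo 1 > /sys/module/kvm_intel/parameters/dump_invalid_vmcs
```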

No changes were made to the configuration; I only upgraded PHYSICAL and NESTED in the past.

# pveversion (PHYSICAL)
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-1-pve)


# pveversion (NESTED)
pve-manager/6.4-15/af7986e6 (running kernel: 5.4.195-1-pve)

Any advice?
 
Hi,
sometimes different kernel versions lead to problems when doing nested virtualization. Maybe upgrading the Proxmox VE 6.4 install to kernel 5.11 helps? At least it's closer to 5.15 than 5.4 is.
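A sketch of what that opt-in kernel upgrade would look like on the NESTED host (package name assuming the 5.11 opt-in kernel series offered for PVE 6.4):

```shell
# On the NESTED PVE 6.4 host
apt update
apt install pve-kernel-5.11
reboot
# afterwards, verify the running kernel with: uname -r
```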
 
sometimes different kernel versions lead to problems when doing nested virtualization. Maybe upgrading the Proxmox VE 6.4 install to kernel 5.11 helps? At least it's closer to 5.15 than 5.4 is.
I use this setup to track a production cluster, so I have to mimic the installed packages. I will test the upgrade from 6.x to 7.x and see...
In the meantime, I have disabled KVM acceleration on the Linux NESTED guests to get them started.
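In case it helps others: KVM acceleration can be toggled per VM from the CLI on the nested host, e.g. for GUESTNESTED with VMID 901 (the ID from the log above):

```shell
# Disable KVM hardware acceleration for VM 901; the guest then runs
# under plain QEMU emulation (much slower, but it boots)
qm set 901 --kvm 0

# Re-enable later with:
# qm set 901 --kvm 1
```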
 
I've upgraded the NESTED PVE hosts; they now run 7.x like the PHYSICAL hosts.
The NESTED guests now work with KVM acceleration enabled.
So can we say this is a regression for a nested PVE running 6.x under a physical 7.x host?
 
