Nested virtualization suddenly doesn't work

PeterMarcusH.

Member
Apr 5, 2019
Hello guys :),

At the moment I'm hosting a Windows VM with TeamCity and Docker running on it. It's for my school project, and I was disappointed to find out that the VM kept crashing at launch.
- I tried restoring it from a backup; same problem.
- I tried creating a new VM and enabling Windows Hyper-V (required for nested virtualization); the same error occurs.

From this I concluded that something must be wrong with my Proxmox virtualization. What puzzles me the most is that it had been working just fine for the last month or two... how? Furthermore, whenever I try to enable the CPU flag (hv-evmcs) I can't start the VM. Is this normal?
 
Please post a 'pvereport -v'. Did you run any updates recently?

Aside, do you run something besides docker, that would need nested virtualization?
 

- Yes, I frequently update. I have the newest 6.2-4 installed.
- Nothing besides Docker needs nested virtualization, no. But just enabling Windows' Hyper-V feature breaks the VM.

- The 'pvereport -v' output is very long; the content is too long to be posted here. Do you want a pastebin link?
 
Proxmox VE 6.2 comes with QEMU 5.0 and kernel 5.4. Both will have changed things. Try to set the CPU type to just 'host', since all flags of that host type should be passed through anyway. You can also try to boot into kernel 5.3 or roll back the pve-qemu-kvm package and see if that makes a difference.
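If it helps, here is a hedged sketch of that rollback. The version string below is a placeholder, not the exact build on this host; check what apt actually offers first:

```shell
# List available builds of the QEMU package and any installed 5.3
# kernels before pinning anything.
apt list -a pve-qemu-kvm
apt list --installed 'pve-kernel-5.3*'

# Example rollback: substitute a real version from the listing above.
# apt install pve-qemu-kvm=<older-version>
```

After a package rollback, the VM needs a full stop/start (not just a reboot inside the guest) so the older QEMU binary is actually used.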
 

I've always chosen 'host' for the CPU type. The attached image is a snip from the build server VM.
 

Attachments

  • vm_ifno.PNG (43.9 KB)
I haven't followed the known issues, no. However, the problem is not starting the VM, but rather that enabling Windows Hyper-V results in a crash. Does the CPU flag 'hv-evmcs' need to be on for virtualization to work? Because then I cannot start the VM.
 
When the crash happens, do you see anything in the host's logs (e.g. 'dmesg', 'journalctl -e')? Does the task log show more info for the failed start when you double-click it in the GUI?

As for the 'hv-evmcs' flag: While it definitely should work on Intel hosts, do you really require it? Otherwise, at least for testing things one at a time, it might be best to leave it disabled for now. For Docker containers running as L2, the performance boost is probably negligible anyway. In general, just using 'host' should enable nested virtualization for the guest, no extra flags necessary.

Also, the output of grep -R "" /sys/module/kvm_intel/parameters and grep -R "" /sys/module/kvm/parameters might be useful, to see if nested virtualization is even enabled on the host.
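For completeness: if that grep were to show nested:N, nested virtualization can be enabled persistently with a modprobe option file (a minimal sketch; the filename is arbitrary):

```
# /etc/modprobe.d/kvm-nested.conf
# Load kvm_intel with nested VMX support enabled. Reload the module
# (modprobe -r kvm_intel && modprobe kvm_intel) or reboot to apply.
options kvm_intel nested=Y
```

On AMD hosts the equivalent module is kvm_amd, with the same parameter name.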
 
This is the output from the grep command
grep -R "" /sys/module/kvm_intel/parameters

Code:
/sys/module/kvm_intel/parameters/enlightened_vmcs:N
/sys/module/kvm_intel/parameters/eptad:Y
/sys/module/kvm_intel/parameters/flexpriority:Y
/sys/module/kvm_intel/parameters/vmentry_l1d_flush:cond
/sys/module/kvm_intel/parameters/ple_window_shrink:0
/sys/module/kvm_intel/parameters/ept:Y
/sys/module/kvm_intel/parameters/ple_gap:128
/sys/module/kvm_intel/parameters/emulate_invalid_guest_state:Y
/sys/module/kvm_intel/parameters/pml:Y
/sys/module/kvm_intel/parameters/enable_apicv:N
/sys/module/kvm_intel/parameters/enable_shadow_vmcs:Y
/sys/module/kvm_intel/parameters/ple_window_max:4294967295
/sys/module/kvm_intel/parameters/ple_window:4096
/sys/module/kvm_intel/parameters/pt_mode:0
/sys/module/kvm_intel/parameters/nested:Y
/sys/module/kvm_intel/parameters/vnmi:Y
/sys/module/kvm_intel/parameters/vpid:Y
/sys/module/kvm_intel/parameters/preemption_timer:Y
/sys/module/kvm_intel/parameters/ple_window_grow:2
/sys/module/kvm_intel/parameters/dump_invalid_vmcs:N
/sys/module/kvm_intel/parameters/fasteoi:Y
/sys/module/kvm_intel/parameters/unrestricted_guest:Y
/sys/module/kvm_intel/parameters/nested_early_check:N

Other grep:
grep -R "" /sys/module/kvm/parameters
Code:
/sys/module/kvm/parameters/force_emulation_prefix:N
/sys/module/kvm/parameters/halt_poll_ns_shrink:0
/sys/module/kvm/parameters/report_ignored_msrs:N
/sys/module/kvm/parameters/enable_vmware_backdoor:N
/sys/module/kvm/parameters/halt_poll_ns:200000
/sys/module/kvm/parameters/kvmclock_periodic_sync:Y
/sys/module/kvm/parameters/halt_poll_ns_grow_start:10000
/sys/module/kvm/parameters/ignore_msrs:Y
/sys/module/kvm/parameters/nx_huge_pages_recovery_ratio:60
/sys/module/kvm/parameters/tsc_tolerance_ppm:250
/sys/module/kvm/parameters/min_timer_period_us:200
/sys/module/kvm/parameters/vector_hashing:Y
/sys/module/kvm/parameters/halt_poll_ns_grow:0
/sys/module/kvm/parameters/pi_inject_timer:0
/sys/module/kvm/parameters/nx_huge_pages:Y
/sys/module/kvm/parameters/lapic_timer_advance_ns:-1

And it's not actually a VM crash. It's Windows trying to repair itself after the Hyper-V installation. So, sorry on my part for not being clear enough. When I press start, the VM boots but can't load Windows, since it's broken.
 
And thanks for clearing that up! :)
 
*Update*
I found out that changing the CPU type from 'host' to any other CPU type allows the VM to install Docker and Hyper-V. Why is this the case? I need the VM to use CPU type 'host', but this no longer works.
 
Today I upgraded with 'apt-get dist-upgrade' from pve-no-subscription and got the same issue.
VMs with CPU type 'host' and flags=+hv-evmcs can no longer be started. See the error below.
Some time ago I had installed kernel 5.4.27-1-pve, so I booted back into that kernel. The VM still cannot be booted; same error.
This seems to be a non-kernel-related issue: after clearing the mentioned flag, the VM starts with CPU type 'host' on both kernels.
Nested virtualization seems to be OK; Docker started successfully, but it is quite slow.

Code:
kvm: error: failed to set MSR 0x48d to 0x7f00000016
kvm: /build/pve-qemu/pve-qemu-kvm-5.0.0/target/i386/kvm.c:2695: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
TASK ERROR: start failed: QEMU exited with code 1
 
Hmm. So you reverted back to an older kernel and it worked fine?
 
"hv-evmcs" falg have to be removed, after that VM starts with cpu host.
With this config VM is booting on 5.4.x.

agent: 1
balloon: 0
bootdisk: scsi0
cores: 4
cpu: host
memory: 8192
name: Docker1
net0: virtio=8A:B5:07:A4:5B:5D,bridge=vmbr0
numa: 0
ostype: win10
sata0: none,media=cdrom
sata1: none,media=cdrom
scsi0: local-zfs:vm-101-disk-0,cache=writeback,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=97c29fff-98ce-4a90-8dd4-1d3ad9f4d2a1
sockets: 1
vmgenid: f4c098c2-dbbf-43ad-a4ee-424311bdbbde
 
I was experiencing the same error "kvm: error: failed to set MSR 0x48b to 0x137bff00000000" with the latest 6.2 kernel and CPU type "host". Setting the CPU to Skylake-Server allowed me to boot the machines. After reading the thread, I narrowed the problem down to the "-pcid" flag.
I also have "mitigations=off" on my kernel command line, so I am not sure if this has any connection. Setting pcid back to default allows me to boot with CPU type "host" again.
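For anyone wanting to try the same from the CLI, a hedged sketch using qm (VMID 101 is taken from the config posted earlier in the thread; adjust to your own VM). Multiple entries in the flags list are separated by semicolons:

```shell
# Drop any explicit flag overrides so 'pcid' follows the model default:
qm set 101 --cpu host

# Or keep 'host' but explicitly disable a single flag, e.g. hv-evmcs:
qm set 101 --cpu 'host,flags=-hv-evmcs'
```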
 
