Nested virtualization: issues enabling Hyper-V in a Windows Server VM

Jun 30, 2023
Our software development team needs a Windows Server VM to host AppVeyor, a CI/CD platform that requires Hyper-V to dynamically spin up and tear down Windows guest VMs for build and test pipelines. I've run many Windows Server VMs on this Proxmox host without issue, but this is my first time needing to enable Hyper-V inside a VM (nested virtualization). After enabling the Hyper-V role in Windows Server and initiating the required reboot, the VM enters a boot loop. It attempts to restart several times before dropping into Windows Recovery Mode, from which it cannot recover. The VM is essentially bricked at that point.

I've reproduced this twice. My current workaround is snapshotting before enabling Hyper-V so I can roll back, but I have not been able to get a working Hyper-V environment.
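For reference, the snapshot workaround is just this (vmid 201 from the config below; the snapshot name is arbitrary):

```shell
# Snapshot before enabling the Hyper-V role so the VM can be rolled back
qm snapshot 201 pre-hyperv --description "before enabling Hyper-V role"

# After the boot loop: stop, roll back, and start again
qm stop 201
qm rollback 201 pre-hyperv
qm start 201
```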

Here are the hardware specs and the VM config:
PC: Dell T340
CPU: E-2278G
Ram: 64 GB
Proxmox: 9.1.6 on the production-ready repository
Kernel: 6.17.4-2-pve

agent: 1
allow-ksm: 0
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;ide0;net0
cores: 8
cpu: host,flags=+nested-virt
efidisk0: local-zfs:vm-201-disk-0,efitype=4m,ms-cert=2023w,pre-enrolled-keys=1,size=1M
machine: pc-q35-10.1
memory: 16384
numa: 0
sockets: 1
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-201-disk-1,discard=on,iothread=1,size=64G,ssd=1
tpmstate0: local-zfs:vm-201-disk-2,size=4M,version=v2.0
net0: virtio=BC:24:11:67:B6:DE,bridge=vmbr0,firewall=1

Has anyone successfully run Hyper-V inside a Proxmox VM with a similar config? Any guidance would be greatly appreciated.
 
Please try specifying the following.

*Please remove the `+nested-virt` flag.

Code:
qm set <vmid> -args '-cpu host,hv_passthrough,-hypervisor,level=30,+vmx'

*If you don't configure it, it will loop during the reboot after adding roles like RDS or Hyper-V.
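Applied to the config above, that would be something like this sketch (vmid 201 from the original post):

```shell
# Reset the cpu line to plain host, dropping flags=+nested-virt
qm set 201 --cpu host

# Pass the CPU flags to QEMU directly: hv_passthrough exposes the host's
# Hyper-V enlightenments, -hypervisor clears the hypervisor CPUID bit so the
# guest does not see it is virtualized, and +vmx exposes VT-x for nesting
qm set 201 --args '-cpu host,hv_passthrough,-hypervisor,level=30,+vmx'
```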
 


Well, it's not boot looping anymore, but it still doesn't work. With those args the boot now hangs after Hyper-V is enabled; it just sits there with the progress bar at 90%. Is it because I am using a q35 machine with OVMF rather than SeaBIOS?
 
Try removing `level=30`. Since my CPU is different, it might not behave the same way.

If that still doesn't work, undo the changes and use x86-64-v2 if it runs on that architecture.
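As a sketch, undoing the args and falling back to the v2 model (vmid is a placeholder):

```shell
# Remove the custom args line added earlier
qm set <vmid> --delete args

# Switch to the generic x86-64-v2 model (the -aes variant also exposes AES-NI)
qm set <vmid> --cpu x86-64-v2-aes
```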

I am running Windows Server 2025 Hyper-V on a Core Ultra 265K with the following configuration for verification purposes.

Code:
agent: 1
allow-ksm: 0
args: -cpu host,hv_passthrough,-hypervisor,level=30,+vmx
balloon: 0
bios: ovmf
boot: order=ide0;ide2
cores: 4
cpu: host
efidisk0: zoi-all:vm-1171-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: none,media=cdrom
ide2: none,media=cdrom
machine: pc-q35-9.0
memory: 24576
meta: creation-qemu=9.0.0,ctime=1719914219
name: w2k25-eval1
numa: 0
ostype: win11
parent: default
scsi0: zoi-all:vm-1171-disk-1,iothread=1,size=40G
scsihw: virtio-scsi-single
smbios1: uuid=
sockets: 1
tags: w2k25
vmgenid:

*Since Coffee Lake's CPUID only goes up to 22, specifying 30 might cause it to fail to boot if it isn't ignored.
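One way to check both values on the Proxmox host (assumes the `cpuid` package is installed for the second command):

```shell
# Nested virtualization must be enabled in KVM (should print Y on Intel hosts)
cat /sys/module/kvm_intel/parameters/nested

# eax of CPUID leaf 0 is the maximum basic leaf (0x16 = 22 on Coffee Lake)
cpuid -1 -l 0 -r
```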

Removing the level=30 didn't work, but changing the CPU to x86-64-v2-aes did. Which is interesting, because I had tried that before and it didn't work, but this time it did. I seem to be up and running now. Thank you for your help.