balloon: 0
bios: ovmf
boot: order=sata0
cores: 4
efidisk0: Medium1:vm-101-disk-1,size=4M
hostpci0: 0000:09:00,pcie=1,x-vga=1
hostpci1: 0000:0b:00.4
machine: pc-q35-6.0
memory: 24000
name: Windows
net0: e1000=9A:D5:34snip:F3,bridge=vmbr0,firewall=1,queues=2
numa: 0
onboot: 1
ostype: win10
sata0: sdc1:vm-101-disk-0,backup=0,cache=writethrough,discard=on,size=100G
sata1: /dev/sda
sata2: /dev/sdd
scsihw: virtio-scsi-pci
smbios1: uuid=6snipac0b3
sockets: 1
tablet: 0
usb1: host=1-2,usb3=1
usb2: host=7-2.2,usb3=1
usb3: host=7-2.3,usb3=1
usb4: host=7-1
vga: none
vmgenid: 4707f4snip027d
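Note that the config above has no cpu: line, so this VM is running with the kvm64 default. As a minimal sketch (assuming VMID 101, taken from the disk names), the CPU type being discussed can be inspected and switched with qm on the Proxmox host:

qm config 101 | grep -i '^cpu'    # no output means the kvm64 default is in use
qm set 101 --cpu host             # expose the host CPU model, including the virtualization extensions
qm set 101 --cpu kvm64            # back to the generic model discussed below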
And I believe exactly there lies the issue. When enabling CPU type 'host', the L1 main OS runs virtualized, because it finds the necessary CPU extensions and you have Hyper-V enabled. This of course reduces performance, since nested virt always adds extra overhead. When you then switch to kvm64, it doesn't make nested virt faster; it disables it entirely, which causes the L1 OS to realize it can't enable Hyper-V, so it runs non-virtualized and thus faster.

Are you sure? I'm running Windows with Hyper-V enabled. When Hyper-V is enabled, the host OS also runs on top of the Hyper-V virtualization layer, just as guest operating systems do.
Try starting an actual Hyper-V VM on a nested setup with the CPU type set to kvm64.
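As a quick host-side sanity check (a sketch, assuming an AMD host given the kvm-amd reference below, and VMID 101), you can verify whether nested virtualization is even offered to the guest:

cat /sys/module/kvm_amd/parameters/nested    # 1/Y means nested virt is available to guests, 0/N means it is not
qm config 101 | grep -i '^cpu'               # shows whether the VM uses cpu: host or the kvm64 default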
To disable nested virtualization on the host entirely, create /etc/modprobe.d/kvm_amd.conf with options kvm-amd nested=N in it, then regenerate your initramfs with update-initramfs -u and reboot.
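Spelled out as shell commands (run as root on the Proxmox host; file name and option value taken verbatim from the step above), that would look roughly like:

echo "options kvm-amd nested=N" > /etc/modprobe.d/kvm_amd.conf   # tell kvm_amd not to expose nested virt
update-initramfs -u                                              # rebuild the initramfs so the option is applied at boot
reboot
cat /sys/module/kvm_amd/parameters/nested                        # after the reboot: should now report 0/N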