Yet Another Nested Virtualization Question

elBradford

I have been trying to get a nested VM working for a few days with little success.

Host is Proxmox VE 8.

Steps I've taken:

Confirmed the following:

Bash:
# dmesg | grep -E "DMAR|IOMMU"
[    0.000000] ACPI: DMAR 0x0000000078EB1D98 000168 (v01 SUPERM SMCI--MB 00000001 INTL 20091013)
[    0.000000] ACPI: Reserving DMAR table memory at [mem 0x78eb1d98-0x78eb1eff]
[    0.000000] DMAR: IOMMU enabled
[    0.000000] DMAR: Host address width 46
[    0.000000] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0
[    0.000000] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
[    0.000000] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1
[    0.000000] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
[    0.000000] DMAR: RMRR base: 0x0000007b248000 end: 0x0000007b257fff
[    0.000000] DMAR: ATSR flags: 0x0
[    0.000000] DMAR: RHSA base: 0x000000c7ffc000 proximity domain: 0x0
[    0.000000] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x1
[    0.000000] DMAR-IR: IOAPIC id 3 under DRHD base  0xfbffc000 IOMMU 0
[    0.000000] DMAR-IR: IOAPIC id 1 under DRHD base  0xc7ffc000 IOMMU 1
[    0.000000] DMAR-IR: IOAPIC id 2 under DRHD base  0xc7ffc000 IOMMU 1
[    0.000000] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000
[    0.000000] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.000000] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.752744] DMAR: [Firmware Bug]: RMRR entry for device 83:00.2 is broken - applying workaround
[    0.752750] DMAR: No SATC found
[    0.752754] DMAR: dmar0: Using Queued invalidation
[    0.752768] DMAR: dmar1: Using Queued invalidation
[    0.766444] DMAR: Intel(R) Virtualization Technology for Directed I/O

# cat /sys/module/kvm_intel/parameters/nested
Y

AFAIK, these are the prerequisites for nested virtualization.
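For reference, in case anyone else lands here with nested=N: my understanding is that the usual fix is a modprobe option along these lines (the file name is just a convention, and reloading kvm_intel requires all VMs to be stopped first):

Bash:
# Persist nested virtualization for Intel hosts (file name is arbitrary)
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# Reload the module (all VMs must be stopped) or simply reboot
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should now print Y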

Here is my VM config:

Code:
balloon: 2048
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: local-nvme0:vm-999-disk-0,efitype=4m,size=4M
ide2: local:iso/media.iso,media=cdrom,size=5890M
kvm: 1
machine: q35
memory: 16384
meta: creation-qemu=8.0.2,ctime=1699888831
name: argonaut
net0: virtio=36:AC:30:3A:21:E3,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-nvme0:vm-999-disk-1,backup=0,size=128G
smbios1: uuid=c57956c9-a0b1-4ca6-a718-d244128caae8
sockets: 1
vga: qxl
vmgenid: 213a3889-2056-483e-9612-421970f2d12e

Everything I've read so far indicates that these settings should expose vmx to the guest. However, when I cat /proc/cpuinfo inside the guest, there are no vmx flags: it reports the same CPU model as the host, but the flags line is shorter, and the vmx flags section is missing entirely.

What could I be missing?
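In case it helps anyone reproduce this, here are the exact checks I'm running (VMID 999 as in the config above; qm showcmd is the stock Proxmox CLI, the grep patterns are just mine):

Bash:
# On the host: confirm Proxmox really starts this VM with "-cpu host"
qm showcmd 999 | tr ' ' '\n' | grep -A1 '^-cpu'

# Inside the guest: count vmx/svm occurrences (0 = nothing passed through)
grep -cE 'vmx|svm' /proc/cpuinfo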

I'm hoping that once I can get vmx to pass through, I can also use vIOMMU as discussed here: https://forum.proxmox.com/threads/proxmox-ve-8-0-beta-released.128677/#post-563099
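For what it's worth, my reading of that thread is that once vmx shows up in the guest, vIOMMU should just be an extra option on the machine line, roughly like this (syntax as I understand it from the 8.0 beta announcement, so treat it as unverified):

Code:
machine: q35,viommu=intel

or equivalently via the CLI: qm set 999 --machine q35,viommu=intel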
 
Not a silly question. The host has 4x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz. Here's the output of my cpuinfo:

Bash:
❯ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
stepping        : 1
microcode       : 0xb000040
cpu MHz         : 2902.455
cache size      : 35840 KB
physical id     : 0
siblings        : 28
core id         : 0
cpu cores       : 14
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 20
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit mmio_stale_data
bogomips        : 4800.00
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

... same thing 55 more times

I believe that, together with the dmesg output in my OP, demonstrates that virtualization is working on the host, but something is still missing to let it pass through to the guest.
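One more check I've been doing, in case the running process predates a config change (the pidfile path is just what I see under /var/run/qemu-server/ on my hosts):

Bash:
# On the host: which CPU model was the *running* QEMU process actually started with?
# A VM that was only rebooted (not fully stopped and started) keeps its old command line.
ps -o args= -p "$(cat /var/run/qemu-server/999.pid)" | tr ' ' '\n' | grep -A1 '^-cpu'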
 
On my other Proxmox server, running a consumer ASRockRack board with an AMD Ryzen 7 3800X processor, I can get SVM/virtualization passed through to a VM with the same settings. Any advice on what else to check on my Supermicro board with Intel Xeon(R) CPU E5-2680 v4 processors?
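For the record, the nested check I'm comparing between the two boxes differs only in the module name, since it follows the CPU vendor:

Bash:
# AMD host (Ryzen 7 3800X)
cat /sys/module/kvm_amd/parameters/nested     # prints 1 (or Y) when enabled
# Intel host (Xeon E5-2680 v4)
cat /sys/module/kvm_intel/parameters/nested   # prints Y when enabled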
 
