I've spent the better part of the last 24 hours trying to set up some VMs from backup on Proxmox 7.1-4. I was upgrading from an older version.
Relevant hardware:
Motherboard: Supermicro X10DRi-T (BIOS: 3.4a)
CPU(s): Intel Xeon E5-2698v3
Hardware pass-through worked great until I performed a BIOS update (I was on BIOS 3.1 at the time) and upgraded to Proxmox 7.1-4.
I enabled VT-d in the BIOS. I added
Code:
intel_iommu=on
to
Code:
/etc/default/grub
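For context, this is roughly what the relevant line looked like after my edit (a sketch; "quiet" was already there, and the rest of my file is the stock Proxmox default):

```
# /etc/default/grub -- sketch of the edited line; intel_iommu=on is the addition
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
```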
I then ran
Code:
update-grub
Code:
update-initramfs -u -k all
and even
Code:
proxmox-boot-tool refresh
Nothing is making the error go away.
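For what it's worth, here's the sanity check I'd use to confirm the parameter actually reached the kernel (the vmlinuz path and root device in the sample string below are made up; on the live host you'd check the real /proc/cmdline):

```shell
# Sketch: confirm intel_iommu=on made it onto the running kernel's command line.
# On the host itself: cmdline="$(cat /proc/cmdline)"
cmdline='BOOT_IMAGE=/boot/vmlinuz-5.13.19-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on'
case "$cmdline" in
  *intel_iommu=on*) echo "intel_iommu=on present" ;;
  *)                echo "intel_iommu=on MISSING" ;;
esac
```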
If I run
Code:
dmesg | grep -e DMAR -e IOMMU
I get the following output, which my limited experience suggests means the IOMMU should be on and working... but it's not:
Code:
[ 0.012983] ACPI: DMAR 0x0000000079F573F8 000170 (v01 ALASKA A M I 00000001 INTL 20091013)
[ 0.013036] ACPI: Reserving DMAR table memory at [mem 0x79f573f8-0x79f57567]
[ 3.444698] DMAR: Host address width 46
[ 3.444701] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0
[ 3.444709] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020de
[ 3.444714] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1
[ 3.444719] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020de
[ 3.444723] DMAR: RMRR base: 0x0000007bc18000 end: 0x0000007bc27fff
[ 3.444726] DMAR: ATSR flags: 0x0
[ 3.444729] DMAR: RHSA base: 0x000000c7ffc000 proximity domain: 0x0
[ 3.444731] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x1
[ 3.444735] DMAR-IR: IOAPIC id 3 under DRHD base 0xfbffc000 IOMMU 0
[ 3.444738] DMAR-IR: IOAPIC id 1 under DRHD base 0xc7ffc000 IOMMU 1
[ 3.444741] DMAR-IR: IOAPIC id 2 under DRHD base 0xc7ffc000 IOMMU 1
[ 3.444743] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000
[ 3.444746] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[ 3.444747] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[ 3.445518] DMAR-IR: Enabled IRQ remapping in xapic mode
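One thing I noticed (assuming I understand the Intel IOMMU driver correctly, which I may not): when intel_iommu=on actually takes effect, dmesg is supposed to contain a "DMAR: IOMMU enabled" line early in boot, and that line is absent from my output above. A self-contained sketch of that check against a captured excerpt:

```shell
# Hedged assumption: a kernel that enabled the Intel IOMMU logs
# "DMAR: IOMMU enabled" in early boot. Grep a captured excerpt for it.
excerpt='[    0.012983] ACPI: DMAR 0x0000000079F573F8 000170
[    3.444698] DMAR: Host address width 46'
if printf '%s\n' "$excerpt" | grep -q 'DMAR: IOMMU enabled'; then
  echo "IOMMU enabled line found"
else
  echo "IOMMU enabled line absent"   # what my log above would produce
fi
```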
In addition to this, I can't seem to get any vfio modules to load. If I add
Code:
vfio-pci
to
Code:
/etc/modules
and run
Code:
update-initramfs -u -k all
then reboot, any script I write to override the default kernel driver doesn't work properly. It just doesn't load any driver at all.
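For reference, the passthrough checklists I've seen list four modules rather than just one; this is a sketch of what I understand /etc/modules is supposed to contain (I haven't confirmed all four are required on 7.x):

```
# /etc/modules -- sketch, per the usual PCI-passthrough guides
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```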
The real kicker here is that I did the EXACT SAME THING on an AMD EPYC server and EVERYTHING went off without a hitch. Passed a ton of hardware through. No fuss. Worked great.
So why did something that previously worked fine suddenly stop working after a software update? At this point I'm wondering if I can just pass the devices through using iSCSI or virtio and get the same result, and let the Proxmox kernel handle the PCIe device.