Still struggling to get IOMMU working on either of 2 older servers, please help if you can

Jack Freeman

New Member
May 28, 2019
Hi folks, I've RTFM'd till I'm going blind: added the required lines to GRUB, updated it, added the required modules, and ran dmesg, which gave output. When I try to pass through a PCI audio device, one machine shows IOMMU as working in its setup but refuses to boot with the PCI device attached on any setting (tried them all), and the other simply shows IOMMU as not enabled in the GUI but says it is installed and working according to the dmesg output!! Any suggestions where to go from here? The servers are an HP DL380 G7 with Xeons and a FirePro GPU, and a Supermicro with an AMD Opteron CPU and an nv300 Quadro GPU.
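
For reference, this is roughly what I mean by "added the required lines": a sketch of the usual Proxmox preparation, not my exact config, and which iommu flag applies depends on the CPU in each box.
Code:
# /etc/default/grub: pick the line matching the CPU vendor
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"     # Intel box (the DL380 G7)
#GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"      # AMD box (the Opteron Supermicro)

# apply the new kernel command line
update-grub

# /etc/modules: the vfio modules usually loaded for passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd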
 
Hello,

Your post lacks information: the config of an example VM where IOMMU is not working (qm config <problem vmid>), the dmesg log, and the IOMMU groups. Have a read of https://heiko-sieger.info/iommu-groups-what-you-need-to-consider/; that article shows how to work out which device belongs to which IOMMU group so you can include that in your reply. Remember to wrap diagnostic information in code tags.
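
For example, assuming a hypothetical VMID of 100 for the problem VM:
Code:
# replace 100 with the VMID of the VM that fails to start
qm config 100
Paste the output between [CODE] and [/CODE] tags in your reply.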

A few caveats from my experience:

  1. Only the Q35 chipset inside the guest properly supports device passthrough
  2. You'll need to enable OVMF (UEFI) BIOS for the guest; AFAIK, legacy BIOS doesn't work (see the sketch below)
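
As a rough illustration of those two points, the config of a passthrough-capable guest would contain lines along these lines (VMID 100 and the disk/device entries below are only placeholders):
Code:
# excerpt of what "qm config 100" could look like for such a guest
machine: q35
bios: ovmf
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 01:00,pcie=1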

Regards
 
Bingo, that last comment answers all my questions: both machines are '09-era and legacy BIOS only, sadly! Ho hum, I'm here to learn, so it's all good info, if a little deflating for my ultimate goals. Do you think I might have better success with USB passthrough on older hardware? I'm only really looking for audio over noVNC. I realise Windows Remote Desktop does audio redirection, but mostly I work in Linux, so I'll need to go down the PulseAudio/virtual-cables route there?
 
I've never tested this, and I'm not sure why you would want to pass a USB device through?

Although USB sits behind the PCI bus (correct me if I'm wrong here), a USB device itself isn't a PCI device, so you're probably going to need to pass the USB root hub through to the VM.

Also, you still haven't included any diagnostic information.
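
For what it's worth, attaching a single USB device (rather than the whole controller) is usually done like this; the VMID and the vendor:product ID below are placeholders you would replace with your own values from lsusb:
Code:
# find the vendor:product ID of the USB audio dongle
lsusb

# attach it to the VM (100 and 0d8c:0014 are made-up examples)
qm set 100 -usb0 host=0d8c:0014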
 
USB audio dongles are cheap as chips and would allow easier audio passthrough for Linux, plus audio without remote desktop (again, because Linux). Remote desktop has always been a stumbling block between Linux and Windows; I always end up resorting to TeamViewer, but that doesn't do audio either!!
As for your request for greater detail: your 'older hardware' comment sounded fairly final and I do not wish to waste anyone's time here. I had diverted over to focusing on mastering ZFS and general Proxmox filesystem commands, and as a consequence I have entirely broken my array here several times (did I say I was a total noob? just askin'!), so I have fully reinstalled BOTH sets of installations. But if you feel it's worth pursuing, I will gladly provide greater detail if you can allow me a day or so to catch up with where I left off on that topic (bear of little brain, an 8-bit guy in a 64-bit world!). Would it be too much to ask for the dmesg instructions I will need? I have yet to familiarise myself with its commands and rarely used it before this year.
Thanks again for your patience and assistance, you are most kind.
 
OK, so here's the output from dmesg | grep -e DMAR -e IOMMU
Supermicro, AMD Opteron, 32G DDR2 ECC:

Code:
root@superone:~# dmesg | grep -e DMAR -e IOMMU
[    0.000000] DMAR: IOMMU enabled
[    1.486111] PCI-DMA: using GART IOMMU.
[    1.486122] PCI-DMA: Reserving 1024MB of IOMMU area in the AGP aperture
root@superone:~#


and from its twin sister with the default IOMMU BIOS setting:


Code:
root@supertwo:~# dmesg | grep -e DMAR -e IOMMU
[    0.000000] AGP: Please enable the IOMMU option in the BIOS setup
[    1.478168] PCI-DMA: using GART IOMMU.
[    1.478171] PCI-DMA: Reserving 64MB of IOMMU area in the AGP aperture
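
(Side note, and an assumption worth verifying: the "PCI-DMA: using GART IOMMU" lines refer to the old AGP GART remapping rather than the AMD-Vi IOMMU that passthrough needs, and DMAR is the Intel VT-d interface, so on the Opteron twins it may be more telling to grep for AMD-Vi directly.)
Code:
# check whether the AMD IOMMU (AMD-Vi) actually initialised on the Opteron nodes
dmesg | grep -i -e amd-vi -e amd_iommu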


Will grab the dmesg output from the DL380 as soon as it's next booted.
Thanks again for your patience.
 
OK, the output from dmesg on the DL380 G7 (16G DDR3) is thus:

Code:
root@dl380:~# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x00000000CF62FE80 000172 (v01 HP     ProLiant 00000001 \xffffffd2?   0000162E)
[    0.000000] DMAR-IR: This system BIOS has enabled interrupt remapping
[    1.093329] DMAR: Host address width 39
[    1.093330] DMAR: DRHD base: 0x000000d7ffe000 flags: 0x1
[    1.093350] DMAR: dmar0: reg_base_addr d7ffe000 ver 1:0 cap c90780106f0462 ecap f0207e
[    1.093352] DMAR: RMRR base: 0x000000cf7fc000 end: 0x000000cf7fdfff
[    1.093353] DMAR: RMRR base: 0x000000cf7f5000 end: 0x000000cf7fafff
[    1.093355] DMAR: RMRR base: 0x000000cf63e000 end: 0x000000cf63ffff
[    1.093356] DMAR: ATSR flags: 0x0

So to my eyes this is a case for those instructions from the link above^^. Will give it my fullest attention fresh Monday morning; am all just a bit Sunday afternoon right now!
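
If I've read that article right, listing which device landed in which IOMMU group comes down to something like the loop below (a sketch of the commonly used one-liner, not lifted from the article verbatim):
Code:
# print each IOMMU group and the devices assigned to it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done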
 
Found this using the lspci -v command.

Output from one of the Supermicro twins:
Code:
00:0a.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0, NUMA node 0
        Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
        Capabilities: [40] Subsystem: NVIDIA Corporation MCP55 PCI Express bridge
        Capabilities: [48] Power Management version 2
        Capabilities: [50] MSI: Enable- Count=1/2 Maskable- 64bit+
        Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
        Capabilities: [80] Express Root Port (Slot+), MSI 00
        Capabilities: [100] Virtual Channel
        Kernel driver in use: pcieport
        Kernel modules: shpchp

00:0f.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0, NUMA node 0
        Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
        I/O behind bridge: 0000e000-0000efff
        Memory behind bridge: fd000000-febfffff
        Prefetchable memory behind bridge: 00000000f6000000-00000000fbffffff
        Capabilities: [40] Subsystem: NVIDIA Corporation MCP55 PCI Express bridge
        Capabilities: [48] Power Management version 2
        Capabilities: [50] MSI: Enable- Count=1/2 Maskable- 64bit+
        Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
        Capabilities: [80] Express Root Port (Slot+), MSI 00
        Capabilities: [100] Virtual Channel
        Kernel driver in use: pcieport
        Kernel modules: shpchp
and from the same command on the DL380 G7:

Code:
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
        I/O behind bridge: 00004000-00004fff
        Memory behind bridge: fbb00000-fbdfffff
        Capabilities: [40] Subsystem: Hewlett-Packard Company ProLiant G6 series
        Capabilities: [60] MSI: Enable- Count=1/2 Maskable+ 64bit-
        Capabilities: [90] Express Root Port (Slot-), MSI 00
        Capabilities: [e0] Power Management version 3
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Access Control Services
        Capabilities: [160] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?>
        Kernel driver in use: pcieport
        Kernel modules: shpchp

00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
        Capabilities: [40] Subsystem: Hewlett-Packard Company 5520/5500/X58 I/O Hub PCI Express Root Port 2
        Capabilities: [60] MSI: Enable- Count=1/2 Maskable+ 64bit-
        Capabilities: [90] Express Root Port (Slot-), MSI 00
        Capabilities: [e0] Power Management version 3
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Access Control Services
        Kernel driver in use: pcieport
        Kernel modules: shpchp

There are more PCI bridges listed, but I assume they will all be very similar if not identical; this is a twin-CPU node.
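
If it helps, rather than pasting every bridge I can narrow things down to just the display and audio devices with something like this (the grep terms are only a guess at what these boxes report):
Code:
# show only display and audio devices, with numeric IDs and the drivers bound to them
lspci -nnk | grep -i -A3 -e vga -e audio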
 
*bump*
Finished fully reinstalling and updating all nodes, having converted them to assorted ZFS formats. So far it looks likely this should be a resolvable problem, and I will make sure to document the process and repost here as a tutorial if a solution can be found. If any of the devs could glance over the thread and give a few hints, I'd know where to go next. There is a metric ton of this older but capable hardware hitting the market right now, so this is likely to be a recurrent thread if not solved soon, IMO.
Thanks again to thorace for his contribution. Anyone else have any ideas?
 
Did you find a solution yet? I also have an HP ProLiant G6 server and haven't been able to get the Intel Virtualization Technology for Directed I/O message, despite having done everything needed to get it enabled.
 
