[SOLVED] Not able to see IOMMU in the Flags section when running "lspci -v" for the GPU; is the missing IOMMU entry an issue?

jjk.saji

Member
Jan 8, 2021
Dear All,
On my Supermicro server I have an NVIDIA GPU:
GPU-NVA10-NC NVIDIA A10 24GB GDDR6 PCIe 4.0
When I check "lspci -v" I get the following output, but it does not include any IOMMU group details:

root@pve-7:~# lspci -v | grep NVIDIA

Code:
ca:00.0 3D controller: NVIDIA Corporation GA102GL [A10] (rev a1)
    Subsystem: NVIDIA Corporation GA102GL [A10]
    Physical Slot: 1
    Flags: bus master, fast devsel, latency 0, IRQ 18, NUMA node 1
    Memory at ef000000 (32-bit, non-prefetchable) [size=16M]
    Memory at 222000000000 (64-bit, prefetchable) [size=32G]
    Memory at 223840000000 (64-bit, prefetchable) [size=32M]
    Capabilities: [60] Power Management version 3
    Capabilities: [68] Null
    Capabilities: [78] Express Endpoint, MSI 00
    Capabilities: [b4] Vendor Specific Information: Len=14 <?>
    Capabilities: [c8] MSI-X: Enable- Count=6 Masked-
    Capabilities: [100] Virtual Channel
    Capabilities: [250] Latency Tolerance Reporting
    Capabilities: [258] L1 PM Substates
    Capabilities: [128] Power Budgeting <?>
    Capabilities: [420] Advanced Error Reporting
    Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
    Capabilities: [900] Secondary PCI Express
    Capabilities: [bb0] Physical Resizable BAR
    Capabilities: [bcc] Single Root I/O Virtualization (SR-IOV)
    Capabilities: [c14] Alternative Routing-ID Interpretation (ARI)
    Capabilities: [c1c] Physical Layer 16.0 GT/s <?>
    Capabilities: [d00] Lane Margining at the Receiver <?>
    Capabilities: [e00] Data Link Feature <?>
    Kernel driver in use: nouveau
    Kernel modules: nvidiafb, nouveau


Is it mandatory that the IOMMU group details appear in the output of "lspci -v"?
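For reference, a minimal way to check from the host whether the IOMMU is enabled at all (independent of lspci), assuming an Intel platform and a standard Proxmox VE kernel:

Code:
# kernel messages about DMAR / IOMMU initialisation
dmesg | grep -e DMAR -e IOMMU

# list the IOMMU groups the kernel has created (empty if the IOMMU is off)
find /sys/kernel/iommu_groups/ -mindepth 1 -maxdepth 1 -type d

If dmesg shows no DMAR/IOMMU lines and /sys/kernel/iommu_groups/ is empty, the IOMMU is not enabled, which would also explain why "lspci -v" prints no IOMMU group for the device.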

I am doing groundwork for GPU passthrough; this check is part of that effort.

Advice and support requested

Thanks
Joseph John
 
Thanks,
Shall I modify the line to the following,
initrd=\EFI\proxmox\6.2.16-3-pve\initrd.img-6.2.16-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

that is, with intel_iommu=on appended at the end of the line? Shall I make that change and restart?
Thanks
Joseph John
 
I realized I cannot edit the /proc/cmdline file directly. How can I change the value? I am trying to figure out how to change it indirectly, without editing /proc/cmdline.
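For what it's worth, a minimal sketch of the usual procedure on a Proxmox VE host that boots with systemd-boot and ZFS on root (which the initrd=\EFI\proxmox\... entry in /proc/cmdline suggests); the exact steps are in the admin guide linked further down:

Code:
# append intel_iommu=on to the single line in /etc/kernel/cmdline, so it reads e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

# copy the new command line to the boot partition(s), then reboot
proxmox-boot-tool refresh
reboot

# after the reboot, /proc/cmdline should contain intel_iommu=on
cat /proc/cmdline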
Thanks
Joseph John
 
Dear All,
Good afternoon
We would like to describe the steps we have taken to set up NVIDIA GPU passthrough.
We went through the wiki, the forum, and the docs without success. Below are the steps we carried out on our two servers.
In /etc/default/grub we made the following entry:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="quiet intel_iommu=on"
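As a side note, /etc/default/grub only takes effect on hosts that boot via GRUB, and only after the GRUB configuration is regenerated; a minimal sketch, assuming a GRUB-booted system:

Code:
# regenerate /boot/grub/grub.cfg from /etc/default/grub, then reboot
update-grub
reboot

On a host that boots via systemd-boot (as the initrd=\EFI\proxmox\... line in /proc/cmdline below suggests), this file is not consulted at all and /etc/kernel/cmdline is the relevant file instead.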

On the first machine:
root@pve-7:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs GRUB_CMDLINE_LINUX_DEFAULT=“quiet intel_iommu=on” iommu=pt
root@pve-7:~#

On the second machine, /etc/kernel/cmdline contained:
root@pve-8:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

We tried all the options specified in the URL (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_edit_kernel_cmdline).
On both servers we still get this output from
cat /proc/cmdline
initrd=\EFI\proxmox\6.2.16-8-pve\initrd.img-6.2.16-8-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs

I would also like to add that we have made the appropriate settings in the BIOS/CMOS setup of the server:
https://www.supermicro.com/support/faqs/faq.cfm?faq=14053

Not sure whether this is a bug or something we have missed.
Any advice on this topic would be a great help.
Thanks
Joseph John
 
On the first machine:
root@pve-7:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs GRUB_CMDLINE_LINUX_DEFAULT=“quiet intel_iommu=on” iommu=pt
This is completely wrong. I think you want root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on on the first line. Make sure to put no other lines in the file.
On the second machine, /etc/kernel/cmdline contained:
root@pve-8:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
It is not clear whether you put all of this on the first line; otherwise it will not work.
The manual that you refer to states that it "needs to be placed as one line", which I hope is clear now.
On both servers we still get this output from
cat /proc/cmdline
initrd=\EFI\proxmox\6.2.16-8-pve\initrd.img-6.2.16-8-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs

Not sure whether this is a bug or something we have missed.
Any advice on this topic would be a great help.
Did you read the PCI(e) passthrough section of the manual, which I sent you earlier? It explains that you also need to run update-initramfs -u and reboot. There are also additional steps needed to set up PCI(e) passthrough.
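For context, a minimal sketch of the typical remaining host-side steps from the manual's PCI(e) passthrough section (a summary under those assumptions, not a substitute for the manual):

Code:
# load the VFIO modules at boot (vfio_virqfd is only needed on kernels older than 6.2)
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# rebuild the initramfs for all installed kernels, then reboot
update-initramfs -u -k all
reboot

# after the reboot, verify that the IOMMU and interrupt remapping are active
dmesg | grep -e DMAR -e IOMMU -e remapping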
 
Thanks, Leesteken.
We have run update-initramfs -u -k all multiple times over the last few days.
Attaching the bash history:

update-initrams.png


I would like to add that I have two of these machines, and we carried out the same steps multiple times on each one.
On the first machine we modified /etc/kernel/cmdline and kept it as one line:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

The second machine already had a single line in /etc/kernel/cmdline:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on


Your guidance is much appreciated.
Thanks
Joseph John
 
Thanks a lot, Leesteken.
I did a fresh installation on the same Supermicro hardware, followed your advice, and I am now able to see the IOMMU groups.
Thanks, highly appreciated.
Screenshot attached:
IOMMU-Working.png
 
