Persistent Nouveau/Nvidiafb Driver Loading Preventing GTX 1080 Ti Passthrough (PVE 8.x)

Finnedsgang

New Member
Jun 19, 2025
Hello Proxmox Community,
I'm encountering a very stubborn issue trying to achieve PCI passthrough for my NVIDIA GTX 1080 Ti to a VM, and I'm seeking your expertise. Despite following all standard guides and troubleshooting steps, the nouveau and nvidiafb kernel modules keep loading for the GPU on the Proxmox host.


TL;DR:

GTX 1080 Ti passthrough to VM is failing on Proxmox VE (kernel 6.8.12-9-pve). Even after extensive blacklisting (GRUB, modprobe, initramfs) and correct BIOS settings (iGPU primary, VT-d, Above 4G Decoding), lspci -nnk still shows Kernel modules: nvidiafb, nouveau loaded for the 1080 Ti. Why is nouveau so persistent? Seeking solutions for this issue, and/or alternative ways to "share" the GPU between gaming (Batocera) and transcoding (Plex) VMs (understanding direct passthrough is exclusive).


My Goal:
Ultimately, I'd like to use the GTX 1080 Ti for a retro gaming VM (Batocera) to achieve good performance. I also have a Plex VM (VM 103) that could benefit from hardware transcoding. I understand that standard PCI passthrough assigns a device exclusively to one VM at a time, making it impossible to share a single dGPU between two VMs simultaneously. My question to the community is whether there are alternative methods (e.g., vGPU, SR-IOV, or specific software configurations) that might enable some form of GPU sharing for different workloads, or if my best bet is to dedicate it to one VM and use software transcoding/iGPU for the other.

My System Specifications:
* Proxmox VE Version: PVE 8.x (kernel 6.8.12-9-pve)
* CPU: Intel Core i3-6100 (with Intel HD Graphics 530 iGPU)
* RAM: 16GB
* GPU for Passthrough: NVIDIA GeForce GTX 1080 Ti (01:00.0 - 10de:1b06) and its HDMI Audio Controller (01:00.1 - 10de:10ef)
* Storage: 512GB NVMe (Proxmox OS), 3TB HDD, 14TB HDD
* Existing VMs/CTs: VM 101 (qBittorrent), VM 102 (OMV), VM 103 (Plex). Plex and qBittorrent share a bind mount /mnt/media on the 14TB drive.


Problem Description:
No matter what I try, the nouveau and nvidiafb kernel modules consistently load for the GTX 1080 Ti on the Proxmox host, preventing it from being bound by vfio-pci and thus making passthrough impossible. lspci -nnk always shows Kernel modules: nvidiafb, nouveau for the GPU.


Steps I have taken so far (with reboots after each major change; a consolidated verification sketch follows this list):
* PCI ID Identification: Confirmed GPU (01:00.0 / 10de:1b06) and HDMI Audio (01:00.1 / 10de:10ef).
* IOMMU Grouping: Confirmed both GPU and its HDMI Audio are in the same IOMMU group 1, which is ideal.
* Output from find /sys/kernel/iommu_groups/:
IOMMU group 1
00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)

* GRUB Parameters (/etc/default/grub):
* GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt rd.driver.blacklist=nouveau,nvidiafb nomodeset nouveau.modeset=0"
* update-grub executed after each modification.
* Confirmed parameters are in /boot/grub/grub.cfg.
* Modprobe Blacklisting (/etc/modprobe.d/):
* Created /etc/modprobe.d/vfio-pci.conf with:
options vfio-pci ids=10de:1b06,10de:10ef
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci

* Modified /etc/modprobe.d/blacklist.conf to include:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb

* VFIO Modules (/etc/modules):
* Ensured these lines are present:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

* update-initramfs:
* update-initramfs -u -k all executed multiple times after each configuration change.
* Direct initramfs Modification:
* Extracted initrd.img-$(uname -r) to a temporary directory.
* Manually edited etc/modprobe.d/blacklist.conf inside the extracted initramfs to include blacklist nouveau and blacklist nvidiafb.
* Rebuilt the initramfs image using mkinitramfs.
* BIOS/UEFI Settings (ASUS UEFI BIOS Utility - Advanced Mode):
* Primary Display: Set to CPU Graphics.
* IGPU Multi-Monitor: Enabled.
* VT-d: Enabled.
* Above 4G Decoding: Enabled.
* ASPM and other PCI Express Native Power Management options: Disabled.
* Memory Remap: Enabled.
* CFG Lock: Disabled.
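For reference, here is the consolidated verification sketch mentioned above (a minimal sketch; device paths are from my system, adjust as needed):

Code:
    # list every IOMMU group with its devices
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done

    # confirm the kernel parameters actually took effect after a reboot
    cat /proc/cmdline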


Current lspci -nnk Output (after all steps and reboot):
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06)
00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
Subsystem: ASUSTeK Computer Inc. 200 Series PCH HD Audio [1043:8723]
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
Subsystem: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:120f]
Kernel modules: nvidiafb, nouveau <-- STILL PRESENT!
01:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
Subsystem: NVIDIA Corporation GP102 HDMI Audio Controller [10de:120f]

My Questions to the Community:
* Given all the extensive blacklisting efforts, why is nouveau (and nvidiafb) still loading for the GTX 1080 Ti? What could be causing such persistent driver loading despite GRUB parameters, modprobe rules, and initramfs modifications?
* Are there any other BIOS/UEFI settings on ASUS motherboards that are known to interfere with NVIDIA passthrough in such a stubborn way?
* If standard passthrough remains impossible for the 1080 Ti on this system, what are the best strategies to utilize the GPU for gaming (Batocera) and transcoding (Plex) without dedicated passthrough? Are there any viable software solutions or specific Proxmox features (other than exclusive passthrough) that allow for some form of GPU sharing or efficient use across multiple VMs/services?
Any insights or suggestions would be greatly appreciated! Thank you for your time and help.
 
lspci -nnk always shows Kernel modules: nvidiafb, nouveau for the GPU.
This is normal. As long as there is no Kernel driver in use: line, they are not loaded.
* GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt rd.driver.blacklist=nouveau,nvidiafb nomodeset nouveau.modeset=0"
intel_iommu=on is no longer needed. Blacklisting is also not needed when using early binding to vfio-pci. Make sure you edited the right kernel command line, as Proxmox can have different bootloaders. nouveau.modeset=0 makes no sense when you are blacklisting or early binding. Check whether your parameters are active with cat /proc/cmdline.
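A sketch of what that looks like in practice (proxmox-boot-tool ships with PVE; the /etc/kernel/cmdline path applies to systemd-boot installs such as ZFS-on-root):

Code:
    # which bootloader is actually in use?
    proxmox-boot-tool status

    # GRUB installs: edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub

    # systemd-boot installs: edit the single line in /etc/kernel/cmdline, then:
    proxmox-boot-tool refresh

    # after a reboot, confirm the active parameters:
    cat /proc/cmdline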
* Created /etc/modprobe.d/vfio-pci.conf with:
options vfio-pci ids=10de:1b06,10de:10ef
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci
This looks good and should early bind the device functions to vfio-pci. Check whether it's active with lspci -nnks 01:00 after a Proxmox reboot (before starting the VM); it should show Kernel driver in use: vfio-pci.
* Modified /etc/modprobe.d/blacklist.conf to include:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
Blacklisting is not necessary when using early binding to vfio-pci.
* VFIO Modules (/etc/modules):
* Ensured these lines are present:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
You must have been using an old guide; vfio_virqfd no longer exists: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
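For newer kernels, the /etc/modules list from that guide reduces to (if I read the docs right):

Code:
    vfio
    vfio_iommu_type1
    vfio_pci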
 
Just wanted to post an update and a SOLUTION to my persistent GPU passthrough issue, and a massive thank you to @leesteken for their absolutely crucial insights!

My core problem was a misunderstanding: I was debugging the presence of Kernel modules: nvidiafb, nouveau in lspci -nnk, thinking it meant the drivers were in use by the host. As @leesteken correctly pointed out, Kernel modules: only indicates what's available. The key is to see Kernel driver in use: vfio-pci (or no Kernel driver in use: line at all for that device).

The Solution: The issue was resolved by cleaning up my passthrough configuration and focusing on early binding the GPU devices to vfio-pci.

Here's what worked:

  1. Cleaned /etc/default/grub: Removed rd.driver.blacklist, nomodeset, nouveau.modeset=0 from GRUB_CMDLINE_LINUX_DEFAULT. It now only contains quiet intel_iommu=on iommu=pt. Then update-grub.
  2. Cleaned /etc/modprobe.d/blacklist.conf: Removed all nouveau/nvidia blacklist lines as they were redundant.
  3. Cleaned /etc/modules: Removed the deprecated vfio_virqfd module.
  4. Crucially, confirmed /etc/modprobe.d/vfio-pci.conf was set for early binding, then ran update-initramfs -u -k all and rebooted:

    options vfio-pci ids=10de:1b06,10de:10ef disable_vga=1
    softdep nouveau pre: vfio-pci
    softdep nvidiafb pre: vfio-pci
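To double-check that the rebuilt initramfs actually picked up the file (lsinitramfs ships with Debian's initramfs-tools):

Code:
    # the vfio-pci options file should appear inside the image
    lsinitramfs /boot/initrd.img-$(uname -r) | grep modprobe.d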
Verification (Success!): After these changes and a reboot, lspci -nnks 01:00 now correctly shows:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
    Subsystem: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:120f]
    Kernel driver in use: vfio-pci
    Kernel modules: nvidiafb, nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
    Subsystem: NVIDIA Corporation GP102 HDMI Audio Controller [10de:120f]
    Kernel driver in use: vfio-pci
    Kernel modules: snd_hda_intel
The GTX 1080 Ti is now successfully bound to vfio-pci and ready for passthrough!
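For anyone following along later, the next step is attaching the bound device to the VM. A minimal sketch (my VM ID; passing 0000:01:00 without a function number maps both .0 and .1, and pcie=1 requires a q35 machine type; exact options vary per setup):

Code:
    # attach the GPU (all functions) as the guest's primary GPU
    qm set 106 -hostpci0 0000:01:00,pcie=1,x-vga=1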

Massive thanks again to @leesteken for clearing up my misconception and pointing me in the right direction. This community is amazing!
 
Hello again,

Quick update on my previous GPU passthrough issue. Thanks to @leesteken's invaluable help, the initial problem of persistent nouveau loading for my GTX 1080 Ti is SOLVED! The card is now correctly bound to vfio-pci on the host, as verified by lspci -nnks 01:00 showing Kernel driver in use: vfio-pci. Thank you so much!

I have successfully reinstalled Proxmox VE 8.x with ZFS-on-root (I had an issue with the available space), restored all my LXCs (n8n, qBittorrent, Plex, smbshare) and my OMV VM. All data disks (3TB and 14TB) are correctly mounted and accessible by their respective containers/VMs, and iGPU passthrough for Plex (LXC) is working perfectly.
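For reference, iGPU access for an LXC boils down to a couple of lines in the container config; a minimal sketch of the classic cgroup approach (DRI major number 226 is from my host; extra gid mapping may be needed depending on the container):

Code:
    # /etc/pve/lxc/103.conf -- allow and bind-mount the DRI devices
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir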


Now, I'm facing a new challenge with my dedicated gaming VM and the GTX 1080 Ti passthrough.

VM Details:


  • VM ID: 106
  • Name: Gaming Station (intended for Batocera)
  • Guest OS: Batocera x86_64 (imported as a .qcow2 disk image from /var/tmp/ to local-zfs)
  • VM Configuration (qm config 106):

    Code:
    balloon: 0
    bios: ovmf
    boot: order=scsi0;ide2;net0
    cores: 4
    cpu: host
    efidisk0: local-zfs:vm-106-disk-0,efitype=4m,size=1M
    hostpci0: 0000:01:00.0;01:00.1,pcie=1,rombar=0,x-vga=on,romfile=/usr/share/kvm/NVIDIA.GTX1080Ti.11264.170118.rom
    ide2: none,media=cdrom
    machine: q35
    memory: 8192
    meta: creation-qemu=9.2.0,ctime=1751389568
    name: GamingStation
    net0: virtio=BC:24:11:0F:E2:FC,bridge=vmbr0,firewall=1
    numa: 0
    ostype: l26
    scsi0: local-zfs:vm-106-disk-1,iothread=1,size=32G
    scsihw: virtio-scsi-single
    smbios1: uuid=b47e1f31-09bc-4bde-9254-95a2ccfdc31f
    sockets: 1
    vga: none
    vmgenid: cfd3d19f-4af8-454d-bd8e-193e1f855bd9

  • Host GPU Status (lspci -nnks 01:00):

    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
    Subsystem: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:120f]
    Kernel driver in use: vfio-pci
    Kernel modules: nvidiafb, nouveau
    01:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
    Subsystem: NVIDIA Corporation GP102 HDMI Audio Controller [10de:120f]
    Kernel driver in use: vfio-pci
    Kernel modules: snd_hda_intel
    (This confirms the GPU is correctly bound to vfio-pci on the host, right?)
The New Problem: When I start VM 106 ("Gaming Station"):

  1. The Proxmox GUI shows its status as "running".
  2. However, the physical monitor connected directly to the GTX 1080 Ti shows absolutely nothing (black screen).
  3. The VM's summary in Proxmox GUI does not display an IP address.
  4. The VNC console for the VM shows "cannot connect to server, NO VNC image" (which is expected, as vga: none).
  5. Crucially, journalctl -u qemu-server@106.service | tail -n 100 shows -- No entries -- or no relevant output, indicating that the QEMU process for the VM might not be successfully starting, or it's crashing immediately without logging.
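Side note on point 5: as far as I understand, Proxmox runs each VM's QEMU process inside a systemd scope (e.g. 106.scope) rather than a qemu-server@ unit, so that journalctl command will always show no entries; it doesn't necessarily mean the VM crashed. A minimal sketch of checks that surface startup errors directly:

Code:
    # start the VM from a shell so any QEMU error prints to the terminal
    qm start 106

    # show the exact QEMU command line Proxmox generates for this VM
    qm showcmd 106 --pretty

    # confirm whether the QEMU process is actually alive
    qm status 106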
Diagnostic Logs from Proxmox Host (while VM 106 is running):

  • dmesg | grep -iE 'vfio|iommu|error|gpu|nvidia|vga|01:00' output:
    Code:
    [   0.078205] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
    [   0.242775] pci 0000:01:00.0: [10de:1b06] type 00 class 0x030000 PCIe Legacy Endpoint
    [   0.242797] pci 0000:01:00.0: BAR 0 [mem 0xf6000000-0xf6ffffff]
    [   0.242801] pci 0000:01:00.0: BAR 1 [mem 0xe0000000-0xefffffff 64bit pref]
    [   0.242806] pci 0000:01:00.0: BAR 3 [mem 0xf0000000-0xf1ffffff 64bit pref]
    [   0.242809] pci 0000:01:00.0: BAR 5 [io  0xe000-0xe07f]
    [   0.242813] pci 0000:01:00.0: ROM [mem 0xf7000000-0xf707ffff pref]
    [   0.242945] pci 0000:01:00.1: [10de:10ef] type 00 class 0x040300 PCIe Endpoint
    [   0.242968] pci 0000:01:00.1: BAR 0 [mem 0xf7080000-0xf7083fff]
    [   0.253315] iommu: Default domain type: Translated
    [   0.253315] iommu: DMA domain TLB invalidation policy: lazy mode
    [   0.260148] pci 0000:00:02.0: vgaarb: setting as boot VGA device
    [   0.260153] pci 0000:00:02.0: vgaarb: bridge control possible
    [   0.260155] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
    [   0.260163] pci 0000:01:00.0: vgaarb: bridge control possible
    [   0.260166] pci 0000:01:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
    [   0.260170] vgaarb: loaded
    [   0.288084] pci 0000:01:00.1: extending delay after power-on from D3hot to 20 msec
    [   0.288121] pci 0000:01:00.1: D0 power state depends on 0000:01:00.0
    [   0.288630] DMAR: IOMMU feature fl1gp_support inconsistent
    [   0.288631] DMAR: IOMMU feature pgsel_inv inconsistent
    [   0.288634] DMAR: IOMMU feature nwfs inconsistent
    [   0.288636] DMAR: IOMMU feature eafs inconsistent
    [   0.288639] DMAR: IOMMU feature prs inconsistent
    [   0.288641] DMAR: IOMMU feature nest inconsistent
    [   0.288643] DMAR: IOMMU feature mts inconsistent
    [   0.288645] DMAR: IOMMU feature sc_support inconsistent
    [   0.288647] DMAR: IOMMU feature dev_iotlb_support inconsistent
    [   0.288918] pci 0000:00:02.0: Adding to iommu group 0
    [   0.289456] pci 0000:00:00.0: Adding to iommu group 1
    [   0.289470] pci 0000:00:01.0: Adding to iommu group 2
    [   0.289482] pci 0000:00:14.0: Adding to iommu group 3
    [   0.289494] pci 0000:00:16.0: Adding to iommu group 4
    [   0.289503] pci 0000:00:17.0: Adding to iommu group 5
    [   0.289517] pci 0000:00:1b.0: Adding to iommu group 6
    [   0.289532] pci 0000:00:1c.0: Adding to iommu group 7
    [   0.289542] pci 0000:00:1d.0: Adding to iommu group 8
    [   0.289552] pci 0000:00:1d.2: Adding to iommu group 9
    [   0.289572] pci 0000:00:1f.0: Adding to iommu group 10
    [   0.289582] pci 0000:00:1f.2: Adding to iommu group 10
    [   0.289592] pci 0000:00:1f.3: Adding to iommu group 10
    [   0.289601] pci 0000:00:1f.4: Adding to iommu group 10
    [   0.289610] pci 0000:00:1f.6: Adding to iommu group 11
    [   0.289616] pci 0000:01:00.0: Adding to iommu group 2
    [   0.289622] pci 0000:01:00.1: Adding to iommu group 2
    [   0.289633] pci 0000:04:00.0: Adding to iommu group 12
    [   0.289643] pci 0000:05:00.0: Adding to iommu group 13
    [   0.674922] RAS: Correctable Errors collector initialized.
    [   4.289265] VFIO - User Level meta-driver version: 0.3
    [   4.305844] vfio-pci 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
    [   4.307383] vfio_pci: add [10de:1b06[ffffffff:ffffffff]] class 0x000000/00000000
    [   4.701194] RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
    [   5.480171] i915 0000:00:02.0: vgaarb: deactivate vga console
    [   5.481280] i915 0000:00:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=io+mem
    [   5.481296] vfio-pci 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=none,decodes=none:owns=none
    [   5.501881] vfio_pci: add [10de:10ef[ffffffff:ffffffff]] class 0x000000/00000000
    [   6.343360] audit: type=1400 audit(1751366411.501:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=731 comm="apparmor_parser"
    [   6.343416] audit: type=1400 audit(1751366411.501:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=731 comm="apparmor_parser"
    [  15.602304] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
    [  21.433538] audit: type=1400 audit(1751366426.237:32): apparmor="STATUS" operation="profile_load" label="lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:unconfined" name="nvidia_modprobe" pid=2067 comm="apparmor_parser"
    [  21.433585] audit: type=1400 audit(1751366426.237:33): apparmor="STATUS" operation="profile_load" label="lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:unconfined" name="nvidia_modprobe//kmod" pid=2067 comm="apparmor_parser"
    [ 1854.775217] audit: type=1400 audit(1751368259.533:63): apparmor="STATUS" operation="profile_load" label="lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:unconfined" name="nvidia_modprobe" pid=16765 comm="apparmor_parser"
    [ 1854.777533] audit: type=1400 audit(1751368259.533:64): apparmor="STATUS" operation="profile_load" label="lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:unconfined" name="nvidia_modprobe" pid=16765 comm="apparmor_parser"
    [23665.538807] GPT: Use GNU Parted to correct GPT errors.
    [25679.568201] vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
The dmesg output looks exactly like the one provided previously. There are no new errors indicating a change in behavior for the GPU passthrough. journalctl also showed -- No entries -- again.

This still points to the NVIDIA Reset Bug as the most likely culprit.

I don't know how to proceed now :( ... sorry to bother you all, it's my first experience with Proxmox and homelab in general
 
Hi Finned,
I'm having the same issue, trying to utilize an old machine + a GTX 650 Ti.

lspci -nnk
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106 [GeForce GTX 650 Ti] [10de:11c6] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK106 [GeForce GTX 650 Ti] [1043:844d]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
03:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK106 HDMI Audio Controller [1043:844d]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel


dmesg | grep -iE 'vfio|iommu|error|gpu|nvidia|vga|03:00'
[ 0.213337] DMAR-IR: IOAPIC id 0 under DRHD base 0xfbffc000 IOMMU 0
[ 0.213338] DMAR-IR: IOAPIC id 2 under DRHD base 0xfbffc000 IOMMU 0
[ 0.443765] pci 0000:03:00.0: [10de:11c6] type 00 class 0x030000 PCIe Endpoint
[ 0.443793] pci 0000:03:00.0: BAR 0 [mem 0xfa000000-0xfaffffff]
[ 0.443796] pci 0000:03:00.0: BAR 1 [mem 0xe0000000-0xefffffff 64bit pref]
[ 0.443799] pci 0000:03:00.0: BAR 3 [mem 0xf0000000-0xf1ffffff 64bit pref]
[ 0.443801] pci 0000:03:00.0: BAR 5 [io 0xe000-0xe07f]
[ 0.443804] pci 0000:03:00.0: ROM [mem 0xfb000000-0xfb07ffff pref]
[ 0.443810] pci 0000:03:00.0: enabling Extended Tags
[ 0.443828] pci 0000:03:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.443892] pci 0000:03:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at 0000:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[ 0.443947] pci 0000:03:00.1: [10de:0e0b] type 00 class 0x040300 PCIe Endpoint
[ 0.443974] pci 0000:03:00.1: BAR 0 [mem 0xfb080000-0xfb083fff]
[ 0.443984] pci 0000:03:00.1: enabling Extended Tags
[ 0.447781] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.447799] acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC]
[ 0.452313] iommu: Default domain type: Translated
[ 0.452313] iommu: DMA domain TLB invalidation policy: lazy mode
[ 0.459032] pci 0000:03:00.0: vgaarb: setting as boot VGA device
[ 0.459032] pci 0000:03:00.0: vgaarb: bridge control possible
[ 0.459032] pci 0000:03:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[ 0.459032] vgaarb: loaded
[ 0.489179] pci 0000:03:00.1: extending delay after power-on from D3hot to 20 msec
[ 0.489213] pci 0000:03:00.1: D0 power state depends on 0000:03:00.0
[ 0.489409] pci 0000:00:00.0: Adding to iommu group 0
[ 0.489430] pci 0000:00:01.0: Adding to iommu group 1
[ 0.489449] pci 0000:00:01.1: Adding to iommu group 2
[ 0.489472] pci 0000:00:02.0: Adding to iommu group 3
[ 0.489491] pci 0000:00:03.0: Adding to iommu group 4
[ 0.489507] pci 0000:00:05.0: Adding to iommu group 5
[ 0.489522] pci 0000:00:05.2: Adding to iommu group 6
[ 0.489539] pci 0000:00:05.4: Adding to iommu group 7
[ 0.489555] pci 0000:00:1a.0: Adding to iommu group 8
[ 0.489570] pci 0000:00:1b.0: Adding to iommu group 9
[ 0.489587] pci 0000:00:1c.0: Adding to iommu group 10
[ 0.489604] pci 0000:00:1c.2: Adding to iommu group 11
[ 0.489620] pci 0000:00:1c.4: Adding to iommu group 12
[ 0.489635] pci 0000:00:1d.0: Adding to iommu group 13
[ 0.489691] pci 0000:00:1f.0: Adding to iommu group 14
[ 0.489708] pci 0000:00:1f.2: Adding to iommu group 14
[ 0.489725] pci 0000:00:1f.3: Adding to iommu group 14
[ 0.489764] pci 0000:03:00.0: Adding to iommu group 15
[ 0.489786] pci 0000:03:00.1: Adding to iommu group 15
[ 0.489803] pci 0000:06:00.0: Adding to iommu group 16
[ 0.489819] pci 0000:07:00.0: Adding to iommu group 17
[ 0.489845] pci 0000:ff:08.0: Adding to iommu group 18
[ 0.489871] pci 0000:ff:09.0: Adding to iommu group 19
[ 0.489924] pci 0000:ff:0a.0: Adding to iommu group 20
[ 0.489943] pci 0000:ff:0a.1: Adding to iommu group 20
[ 0.489960] pci 0000:ff:0a.2: Adding to iommu group 20
[ 0.489978] pci 0000:ff:0a.3: Adding to iommu group 20
[ 0.490014] pci 0000:ff:0b.0: Adding to iommu group 21
[ 0.490032] pci 0000:ff:0b.3: Adding to iommu group 21
[ 0.490095] pci 0000:ff:0c.0: Adding to iommu group 22
[ 0.490114] pci 0000:ff:0c.1: Adding to iommu group 22
[ 0.490132] pci 0000:ff:0c.2: Adding to iommu group 22
[ 0.490153] pci 0000:ff:0c.3: Adding to iommu group 22
[ 0.490173] pci 0000:ff:0c.4: Adding to iommu group 22
[ 0.490237] pci 0000:ff:0d.0: Adding to iommu group 23
[ 0.490257] pci 0000:ff:0d.1: Adding to iommu group 23
[ 0.490276] pci 0000:ff:0d.2: Adding to iommu group 23
[ 0.490296] pci 0000:ff:0d.3: Adding to iommu group 23
[ 0.490315] pci 0000:ff:0d.4: Adding to iommu group 23
[ 0.490351] pci 0000:ff:0e.0: Adding to iommu group 24
[ 0.490371] pci 0000:ff:0e.1: Adding to iommu group 24
[ 0.490388] pci 0000:ff:0f.0: Adding to iommu group 25
[ 0.490404] pci 0000:ff:0f.1: Adding to iommu group 26
[ 0.490420] pci 0000:ff:0f.2: Adding to iommu group 27
[ 0.490436] pci 0000:ff:0f.3: Adding to iommu group 28
[ 0.490452] pci 0000:ff:0f.4: Adding to iommu group 29
[ 0.490470] pci 0000:ff:0f.5: Adding to iommu group 30
[ 0.490487] pci 0000:ff:10.0: Adding to iommu group 31
[ 0.490504] pci 0000:ff:10.1: Adding to iommu group 32
[ 0.490519] pci 0000:ff:10.2: Adding to iommu group 33
[ 0.490536] pci 0000:ff:10.3: Adding to iommu group 34
[ 0.490552] pci 0000:ff:10.4: Adding to iommu group 35
[ 0.490567] pci 0000:ff:10.5: Adding to iommu group 36
[ 0.490584] pci 0000:ff:10.6: Adding to iommu group 37
[ 0.490600] pci 0000:ff:10.7: Adding to iommu group 38
[ 0.490654] pci 0000:ff:13.0: Adding to iommu group 39
[ 0.490684] pci 0000:ff:13.1: Adding to iommu group 39
[ 0.490707] pci 0000:ff:13.4: Adding to iommu group 39
[ 0.490731] pci 0000:ff:13.5: Adding to iommu group 39
[ 0.490775] pci 0000:ff:16.0: Adding to iommu group 40
[ 0.490799] pci 0000:ff:16.1: Adding to iommu group 40
[ 0.490822] pci 0000:ff:16.2: Adding to iommu group 40
[ 0.536389] ERST: Error Record Serialization Table (ERST) support is initialized.
[ 0.994406] RAS: Correctable Errors collector initialized.
[ 4.699079] nvidiafb: Device ID: 10de11c6
[ 4.699085] nvidiafb: unknown NV_ARCH
[ 4.723089] snd_hda_intel 0000:03:00.1: Disabling MSI
[ 4.723097] snd_hda_intel 0000:03:00.1: Handle vga_switcheroo audio client
[ 4.760479] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.0/0000:03:00.1/sound/card1/input6
[ 4.760605] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.0/0000:03:00.1/sound/card1/input7
[ 4.761166] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.0/0000:03:00.1/sound/card1/input8
[ 4.761282] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.0/0000:03:00.1/sound/card1/input9
[ 6.099045] audit: type=1400 audit(1752250574.029:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=1261 comm="apparmor_parser"
[ 6.099051] audit: type=1400 audit(1752250574.029:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=1261 comm="apparmor_parser"
[ 28.479306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 198.025737] VFIO - User Level meta-driver version: 0.3
[ 198.049024] vfio-pci 0000:03:00.0: vgaarb: deactivate vga console
[ 198.049029] vfio-pci 0000:03:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 198.049734] vfio-pci 0000:03:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 198.050050] vfio-pci 0000:03:00.0: vgaarb: deactivate vga console
[ 198.050053] vfio-pci 0000:03:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 203.588933] vfio-pci 0000:03:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem

I'm no expert, just starting to fall in love with Proxmox :D

The Win11 VM boots and the NVIDIA driver installs normally, but TechPowerUp GPU-Z shows the GPU Memory as 0, and the driver cannot start (Error 43).
Do I need to install something from the VirtIO CD?
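A commonly suggested workaround for NVIDIA Error 43 in Windows guests (see the Proxmox PCI passthrough wiki) is hiding the hypervisor from the driver; a minimal sketch, assuming your VM ID is 100:

Code:
    # hide the KVM hypervisor bit so the NVIDIA driver doesn't bail out with Code 43
    qm set 100 -cpu host,hidden=1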
 