Need help with GPU passthrough in Proxmox: NVIDIA RTX 3060 not working properly

MadCowDzz

Apr 26, 2023
I've been trying for weeks to get GPU passthrough working in a Windows 10 guest VM in Proxmox. I'm having trouble troubleshooting, or even understanding what I'm doing, since I've been following multiple guides at once and applying any and all advice I find on forums.
A few weeks ago things worked for a couple of days, but whenever the VM restarted the GPU stopped working, and I had to restart the entire Proxmox host, which was less than ideal. Since then I've followed more guides and ultimately broke things again.

I'm now at a state where I can access the VM through the Proxmox console (which, according to the guides, is strange). RDP crashes when I don't have the dummy HDMI plug connected, yet with the dummy plug disconnected, Device Manager shows my GPU by name. With the dummy plug connected, Device Manager only shows Microsoft Basic/Remote Display Adapters.

My current hardware:
  • ASRock Rack X470D4U
  • AMD Ryzen 7 5800X w/ 64GB RAM
  • Gigabyte NVIDIA RTX 3060 12G
I'm running Proxmox 7.4-3

Unfortunately I don't know exactly what to provide, so here's a collection of things I've seen other posts include. Please help; I know this can work, but I'm stuck.

root@pve:~# qm config 103

Code:
    agent: 1
    balloon: 0
    bios: ovmf
    boot: order=ide0;ide2;net0;ide1
    cores: 4
    cpu: host,hidden=1
    efidisk0: local-lvm:vm-103-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
    hostpci0: 0000:2b:00,pcie=1,x-vga=1
    ide0: local-lvm:vm-103-disk-1,discard=on,size=80G,ssd=1
    ide2: local:iso/virtio-win-0.1.229.iso,media=cdrom,size=522284K
    machine: q35
    memory: 16384
    meta: creation-qemu=7.2.0,ctime=1682475863
    name: windows
    net0: e1000=22:3B:6C:EC:22:C2,bridge=vmbr0,firewall=1
    numa: 0
    ostype: win11
    scsihw: virtio-scsi-single
    smbios1: uuid=6e9608ff-a67c-4632-bb36-b5ea72c4a88f
    sockets: 1
    tpmstate0: local-lvm:vm-103-disk-2,size=4M,version=v2.0
    vga: virtio
    vmgenid: b91c1297-7235-4218-bee4-9a24c8deb565

root@pve:~# cat /proc/cmdline

Code:
    BOOT_IMAGE=/boot/vmlinuz-5.15.107-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt nomodeset textonly pci=realloc video=efifb:off video=simplefb:off

root@pve:~# lspci

Code:
    [..]
    2b:00.0 VGA compatible controller: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] (rev ff)
    2b:00.1 Audio device: NVIDIA Corporation GA106 High Definition Audio Controller (rev ff)
    [..]

root@pve:~# cat /etc/default/grub

Code:
    [..]
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt nomodeset textonly pci=realloc video=efifb:off video=simplefb:off"
    GRUB_CMDLINE_LINUX=""
    [..]

root@pve:~# cat /etc/modprobe.d/vfio.conf

Code:
    options vfio-pci ids=10de:2504,10de:228e disable_vga=1

root@pve:/etc/modprobe.d# cat /etc/modprobe.d/blacklist.conf

Code:
    blacklist nvidiafb
    blacklist nvidia
    blacklist radeon
    blacklist nouveau

root@pve:~# dmesg | grep -e IOMMU -e iommu -e vfio -e Intel -e bug -e Bug -e BAR

Code:
    [ 1207.714164] vfio-pci 0000:2b:00.0: BAR 1: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]

root@pve:~# find /sys/kernel/iommu_groups/ -type l | grep 2b

Code:
    /sys/kernel/iommu_groups/14/devices/0000:2b:00.1
    /sys/kernel/iommu_groups/14/devices/0000:2b:00.0
 
root@pve:~# cat /proc/cmdline

BOOT_IMAGE=/boot/vmlinuz-5.15.107-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt nomodeset textonly pci=realloc video=efifb:off video=simplefb:off
amd_iommu=on does nothing because it is on by default. iommu=pt also usually does nothing.
nomodeset textonly video=efifb:off video=simplefb:off don't work with recent Proxmox versions. If you pass through the boot (or only) GPU, you need this work-around, because it fixes this error (as shown in that thread):
[ 1207.714164] vfio-pci 0000:2b:00.0: BAR 1: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]
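For reference, the work-around most often cited on the forum for recent Proxmox kernels is replacing the old efifb/simplefb options with a blacklist of the simple-framebuffer initcall. This is an assumption about which fix the reply links to, since the linked thread isn't quoted here:

```shell
# Assumed fix (commonly cited for Proxmox 7.2+ kernels; verify against the linked thread):
# drop nomodeset/textonly/video=efifb:off/video=simplefb:off and add
# initcall_blacklist=sysfb_init instead.

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"
```

After editing, apply the change with `update-grub` and reboot, then confirm with `cat /proc/cmdline`.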
 
Thanks for this. I'm not seeing any errors in dmesg now and the GPU is showing up in Display Adapters; however, it has the dreaded Code 43. I'm also able to use the Proxmox console in addition to RDP (I'm under the impression that Proxmox console access is supposed to go away when passing through a GPU).
 
I'm also able to use the Proxmox console in addition to RDP (I'm under the impression that Proxmox console access is supposed to go away when passing through a GPU)
Set Display to None (vga: none) to remove the virtual display of the VM.
If you have a Proxmox host console while the VM is running with passthrough of the same RTX 3060, then something is very wrong.
Thanks for this, I'm not seeing any errors in dmesg now and the GPU is showing up in Display Adapters, however it has the dreaded Code 43.
Make sure to early bind the GPU to vfio-pci to make sure nothing touches it before starting the VM, otherwise it might not work well in the VM (but this is also what makes you lose the Proxmox host console and you won't see any Proxmox boot messages).
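A minimal early-binding setup looks like the following. This is a sketch: the device IDs are the RTX 3060 VGA and audio functions from this thread, and the softdep lines are an assumption about which host drivers might otherwise claim the card first:

```shell
# /etc/modprobe.d/vfio.conf — bind the GPU to vfio-pci before any graphics driver loads.
# IDs 10de:2504 (VGA) and 10de:228e (audio) are the RTX 3060 functions from this thread.
options vfio-pci ids=10de:2504,10de:228e disable_vga=1

# Make vfio-pci win the race against the usual GPU/audio drivers
# (assumed candidate drivers; adjust to what lspci -nnk lists as "Kernel modules").
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
```

After editing, rebuild the initramfs with `update-initramfs -u -k all` and reboot; `lspci -nnk` should then report `Kernel driver in use: vfio-pci` for both functions.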
 
Yes, I knew the display of the VM within Proxmox was very weird. I added vga: none to my VM's config and now I can no longer see the display in the Proxmox console.

I'm seeing this in dmesg, is it a good message, or a bad message?
[ 143.737028] vfio-pci 0000:2b:00.0: vfio_bar_restore: reset recovery - restoring BARs

Regarding the early binding, I already had the configuration as follows:

root@pve:/etc/modprobe.d# lspci -nn | grep -i nvidia

Code:
2b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] [10de:2504] (rev a1)
2b:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)

root@pve:/etc/modprobe.d# cat /etc/modprobe.d/vfio.conf

Code:
options vfio-pci ids=10de:2504,10de:228e disable_vga=1

I've confirmed it with:

root@pve:/etc/modprobe.d# lspci -nnk

Code:
[..]
2b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] [10de:2504] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd Device [1458:4074]
    Kernel driver in use: vfio-pci
    Kernel modules: nvidiafb, nouveau
2b:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd Device [1458:4074]
    Kernel driver in use: vfio-pci
    Kernel modules: snd_hda_intel
[..]

root@pve:~# dmesg | grep -e IOMMU -e iommu -e Intel -e bug -e Bug -e BAR

Code:
[    0.000000]   Intel GenuineIntel
[    0.354308] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[    0.360321] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.373941] pci 0000:2b:00.0: BAR 1: assigned to efifb
[    0.375947] iommu: Default domain type: Translated 
[    0.375947] iommu: DMA domain TLB invalidation policy: lazy mode 
[    0.395165] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.395186] pci 0000:00:01.0: Adding to iommu group 0
[    0.395194] pci 0000:00:01.3: Adding to iommu group 1
[    0.395200] pci 0000:00:02.0: Adding to iommu group 2
[    0.395208] pci 0000:00:03.0: Adding to iommu group 3
[    0.395215] pci 0000:00:03.1: Adding to iommu group 4
[    0.395220] pci 0000:00:04.0: Adding to iommu group 5
[    0.395226] pci 0000:00:05.0: Adding to iommu group 6
[    0.395234] pci 0000:00:07.0: Adding to iommu group 7
[    0.395241] pci 0000:00:07.1: Adding to iommu group 8
[    0.395249] pci 0000:00:08.0: Adding to iommu group 9
[    0.395256] pci 0000:00:08.1: Adding to iommu group 10
[    0.395263] pci 0000:00:14.0: Adding to iommu group 11
[    0.395267] pci 0000:00:14.3: Adding to iommu group 11
[    0.395284] pci 0000:00:18.0: Adding to iommu group 12
[    0.395287] pci 0000:00:18.1: Adding to iommu group 12
[    0.395291] pci 0000:00:18.2: Adding to iommu group 12
[    0.395295] pci 0000:00:18.3: Adding to iommu group 12
[    0.395299] pci 0000:00:18.4: Adding to iommu group 12
[    0.395303] pci 0000:00:18.5: Adding to iommu group 12
[    0.395307] pci 0000:00:18.6: Adding to iommu group 12
[    0.395311] pci 0000:00:18.7: Adding to iommu group 12
[    0.395329] pci 0000:03:00.0: Adding to iommu group 13
[    0.395341] pci 0000:03:00.1: Adding to iommu group 13
[    0.395354] pci 0000:03:00.2: Adding to iommu group 13
[    0.395357] pci 0000:20:00.0: Adding to iommu group 13
[    0.395359] pci 0000:20:01.0: Adding to iommu group 13
[    0.395362] pci 0000:20:02.0: Adding to iommu group 13
[    0.395364] pci 0000:20:03.0: Adding to iommu group 13
[    0.395367] pci 0000:20:04.0: Adding to iommu group 13
[    0.395370] pci 0000:20:08.0: Adding to iommu group 13
[    0.395375] pci 0000:21:00.0: Adding to iommu group 13
[    0.395377] pci 0000:22:00.0: Adding to iommu group 13
[    0.395382] pci 0000:23:00.0: Adding to iommu group 13
[    0.395387] pci 0000:24:00.0: Adding to iommu group 13
[    0.395393] pci 0000:25:00.0: Adding to iommu group 13
[    0.395398] pci 0000:26:00.0: Adding to iommu group 13
[    0.395403] pci 0000:2a:00.0: Adding to iommu group 13
[    0.395421] pci 0000:2b:00.0: Adding to iommu group 14
[    0.395437] pci 0000:2b:00.1: Adding to iommu group 14
[    0.395444] pci 0000:2c:00.0: Adding to iommu group 15
[    0.395454] pci 0000:2d:00.0: Adding to iommu group 16
[    0.395464] pci 0000:2d:00.1: Adding to iommu group 17
[    0.395473] pci 0000:2d:00.3: Adding to iommu group 18
[    0.395482] pci 0000:2d:00.4: Adding to iommu group 19
[    0.397759] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.397961] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    0.870216] igb: Intel(R) Gigabit Ethernet Network Driver
[    0.870217] igb: Copyright (c) 2007-2014 Intel Corporation.
[    0.899886] igb 0000:23:00.0: Intel(R) Gigabit Ethernet Network Connection
[    0.931768] igb 0000:24:00.0: Intel(R) Gigabit Ethernet Network Connection
[    7.806655] systemd[1]: Mounting Kernel Debug File System...
[    7.812268] systemd[1]: Mounted Kernel Debug File System.
[    7.888772] Disabling lock debugging due to kernel taint
[  143.737028] vfio-pci 0000:2b:00.0: vfio_bar_restore: reset recovery - restoring BARs
 
When I boot into the Windows VM, I see my GPU in Device Manager, but the properties window shows the error Code 43. I've tried installing drivers, but the VM seems to crash (it kicks me out and I lose the QEMU console). I feel so close to a functioning passthrough. :/

Any suggestions how to troubleshoot this?
 
I don't understand what happened. I tried to install the drivers, the VM kicked me out and, I assume, rebooted; now Windows doesn't show my GPU in Display Adapters anymore. Instead it shows up as a SCSI controller with an exclamation point, saying those drivers are not installed (error Code 28).

Why is my GPU suddenly displaying as the incorrect device? I haven't changed anything on the Proxmox side.
 
Windows is back to displaying my RTX 3060 in Device Manager, but with the Code 43 error. When I install drivers, the VM crashes.

Here is a log from dmesg, can anyone help me understand what might be going on?
I annotated the log below with the point where I turned on the VM. Part of the log likely includes me performing a "qm stop <vmid>" and then trying to start it again; it's not clear to me where those log statements would be.

Code:
root@pve:~# dmesg | grep -e IOMMU -e iommu -e vfio -e Intel -e bug -e Bug -e BAR
[    0.000000]   Intel GenuineIntel
[    0.354485] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[    0.360460] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.373911] pci 0000:2b:00.0: BAR 1: assigned to efifb
[    0.375889] iommu: Default domain type: Translated 
[    0.375889] iommu: DMA domain TLB invalidation policy: lazy mode 
[    0.395360] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.395380] pci 0000:00:01.0: Adding to iommu group 0
[    0.395389] pci 0000:00:01.3: Adding to iommu group 1
[    0.395394] pci 0000:00:02.0: Adding to iommu group 2
[    0.395403] pci 0000:00:03.0: Adding to iommu group 3
[    0.395409] pci 0000:00:03.1: Adding to iommu group 4
[    0.395415] pci 0000:00:04.0: Adding to iommu group 5
[    0.395420] pci 0000:00:05.0: Adding to iommu group 6
[    0.395428] pci 0000:00:07.0: Adding to iommu group 7
[    0.395435] pci 0000:00:07.1: Adding to iommu group 8
[    0.395443] pci 0000:00:08.0: Adding to iommu group 9
[    0.395450] pci 0000:00:08.1: Adding to iommu group 10
[    0.395457] pci 0000:00:14.0: Adding to iommu group 11
[    0.395461] pci 0000:00:14.3: Adding to iommu group 11
[    0.395476] pci 0000:00:18.0: Adding to iommu group 12
[    0.395480] pci 0000:00:18.1: Adding to iommu group 12
[    0.395484] pci 0000:00:18.2: Adding to iommu group 12
[    0.395488] pci 0000:00:18.3: Adding to iommu group 12
[    0.395493] pci 0000:00:18.4: Adding to iommu group 12
[    0.395496] pci 0000:00:18.5: Adding to iommu group 12
[    0.395500] pci 0000:00:18.6: Adding to iommu group 12
[    0.395504] pci 0000:00:18.7: Adding to iommu group 12
[    0.395522] pci 0000:03:00.0: Adding to iommu group 13
[    0.395534] pci 0000:03:00.1: Adding to iommu group 13
[    0.395547] pci 0000:03:00.2: Adding to iommu group 13
[    0.395549] pci 0000:20:00.0: Adding to iommu group 13
[    0.395552] pci 0000:20:01.0: Adding to iommu group 13
[    0.395555] pci 0000:20:02.0: Adding to iommu group 13
[    0.395557] pci 0000:20:03.0: Adding to iommu group 13
[    0.395560] pci 0000:20:04.0: Adding to iommu group 13
[    0.395563] pci 0000:20:08.0: Adding to iommu group 13
[    0.395568] pci 0000:21:00.0: Adding to iommu group 13
[    0.395570] pci 0000:22:00.0: Adding to iommu group 13
[    0.395575] pci 0000:23:00.0: Adding to iommu group 13
[    0.395580] pci 0000:24:00.0: Adding to iommu group 13
[    0.395585] pci 0000:25:00.0: Adding to iommu group 13
[    0.395590] pci 0000:26:00.0: Adding to iommu group 13
[    0.395596] pci 0000:2a:00.0: Adding to iommu group 13
[    0.395614] pci 0000:2b:00.0: Adding to iommu group 14
[    0.395630] pci 0000:2b:00.1: Adding to iommu group 14
[    0.395636] pci 0000:2c:00.0: Adding to iommu group 15
[    0.395647] pci 0000:2d:00.0: Adding to iommu group 16
[    0.395657] pci 0000:2d:00.1: Adding to iommu group 17
[    0.395666] pci 0000:2d:00.3: Adding to iommu group 18
[    0.395676] pci 0000:2d:00.4: Adding to iommu group 19
[    0.397957] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.398173] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    0.865438] igb: Intel(R) Gigabit Ethernet Network Driver
[    0.865439] igb: Copyright (c) 2007-2014 Intel Corporation.
[    0.894278] igb 0000:23:00.0: Intel(R) Gigabit Ethernet Network Connection
[    0.926107] igb 0000:24:00.0: Intel(R) Gigabit Ethernet Network Connection
[    7.062715] systemd[1]: Mounting Kernel Debug File System...
[    7.068506] systemd[1]: Mounted Kernel Debug File System.
[    7.073471] vfio-pci 0000:2b:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[    7.090614] vfio_pci: add [10de:2504[ffffffff:ffffffff]] class 0x000000/00000000
[    7.110579] vfio_pci: add [10de:228e[ffffffff:ffffffff]] class 0x000000/00000000
[    7.200463] Disabling lock debugging due to kernel taint
>>>>>> TURN ON VM <<<<<<<<
[  175.071087] vfio-pci 0000:2b:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[  175.071110] vfio-pci 0000:2b:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[  175.071118] vfio-pci 0000:2b:00.0: vfio_ecap_init: hiding ecap 0x26@0xc1c
[  175.071119] vfio-pci 0000:2b:00.0: vfio_ecap_init: hiding ecap 0x27@0xd00
[  175.071120] vfio-pci 0000:2b:00.0: vfio_ecap_init: hiding ecap 0x25@0xe00
[  175.090915] vfio-pci 0000:2b:00.1: enabling device (0000 -> 0002)
[  175.091027] vfio-pci 0000:2b:00.1: vfio_ecap_init: hiding ecap 0x25@0x160
[  189.695874] vfio-pci 0000:2b:00.0: vfio_bar_restore: reset recovery - restoring BARs
[  636.012484] vfio-pci 0000:2b:00.1: vfio_bar_restore: reset recovery - restoring BARs
[  638.224453] vfio-pci 0000:2b:00.1: vfio_bar_restore: reset recovery - restoring BARs
[  638.256759] vfio-pci 0000:2b:00.0: vfio_bar_restore: reset recovery - restoring BARs
[  639.000615] vfio-pci 0000:2b:00.0: timed out waiting for pending transaction; performing function level reset anyway
[  640.248605] vfio-pci 0000:2b:00.0: not ready 1023ms after FLR; waiting
[  641.304588] vfio-pci 0000:2b:00.0: not ready 2047ms after FLR; waiting
[  643.416310] vfio-pci 0000:2b:00.0: not ready 4095ms after FLR; waiting
[  647.768467] vfio-pci 0000:2b:00.0: not ready 8191ms after FLR; waiting
[  656.216316] vfio-pci 0000:2b:00.0: not ready 16383ms after FLR; waiting
[  674.391910] vfio-pci 0000:2b:00.0: not ready 32767ms after FLR; waiting
[  709.207269] vfio-pci 0000:2b:00.0: not ready 65535ms after FLR; giving up
[  709.591096] vfio-pci 0000:2b:00.1: can't change power state from D0 to D3hot (config space inaccessible)
[  710.327177] vfio-pci 0000:2b:00.0: timed out waiting for pending transaction; performing function level reset anyway
[  711.575236] vfio-pci 0000:2b:00.0: not ready 1023ms after FLR; waiting
[  712.631204] vfio-pci 0000:2b:00.0: not ready 2047ms after FLR; waiting
[  714.839204] vfio-pci 0000:2b:00.0: not ready 4095ms after FLR; waiting
[  719.191097] vfio-pci 0000:2b:00.0: not ready 8191ms after FLR; waiting
[  727.638682] vfio-pci 0000:2b:00.0: not ready 16383ms after FLR; waiting
[  746.070603] vfio-pci 0000:2b:00.0: not ready 32767ms after FLR; waiting
[  780.885877] vfio-pci 0000:2b:00.0: not ready 65535ms after FLR; giving up
[  780.887462] vfio-pci 0000:2b:00.0: can't change power state from D0 to D3hot (config space inaccessible)
[  781.621879] vfio-pci 0000:2b:00.0: timed out waiting for pending transaction; performing function level reset anyway
[  782.869793] vfio-pci 0000:2b:00.0: not ready 1023ms after FLR; waiting
[  783.925825] vfio-pci 0000:2b:00.0: not ready 2047ms after FLR; waiting
[  786.005886] vfio-pci 0000:2b:00.0: not ready 4095ms after FLR; waiting
[  790.357706] vfio-pci 0000:2b:00.0: not ready 8191ms after FLR; waiting
[  798.805544] vfio-pci 0000:2b:00.0: not ready 16383ms after FLR; waiting
[  815.701222] vfio-pci 0000:2b:00.0: not ready 32767ms after FLR; waiting
[  850.516596] vfio-pci 0000:2b:00.0: not ready 65535ms after FLR; giving up
[  852.322281] vfio-pci 0000:2b:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  852.322683] vfio-pci 0000:2b:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  853.044546] vfio-pci 0000:2b:00.0: timed out waiting for pending transaction; performing function level reset anyway
[  854.292639] vfio-pci 0000:2b:00.0: not ready 1023ms after FLR; waiting
[  855.348371] vfio-pci 0000:2b:00.0: not ready 2047ms after FLR; waiting
[  857.428552] vfio-pci 0000:2b:00.0: not ready 4095ms after FLR; waiting
[  861.780391] vfio-pci 0000:2b:00.0: not ready 8191ms after FLR; waiting
[  870.228036] vfio-pci 0000:2b:00.0: not ready 16383ms after FLR; waiting
[  887.379786] vfio-pci 0000:2b:00.0: not ready 32767ms after FLR; waiting
 
[ 639.000615] vfio-pci 0000:2b:00.0: timed out waiting for pending transaction; performing function level reset anyway
[ 640.248605] vfio-pci 0000:2b:00.0: not ready 1023ms after FLR; waiting
[ 641.304588] vfio-pci 0000:2b:00.0: not ready 2047ms after FLR; waiting
[ 643.416310] vfio-pci 0000:2b:00.0: not ready 4095ms after FLR; waiting
[ 647.768467] vfio-pci 0000:2b:00.0: not ready 8191ms after FLR; waiting
[ 656.216316] vfio-pci 0000:2b:00.0: not ready 16383ms after FLR; waiting
[ 674.391910] vfio-pci 0000:2b:00.0: not ready 32767ms after FLR; waiting
[ 709.207269] vfio-pci 0000:2b:00.0: not ready 65535ms after FLR; giving up
[ 709.591096] vfio-pci 0000:2b:00.1: can't change power state from D0 to D3hot (config space inaccessible)
Looks like your GPU does not reset properly. Maybe it works (perhaps only once) if you boot your system using another GPU? Be aware that adding/removing PCI(e) devices can change the PCI IDs of other devices. Maybe this particular GPU does not work with passthrough, or maybe it needs a special reset procedure (like vendor-reset for AMD). Dropping the GPU from the PCI bus to reset it appears to work for NVIDIA sometimes.

EDIT: Some AM4 motherboard BIOS versions have broken passthrough in a way that gives the same logs. Maybe update to the latest BIOS version? However, sometimes the latest is broken and another version (or a beta version) works better. You would need a PCIe device that is known to work with passthrough to rule out the motherboard BIOS.
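The "drop it from the PCI bus" trick mentioned above is usually done through sysfs. A sketch, assuming the 0000:2b:00.x addresses from this thread (run as root on the host, with the VM stopped):

```shell
# Remove both GPU functions from the PCI bus, then rescan to re-enumerate them.
# Addresses are the RTX 3060 functions from this thread; adjust for your system.
echo 1 > /sys/bus/pci/devices/0000:2b:00.1/remove
echo 1 > /sys/bus/pci/devices/0000:2b:00.0/remove
sleep 1
echo 1 > /sys/bus/pci/rescan
```

Whether this actually clears the failed-FLR state varies by card; some people script it to run between VM stop and start.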
 
If you read and apply different tutorials you find online, as you mention, just pick one and follow it properly; different versions have different settings, but not much changes between them. Basically: disconnect your modem, install Proxmox 7.1-2, add the 8-9 settings, boot a Xubuntu live ISO, and passthrough does work fine.
 
I have a 3060 12GB and was also having issues getting GPU passthrough working without crashes that would take down my entire node. Something of a solution for me was to disable "All Functions." This lets me use my GPU for Stable Diffusion purposes; it probably wouldn't be the greatest solution for a game server, as I don't believe the audio is passed through.
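In config terms, disabling "All Functions" means passing through only the VGA function instead of the whole device. A sketch using the VM ID and device address from this thread (assumptions if your setup differs):

```shell
# Pass through only function 0 (VGA) of the GPU, leaving 2b:00.1 (audio) on the host.
# VM ID 103 and address 0000:2b:00 are the ones from this thread.
qm set 103 --hostpci0 0000:2b:00.0,pcie=1,x-vga=1
```

With "All Functions" enabled, the entry would instead name the whole device (`0000:2b:00`), pulling in the audio function as well.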
 
I had a similar issue: my 3060 showed up in Device Manager but with Code 43. My issue was that I had forgotten to check "PCI-Express" in the Proxmox mapping! Just posting this in case others have the same issue :)
 
I had a similar issue: my 3060 showed up in Device Manager but with Code 43. My issue was that I had forgotten to check "PCI-Express" in the Proxmox mapping! Just posting this in case others have the same issue :)
You're a legend! I've been tinkering with my box half the day because I added a couple Tesla P40's and had to change the slot my 3060 was in. This is literally all it was the whole time.
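On the CLI, the "PCI-Express" checkbox corresponds to the pcie=1 flag on the hostpci entry (a sketch with this thread's VM ID and device address):

```shell
# Equivalent of ticking "PCI-Express" in the GUI; requires the q35 machine type.
qm set 103 --hostpci0 0000:2b:00,pcie=1,x-vga=1
```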
 
