[SOLVED] GPU (Nvidia RTX3050) PCIe passthrough error - unable to start VM

Parlun

New Member
Jun 8, 2021
Hi Proxmox community!

I'm having trouble trying to pass through a GPU to a Win10 VM. I've followed several guides: some posted on this forum, the Proxmox VE wiki, and more general write-ups on how passthrough is performed from host to VM.

I'm running Proxmox VE 7.2 (pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-3-pve)).
Below are the outputs that some guides state to be of importance:

Code:
root@pveEpyc2022:~# lspci -v | grep -i nvidia
21:00.0 VGA compatible controller: NVIDIA Corporation Device 2507 (rev a1) (prog-if 00 [VGA controller])
        Kernel modules: nvidiafb, nouveau
21:00.1 Audio device: NVIDIA Corporation Device 228e (rev a1)
Code:
root@pveEpyc2022:~# dmesg 
...
[ 1774.787088] vfio-pci 0000:21:00.0: BAR 1: can't reserve [mem 0x38060000000-0x3806fffffff 64bit pref]
Code:
root@pveEpyc2022:~# dmesg | grep -e DMAR -e IOMMU
root@pveEpyc2022:~# dmesg | grep -e DMAR
Judging by the "dmesg" output above, the kernel cannot reserve the GPU's memory region (BAR 1). In addition, the output of "dmesg | grep -e DMAR -e IOMMU" is empty; as far as I understand, DMAR messages are Intel-specific (VT-d), and on this AMD EPYC host the IOMMU messages are tagged AMD-Vi instead. It may also be that the boot messages have rotated out of the ring buffer since the last reboot.
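Perhaps a broader, case-insensitive grep that also matches the AMD-Vi messages, plus a check of the current kernel command line, would be more telling on an AMD host (standard dmesg/proc tooling):
Code:
root@pveEpyc2022:~# dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi
root@pveEpyc2022:~# cat /proc/cmdline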

I've blacklisted the various host drivers for the GPU, i.e. nvidiafb, nvidia and nouveau.
Code:
root@pveEpyc2022:~# cat /etc/modprobe.d/blacklist.conf
blacklist nouveau
blacklist nvidiafb
blacklist nvidia
blacklist nvidia_drm
blacklist radeon
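(For completeness: after any change under /etc/modprobe.d/ the guides say to rebuild the initramfs and reboot, otherwise the blacklist has no effect. Standard Debian/Proxmox commands:)
Code:
root@pveEpyc2022:~# update-initramfs -u -k all
root@pveEpyc2022:~# reboot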
Following a guide on this forum, there should also be a "disable_vga=1" in the vfio.conf file:
Code:
root@pveEpyc2022:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:2507,10de:228e disable_vga=1
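The guides also have the vfio modules loaded at boot via /etc/modules (on a 5.15 kernel like this one, vfio_virqfd is still a separate module):
Code:
root@pveEpyc2022:~# cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd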
The IDs are retrieved from the "lspci" command:
Code:
 root@pveEpyc2022:~# lspci -n | grep -i 21:00
21:00.0 0300: 10de:2507 (rev a1)
21:00.1 0403: 10de:228e (rev a1)
Code:
root@pveEpyc2022:~# cat /etc/modprobe.d/nvidia.conf
softdep nvidiafb pre: vfio-pci
I'm unsure why the "softdep" is necessary or what it aims to do.
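A guess: it forces vfio-pci to load before nvidiafb, so that vfio-pci claims the card first. Either way, the driver actually bound to the card can be checked with:
Code:
root@pveEpyc2022:~# lspci -nnk -s 21:00
If everything is set up correctly, the "Kernel driver in use:" line should say vfio-pci for both functions.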
The config-file of the VM that should make use of the GPU is as follows:
Code:
 root@pveEpyc2022:~# cat /etc/pve/qemu-server/107.conf
bios: ovmf
boot: order=ide0;ide2;net0
cores: 64
cpu: host
efidisk0: ISOs:107/vm-107-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci0: 0000:21:00,pcie=1,x-vga=1
ide0: ISOs:107/vm-107-disk-1.qcow2,size=200G,ssd=1
ide2: ISOs:iso/virtio-win-0.1.208.iso,media=cdrom,size=543390K
machine: pc-q35-6.2
memory: 16384
meta: creation-qemu=6.2.0,ctime=1668161296
name: win107
net0: e1000=4E:8B:E0:09:8A:94,bridge=vmbr0,firewall=1
net1: e1000=1A:58:02:DF:3F:EE,bridge=vmbr2,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=919db345-9427-45ae-b838-50b124ed6215
sockets: 2
vmgenid: 96e25596-8d6e-4139-9ef6-becd18eea743
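One more check from the guides: the GPU should sit in its own IOMMU group (or share one only with its own audio function). The groups can be listed with:
Code:
root@pveEpyc2022:~# find /sys/kernel/iommu_groups/ -type l
If 21:00.0 and 21:00.1 share a group with unrelated devices, the whole group would have to be passed through together.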

Prior to adding the GPU, the VM would start and could be accessed via RDP or the VNC console.
I've previously, following the same guides listed above, successfully set up GPU passthrough with a different GPU and host. I don't get why it doesn't work here; I believe I have taken all the necessary steps.
Is there something about this GPU, the Nvidia RTX 3050, that makes it less suitable as a passthrough device?
Any ideas on why this doesn't work?

Please don't hesitate to request more logs or data from me in order to solve this issue.
 
Thank you!
I read that long thread, and the BAR 1 error boils down to this: according to the dmesg output, BAR 1 is claimed by efifb (the EFI framebuffer).
I added initcall_blacklist=sysfb_init to the kernel command line in "/etc/default/grub".
I can now start the VM, but the GPU does not show up inside it.
How can I check that the VM has actually received the passed-through device?
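For reference, the relevant part of /etc/default/grub now looks roughly like this (existing options kept, the new parameter appended), followed by regenerating the GRUB config and rebooting:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"
root@pveEpyc2022:~# update-grub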
 
I can now start the VM, but the GPU does not show up inside it.
How can I check that the VM has actually received the passed-through device?
On Linux you'd use lspci, but I don't know what to check if it does not show up in Windows Device Manager. Do you see output on a physical display connected to the GPU?
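From the host side you could also check that the device is actually handed to QEMU by inspecting the generated command line (qm showcmd is a stock Proxmox tool; the exact output format may vary):
Code:
root@pveEpyc2022:~# qm showcmd 107 | tr ' ' '\n' | grep vfio
A line like -device vfio-pci,host=0000:21:00.0,... should appear if the hostpci entry is being picked up.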
 
On Linux you'd use lspci, but I don't know what to check if it does not show up in Windows Device Manager. Do you see output on a physical display connected to the GPU?
After a reboot it now shows as a display adapter.
 
