Nvidia GTX 1650 passthrough works with PVE 7.0 but doesn't work with PVE 8.3

rjcab

Hello,

I have a PC with an Nvidia GTX 1650 PCIe card. With Proxmox 7.0 I used the following configuration to pass the GPU through to my Windows 10 VM:

Code:
/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off"

/etc/modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

then update-grub

/etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1f82,10de:10fa disable_vga=1

then update-initramfs -u

/etc/modprobe.d/blacklist.conf
blacklist nvidia
blacklist nouveau
blacklist radeon
blacklist i2c_nvidia_gpu
blacklist nvidiafb

then reboot
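
For anyone following along, a quick sanity check after the reboot is to confirm that the IOMMU is active and that vfio-pci has claimed the card (a minimal sketch; 10de:1f82/10de:10fa are the IDs from the vfio.conf above, adjust for your own card):

Code:
# confirm the kernel enabled the IOMMU
dmesg | grep -e DMAR -e IOMMU

# confirm vfio-pci is the driver bound to the GPU and its audio function
# (10de:1f82 / 10de:10fa are the IDs used above - replace with yours)
lspci -nnk -d 10de:1f82
lspci -nnk -d 10de:10fa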

I did a fresh install of Proxmox 8.3, applied the same configuration as above, and restored my Windows VM from Proxmox 7.0:

1742028520605.png
The only difference in the VM config is that I used resource mappings:

1742028648206.png

The VM tries to start, but then stops and I get the following error:

1742028707984.png

Frankly, I don't know why. Do you have any ideas?

Thanks
 
Revert those manual changes or start from a fresh installation. I've observed that you don't need them on 8.3 (with Nvidia cards).
Just create the VM, add the PCIe device like this and start the VM:
Bildschirmfoto zu 2025-03-15 16-01-07.png

Edit: Oh and...I don't use mappings at all.
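
For reference, the CLI equivalent would be something like this (just a sketch; 100 is a hypothetical VM ID and 01:00 a hypothetical PCI address of the GPU, check yours with lspci):

Code:
# attach the whole card (all functions) directly, without a resource mapping
# 100 = example VM ID, 01:00 = example PCI address of the GTX 1650
# append ,x-vga=1 if the card should act as the VM's primary GPU
qm set 100 --hostpci0 01:00,pcie=1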
 
Thanks for your reply. I followed your advice, but I get the same error. I don't understand why it doesn't work with 8.3 while it works with 7.0 with the same configuration.
 
Strange. The difference I see is the machine type: I have Q35 instead of i440fx, but switching that on an already existing Windows installation will stop it from booting.

Could you try to create a new VM with the Q35 machine type, pass through the GPU and boot a Windows ISO, just to quickly check whether that works?
If that gives the same error message, your motherboard might be doing something strange with the IOMMU groups.
Special notes about that: https://pve.proxmox.com/wiki/PCI(e)_Passthrough

PCI(e) slots
Some platforms handle their physical PCI(e) slots differently. So, sometimes it can help to put the card in another PCI(e) slot if you do not get the desired IOMMU group separation.
Unsafe interrupts
For some platforms, it may be necessary to allow unsafe interrupts. For this, add the following line to a file ending with ‘.conf’ in /etc/modprobe.d/:
options vfio_iommu_type1 allow_unsafe_interrupts=1
Please be aware that this option can make your system unstable.
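
To check the IOMMU group separation mentioned above, a loop like the following can be used (just a sketch that lists every device per group):

Code:
# list all PCI devices grouped by IOMMU group
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done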
 
Just to make things clearer:

With Proxmox 7.0, the VM below works well with GPU passthrough for the GeForce GTX 1650.
If I make a fresh install of Proxmox 8.3, apply the config from the first post and restore the VM (screenshot below), it doesn't work.

1742226651825.png
 
I made another test:
a fresh install of Proxmox 8.3 and a fresh install of a Win10 VM.
1742287808688.png

I did all the configuration as below:

/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off"

/etc/modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

then update-grub

/etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1f82,10de:10fa disable_vga=1

then update-initramfs -u

/etc/modprobe.d/blacklist.conf
blacklist nvidia
blacklist nouveau
blacklist radeon
blacklist i2c_nvidia_gpu
blacklist nvidiafb

then reboot

Then I add the PCIe device:

1742288004650.png

I start the VM to install the Nvidia drivers, but at that point Proxmox crashes: I lose the web GUI and SSH connection.

:(
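
If it happens again, the logs of the crashed boot might show what took the host down (a sketch; the -b -1 option needs a persistent journal, which Proxmox normally has):

Code:
# after power-cycling the host, show errors from the previous (crashed) boot
journalctl -b -1 -p err
# kernel messages from that boot, e.g. vfio/IOMMU related
journalctl -b -1 -k | grep -i -e vfio -e iommu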
 
Just noticed: try only 1 socket with Windows VMs; this alone can be a problem with Windows.
Also, Win10/Win11 prefers a UEFI BIOS and not an "IDE" disk; choose SATA.
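
If the VM was created with two sockets, that can also be changed on the CLI (just a sketch; 100 is a hypothetical VM ID):

Code:
# example: one socket with four cores instead of 2 sockets x 2 cores
qm set 100 --sockets 1 --cores 4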
 
Thanks for the tips. In my current config the VM runs well as long as I don't add the PCIe device for the graphics card. :(
 
Well, no success so far, so I did a fresh install of Proxmox 7.4, then:

1742411350244.png

update-grub
update-initramfs -u -k all
reboot


1742411406361.png

then

root@pvetest:~# dmesg | grep -e DMAR -e IOMMU
root@pvetest:~#


and there is an issue, since there is no output.

Everything is configured correctly in the BIOS, since it works with Proxmox 7.0.
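
One more quick check (just a sketch): whether the option actually made it into the running kernel's command line:

Code:
# the running kernel's boot parameters - intel_iommu=on should show up here
cat /proc/cmdline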

I tried adding intel_iommu=on to /etc/kernel/cmdline and rebooting, but the result is the same.
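
Note that /etc/kernel/cmdline is normally only used when the host boots via systemd-boot (and needs a proxmox-boot-tool refresh afterwards); with GRUB the options belong in /etc/default/grub. A quick way to see how the host boots (a sketch):

Code:
# shows whether the ESPs are set up for grub or systemd-boot (when proxmox-boot-tool is in use)
proxmox-boot-tool status
# this directory only exists when the host booted in UEFI mode
ls /sys/firmware/efi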
 
Independently of how you boot the VMs: how does Proxmox itself boot? UEFI or CSM?