[SOLVED] Regarding the conflict between vgpu and pci devices


New Member
Nov 29, 2023
Hello everyone, I encountered a problem about vgpu.
I successfully installed the vgpu driver according to the documentation and assigned the vgpu to a virtual machine.
However, I found that another virtual machine conflicted with the vGPU. That VM has a network card and a RAID controller passed through to it, which causes the NVIDIA driver to fail.
When I power on the VM that has the vGPU, the other VM won't power on; and after starting the other VM, the NVIDIA driver throws an error.
Searching the forum, I found a post describing a vfio-pci conflict with vGPU, but that should only apply to the same GPU, not to other PCI devices.
If you have troubleshooting ideas or need me to provide more information, please reply.
Thank you so much!
My fault, I forgot the version information:
Proxmox VE 8.1, NVIDIA driver version 535.129.03, kernel (uname -r) 6.5.11-5-pve
In the end, I found the cause: all of these PCIe devices were in the same IOMMU group, which is what produced the conflict.

I split them into separate groups and that solved the problem.
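To see which devices end up in which group, a minimal sketch that walks the standard sysfs layout (assumes `lspci` from pciutils is installed; devices sharing a group number cannot be passed through to different VMs independently):

```shell
#!/bin/sh
# List every PCI device under its IOMMU group via /sys/kernel/iommu_groups.
list_iommu_groups() {
    if [ -d /sys/kernel/iommu_groups ]; then
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for dev in "$g"/devices/*; do
                # lspci -nns prints the device description with vendor:device IDs
                echo "  $(lspci -nns "${dev##*/}")"
            done
        done
    else
        echo "IOMMU not enabled: /sys/kernel/iommu_groups does not exist"
    fi
}

list_iommu_groups
```

If the GPU, the network card, and the RAID controller all show up under the same group number, that matches the conflict described above.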

The method I used to split the groups is not quite the same as the one in the documentation, so the two can be cross-referenced.

Edit the kernel command line in /etc/default/grub:

vim /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
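After editing, the grub config has to be regenerated and the host rebooted before the new grouping takes effect. A sketch of the verification step (the `has_acs_override` helper is mine, not from the original post; `update-grub` and a reboot must be run as root first):

```shell
#!/bin/sh
# Regenerate grub config and reboot (run manually as root):
#   update-grub
#   reboot
#
# After the reboot, confirm the running kernel actually picked up the flag.
has_acs_override() {
    # Returns success if the given cmdline file contains the ACS override flag.
    grep -q 'pcie_acs_override=downstream,multifunction' "$1"
}

if has_acs_override /proc/cmdline; then
    echo "ACS override active"
else
    echo "ACS override NOT active"
fi
```

Note that pcie_acs_override is a patch carried by the Proxmox kernel, not an upstream guarantee of isolation; it tells the kernel to treat downstream ports and multifunction devices as isolated even when the hardware does not report ACS support.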

