[SOLVED] Adding PCI Express Root bus device to the config file

Sandbo
Jul 4, 2019
Hello,

Sorry if this question is very simple, but I have no idea where to add the device mentioned in the title, which comes from this link:
https://forum.level1techs.com/t/increasing-vfio-vga-performance/133443
as an attempt to fix the AMD reset bug.

I know that the config files of the VMs sit in the folder /etc/pve/qemu-server.
However, they do not look like the QEMU file the forum posters were working with:
https://forum.level1techs.com/t/increasing-vfio-vga-performance/133443/20?u=sandbo

May I know how the same change may be applied?
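For reference, the PVE config is a plain key/value file, and raw QEMU options can in principle be appended with an args: line. A minimal sketch of what that would look like (VM ID 100 and the device values are hypothetical, loosely following the linked thread):

Code:
# /etc/pve/qemu-server/100.conf (excerpt; 100 is a hypothetical VM ID)
machine: q35
hostpci0: 01:00,pcie=1,x-vga=1
# raw QEMU options can be appended like this:
args: -device pcie-root-port,id=root_port1,chassis=0,slot=0,bus=pcie.0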

(Solution: please see post #2 for the reply from Stefan.)
 
We already do all of what that thread says automatically :) The root device is created by PVE, and the bandwidth settings are detected automatically in newer versions of QEMU.

So no changes to the config are necessary with regard to the AMD reset fix, but you will (probably) have to compile your own kernel to include the fixes. Check our git for more information.
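For the curious, a rough sketch of what building a patched kernel from the Proxmox sources might look like (assuming a Debian build host with the build dependencies installed; the patch file name is hypothetical, and the repo layout may differ between releases):

Code:
# clone the Proxmox kernel packaging repo
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
# drop the reset patch into the patch queue (file name is hypothetical)
cp ~/navi-reset.patch patches/kernel/
# build the .deb packages, then install the resulting kernel
make
dpkg -i pve-kernel-*.deb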
 
Thanks, Stefan, for your reply and ideas; this is good but also sad to know [not a fix :( ].
I hope AMD can do something; it's hard to have a huge workstation when I cannot easily allocate its resources to VMs.
 

It would be nice to see the passthrough Wiki updated. It’s no longer clear what works and what doesn’t, or what is now built into the kernel and what isn’t. I see more threads related to passthrough than anything else, and it’s also my biggest struggle with an otherwise fantastically working product.
 
On this, I also wonder if something has changed with the passthrough settings. In particular, I found that IOMMU was enabled even before I had set the flags in /etc/default/grub. In fact, I can now pass my GPU and an LSI RAID card to two guests without touching those settings.

IOMMU interrupt remapping was also working before I made the above change.
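For anyone who wants to verify the same thing on their own box, these are the usual checks (a sketch; the exact dmesg wording varies by kernel and platform):

Code:
# check whether the IOMMU is enabled
dmesg | grep -e DMAR -e IOMMU
# check whether interrupt remapping is active
dmesg | grep remapping
# the classic flag in /etc/default/grub, for comparison
# (intel_iommu=on on Intel, amd_iommu=on on AMD):
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"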
 
We just recently overhauled the Wiki to bring it up to date, but PCI(e) passthrough is an extraordinarily complex topic with many facets to it. The biggest hurdle is usually hardware support - i.e. most forum threads come to the same conclusion: that the OP's hardware was doing something funky.

We try our best to document the consistent parts of PCIe passthrough, but every hardware setup is different, and we cannot test it all. The kernel is also constantly evolving, adding more features, enabling/disabling some and breaking others.

And then there's a good portion of threads related to AMD Navi/Vega support (like this one), which is something the kernel never supported and was only ever available through very hacky patches (I've personally used them, and trust me, they're not a fix so much as a band-aid). We do not support such patches, so the only comment on those is to build your own kernel - which you can always do, but again, that's not a configuration we can easily support or help you with.
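As an aside, when hardware is "doing something funky", a common first diagnostic is to look at how the firmware grouped the devices, since passthrough works per IOMMU group. A small sketch:

Code:
# list every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"; lspci -nns "${d##*/}"
done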

I don't remember seeing any related changes, but as mentioned above, PCIe passthrough is (and will be for the foreseeable future) an advanced and sometimes unstable topic :)
 

I do appreciate this, but steps like the vfio drivers being included in the kernel should be reflected in the wiki, as that is the same for all hardware. I will agree that it’s mostly a hardware-dependent feature... maybe common error messages could be explained? If someone new to Proxmox gets Error 1, they probably won’t know what that means.
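For example, the module step from older guides looks like this (a sketch; on current kernels some or all of these may already be built in or loaded automatically):

Code:
# /etc/modules -- load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
# verify after a reboot with: lsmod | grep vfio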
 
