Your way of passing the romfile doesn't look right either. Please use the exact format below and bear with me.

hostpci0: 0000:01:00,pcie=1,rombar=0,romfile=Yourrombios.rom
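
Note that Proxmox expects the romfile to be located in /usr/share/kvm/, so copy your dumped ROM there first (the filename here is just a placeholder):

-----
cp Yourrombios.rom /usr/share/kvm/
-----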

By passing the ROM file and setting Display to Virtio-gpu during driver installation, you should see your screen light up (Virtio-gpu is your main display and your GPU's monitor is the 2nd screen). If the screen doesn't light up after the driver is installed, try the other DP or HDMI ports.
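
If you prefer the CLI over the web UI, the same display setting can be applied with qm (VMID 100 is just an example):

-----
qm set 100 --vga virtio
-----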

Once the GPU works as the 2nd screen, reboot and check that it still comes up. Once it's working properly, you can change it to primary.
I have passed my romfile exactly as you described before, but it didn't work.

I'll try this again though, thank you.
 
Sorry, mate. I did some research online. It looks like both the Nvidia 5000 and AMD 9070 series have some sort of reset bug. (And I thought we were done worrying about reset bugs for good, but here we are.)
https://forum.level1techs.com/t/do-...series-has-reset-bug-in-vm-passthrough/228549

And the (driver-related) black screen issue has somehow happened to other people on bare metal too. But you said the same driver works OK for you, so I guess your driver version is actually fine. It looks like these newer cards might require some special tweaks. The good thing is you are not alone.
Hopefully someone will share a working config soon.
 
Yeah, unfortunately I stumbled across a few forums talking about this issue too. What a bummer; I sure hope this gets fixed soon.

Thank you for taking the time to help me tho, I hope you have a good day :)
 
Have you missed any settings that make vfio-pci load before the GPU drivers?

/etc/modprobe.d/vfio.conf
-----
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
-----
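
After editing that file, refresh the initramfs so the softdep entries take effect on the next boot (standard on a Debian-based Proxmox install):

-----
update-initramfs -u -k all
-----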

If that doesn't work, try adding the vfio-pci.ids= setting to the kernel boot parameters.
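
A sketch of what that can look like on a GRUB-booted host (the IDs below are placeholders; find your GPU's vendor:device pairs with lspci -nn, and include the card's audio function as well):

/etc/default/grub
-----
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=1002:744c,1002:ab30"
-----

Then run update-grub and reboot.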

The link below provides a solution to the problem of Windows 11 throwing an "unsupported processor" error when CPU=host.
https://forum.proxmox.com/threads/a...pdate-kb5060842-its-preview-kb5058499.166828/
 
Hi all,
Revisiting this thread ever since i fixed the issue.

If your situation is the same as mine, then this will apply to you and I hope it will save you time (and money on riser cable replacements, which didn't work out for me). This applies especially if you built an mATX or ITX system and use a riser cable for the GPU.
My issue:
1) Reboots, random hangs, etc.
2) Serious crashes on the Proxmox host itself
3) All of this happening within 30 minutes to 1 hour of starting the Windows VM with GPU passthrough.

Testing with a pure bare-metal Windows install: no problem. This will immediately make you suspect that Proxmox is the problem. But not in this case.

The problems went away for me once the GPU was plugged directly into the motherboard PCIe slot, without any riser cable.
So the verdict: don't use a riser cable if you want to install anything other than bare-metal Windows.

And GPU passthrough in the current setup is super easy. No messy or elaborate configs are required on the VM, and nothing special at all in /proc/cmdline (I boot via UEFI). Just 'intel_iommu=on iommu=pt' will do.
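
For anyone wondering where those parameters go: on a GRUB-booted Proxmox they belong in /etc/default/grub (then run update-grub), while a UEFI install on ZFS uses systemd-boot and takes them in /etc/kernel/cmdline (then run proxmox-boot-tool refresh). For example, with GRUB:

/etc/default/grub
-----
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
-----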

So Proxmox is a good product. It's these PCIe 4.0 riser cables that create issues. In my case, no amount of motherboard updates or riser cable swaps ever fixed the problem. It was purely a hardware issue with these riser cables.

So if your problem starts like mine, and you verify it by taking everything out of the case to assemble and test every part and connection, and it only breaks once it's back in the case with a riser cable, then I hope my experience will save you the time! Happy Proxmoxing!
 
Hi, thank you for your response, but I've actually already fixed the problem and just forgot to mark the issue as solved :).
I can't mark it as solved anymore because the time window to edit the original post is over, so I am just going to list the fix here.

The solution for me involved a kernel parameter I had to use when installing Proxmox: the installer wouldn't recognize any of my disks, so I had to add pci=nommconf to the installer's GRUB parameters. That, in turn, kinda "baked" pci=nommconf into /proc/cmdline, and I couldn't remove it in any way since it wasn't in /etc/default/grub.
That, unbeknownst to me, caused the passthrough not to work properly and caused the issues I previously listed.

At one point I realized that parameter was the only thing I had never changed, so I tried installing Proxmox without it, and 1) it recognized my disks anyway (probably because they were already initialized with GPT) and 2) the passthrough WORKED!

So, SOLUTION: check if you have pci=nommconf anywhere in your configuration and remove it if you can, or just reinstall Proxmox from scratch.
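
A quick way to check (a sketch; /etc/kernel/cmdline only exists on systemd-boot installs, hence the 2>/dev/null):

-----
grep nommconf /proc/cmdline /etc/default/grub /etc/kernel/cmdline 2>/dev/null
-----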

Thank you everyone for your help!
 