It is simple: ReBAR off, disconnect from the internet. Install the Proxmox 8.2 ISO, set up the blacklist file by reading the guide properly, and stick to the official one. No vBIOS tricks needed there either.
Do a default install, then run a Xubuntu live ISO in the VM and, just like that: video out.
Then read up on error 43. If you spend less than two days on that, you won't get it working properly.
Follow that. Do I mention any updates? No. All of it works without internet.
 
Hi, are you suggesting that I should downgrade to Proxmox VE 8.2?

Also, if you read my other responses, you would have noticed that I mentioned trying a vBIOS only as a "last resort", and it didn't work with or without it.

It has been almost a month since I first had this issue, and I can assure you that I've done extensive research on the topic, especially error 43 (which doesn't show up in my case).
I have checked my configuration against many trusted sources, especially the official documentation, and I have redone it three times without any success.

Finally, could you elaborate further on this line please? I didn't really understand what I should do.
Do a default install, then run a Xubuntu live ISO in the VM and, just like that: video out.

Anyways, thank you for taking the time to respond to me.
 
Hi, thank you for your response.

All this romfile=vbios.bin,vendor-id=0x10de,x-vga=1 is completely unnecessary.

As I've already said, I tried the vBIOS only as a possible solution, not as my default configuration.

You need to pass through the entire NVIDIA card (which typically consists of a GPU function and an HDMI audio function).

0000:01:00.0 is a single function of the card; remove the trailing .0, make sure your Proxmox host has it blacklisted, and that no nouveau or NVIDIA drivers load for it.
I have tried passing both the single function and the entire card (without the .0) and it didn't work. The rest of the configuration (blacklisted drivers, etc.) is correct.

For ACS isolation issues, move the card to a different slot that doesn't have isolation issues; any decent motherboard should have at least one x16 slot that isn't shared with other devices.
I don't think my problem is related to ACS isolation, since passthrough works until I install the driver, and even if I wanted to move the card, the other two x16 slots on my motherboard only run at PCIe x1.
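
For reference, a minimal sketch of the two passthrough styles being discussed, as they would appear in the VM config (/etc/pve/qemu-server/<vmid>.conf); the 0000:01:00 address is just the example from above:

Code:
# pass a single function only (the GPU, without its HDMI audio)
hostpci0: 0000:01:00.0,pcie=1
# pass all functions of the card at once (GPU + HDMI audio)
hostpci0: 0000:01:00,pcie=1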
 
I don't know if it will be of any help at all, but when trying to pass my Intel Arc A310 through to my Windows 10/11 VMs, with it being the primary GPU and no iGPU available (i7-7820X CPU and Gigabyte X299 UD4 motherboard), I had to remove and rescan the device to take it away from Proxmox, since Proxmox by default keeps control of the main GPU for terminal video output and so on. It wouldn't pass through at all until I did this. That said, it sounds like a totally different problem, since you have gotten yours to pass before the driver install, whereas mine would error out because it was the primary GPU.

Here are the commands from my terminal that I used in the process. It's been some months since I did it, so I don't remember perfectly, but I went back through my history and grabbed the commands.

Code:
echo 1 > /sys/bus/pci/devices/0000\:67\:00.0/remove  # remove the HDMI audio controller
echo 1 > /sys/bus/pci/devices/0000\:68\:00.0/remove  # remove the GPU
echo 1 > /sys/bus/pci/rescan                         # rescan the whole bus (the per-device rescan node is gone once the device is removed)
ls /sys/bus/pci/devices/0000\:68\:00.0               # confirm the GPU re-appeared

# replace 67/68 with your own devices' PCI bus addresses (not the IOMMU group number)

NOTE: if this is the problem, not all motherboards will allow it; some reboot instantly if the GPU is taken away from the OS.
Which program should I use to execute this code? Do I need to use Command Prompt or PowerShell? Thanks
 
Your BIOS settings don't look right.

Resizable BAR: Disabled (not Enabled)
Above 4G decoding: Disabled (not Enabled)
IOMMU: Enabled (not Auto)
Preferred GPU Setting: External (it looks like, in your case, you need to select the eGPU at boot if that's the only Windows VM you boot up)

Only with those settings done can you proceed with GPU passthrough. Once you get it working, you can try relaxing those settings and see if it still works without them.

If I understand you correctly, you are trying to blacklist the NVIDIA driver on the host, right? Try the following:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset amd_iommu=on iommu=pt initcall_blacklist=sysfb_init"
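
For context, that GRUB_CMDLINE_LINUX_DEFAULT line goes in /etc/default/grub on the host. A minimal sketch of checking that it took effect after the reboot:

Code:
cat /proc/cmdline                 # should now include amd_iommu=on iommu=pt
dmesg | grep -i -e DMAR -e IOMMU  # confirms the IOMMU actually initialised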

VFIO modules in /etc/modules:

Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd



For /etc/modprobe.d/vfio.conf, use the following (I've got multiple GPUs, all configured like this, including the iGPU):
Code:
options vfio-pci ids=xxxxx,xxxx disable_vga=1
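
If it helps, the IDs for that ids= list can be read straight from lspci; a sketch assuming an NVIDIA card at 01:00 (the address and the xxxx IDs are placeholders):

Code:
lspci -nn | grep -i nvidia
# 01:00.0 VGA compatible controller [0300]: NVIDIA ... [10de:xxxx]
# 01:00.1 Audio device [0403]: NVIDIA ... [10de:xxxx]
# -> options vfio-pci ids=10de:xxxx,10de:xxxx disable_vga=1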


For the blacklist, I have many GPUs and basically blacklisted all the drivers:
/etc/modprobe.d/pve-blacklist.conf

Code:
blacklist nvidiafb
blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist amdgpu
blacklist snd_hda_intel

softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci

After the change, run update-grub and update-initramfs -u -k all before rebooting.
Once you get it working, simplify/relax some of the settings. (Some of them are not required; you may be able to remove them.)
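
One way to check that the blacklist/VFIO setup actually took hold after the reboot (a sketch; 01:00 is an example address):

Code:
lspci -nnk -s 01:00
# both functions should report: Kernel driver in use: vfio-pci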
 
Thank you for the reply. My current setup is a Hyper-V VM with GPU passthrough using GPU partitioning. The game I used to play won't start in Hyper-V VMs anymore, so I installed Proxmox VE on a different drive. First, I don't know how to use Proxmox VE yet. I learned I need another device to access its web interface, so I am planning to buy a cheap PC. I am not sure what this "blacklist" thing is; it must be Proxmox VE configuration.
 
Okay, so I've tried every suggestion you provided, but unfortunately it still doesn't work :(.

However, I attached my physical monitor to the GPU, and this time, instead of displaying an image like it used to (when no drivers were installed), it didn't output anything; the screen just went into standby.
 
Also, one thing I haven't mentioned (I'll update the original post ASAP with all the new info) is that as soon as the driver is installed, not only does passthrough not work and the VM crash on startup, but Windows in general becomes unstable, taking a long time to load apps and not registering clicks.
Moreover, if I disconnect from RDP after the driver is installed and try to reconnect, it prompts me for credentials but then fails to connect after a minute of loading.
 
Ok. I believe the drivers are not installed correctly.
1. Pass through the whole card, all functions.
You did successfully pass through the device, right?
E.g., when you check Device Manager, does it actually show the card as an unknown device?

2. How do you install the driver? You can't install it via RDP. Set Display=Default so you can view the VM in the web console, and connect a monitor to the card. Download the full driver from NVIDIA, not the minimal web installer. After installing the driver, you should see a picture on the physical monitor.

Then you can switch back to Display=None on the next reboot.

If needed, DDU the driver and redo the driver installation via the web GUI (not Windows RDP).

Please disable Above 4G decoding and ReBAR in the BIOS. Add iommu=pt to the kernel cmdline. They might be optional, but they can be crucial too.
You need to get it working first before you can optimise it for your environment.
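
For what it's worth, the display setting can also be flipped from the host shell instead of the GUI; a sketch assuming VM ID 100 (the ID is an example):

Code:
qm set 100 --vga none     # Display=None: the passed-through GPU is the only output
qm set 100 --delete vga   # back to Display=Default, viewable in the web console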
 

Hi, thank you for the suggestion.

I can confirm the card is passed through successfully, with all functions. Device Manager has shown the card as an unknown device since my first attempt.

I already tried installing via a monitor once and the same issue occurred, but I'll be trying once again just to be sure.

I already attempted with Above 4G and ReBAR off; I wrote it in the update to my original post. iommu=pt is also in my GRUB config, even if it is deprecated.
 
Make sure Above 4G decoding/ReBAR are off and iommu=pt is set.

I don't think it's obsolete; it has been in my config for years.

After the change, run update-grub and update-initramfs -u -k all before rebooting.

Then try the driver cleanup and reinstallation.

Also try with these args on the VM:

args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'

It's probably obsolete too; once the driver is installed properly, you can remove it.
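
For context, that args line goes verbatim into the VM's config file on the host; a sketch assuming VM ID 100 (the ID is an example):

Code:
# /etc/pve/qemu-server/100.conf
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'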
 
Thank you for sticking with me.

The problem is still present, even after applying your suggestions :(.
I set Display=Default, and the VM detects the GPU; the GPU turns on the monitor (as usual) but with a blank screen, which should be correct because, as you said, I should only see a picture after installing the drivers. So I do that, and Windows, as usual, detects the GPU in Device Manager, with no code 43 and with its proper name (RTX 5070 Ti), except I cannot see anything on the monitor. Then, when I reboot (either with Display=None or Display=Default), the usual issue happens where the machine freezes, and the only way to make it boot is disconnecting the GPU from the VM.

Could it be that the GPU is new and still has reset problems? I should probably test with an older GPU.

This is really frustrating, but I am really grateful for the help I've received so far, so, once again, thank you.
 
So after the driver is installed, it shows code 43 with a black screen while Display=Default?

Maybe you can try a few more things:
1. Change the monitor to another DP or HDMI port.
2. In Device Manager, uninstall the driver, scan for hardware changes again, and see if it comes back as normal.
3. This is very strange, as NVIDIA usually doesn't require a ROM to be injected the way AMD does.

My guess is that if you try a previous version of the NVIDIA driver, it will work. This setting works for my 1070 and 3080 without issues.

Try reseating your 5070, or even check with an old 2000-series card.

This is more of a last resort. Try using the following when passing the card to the VM:

hostpci0: 0000:01:00,pcie=1,rombar=0,romfile=vga_5070.rom

The ROM file is downloaded from a vBIOS collection for your GPU and copied into the /usr/share/kvm directory.
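
As an alternative to downloading one, the card's own ROM can sometimes be dumped from sysfs on the host (a sketch; 0000:01:00.0 and the filename are examples, and this can fail if the host is actively using the card or the ROM isn't exposed):

Code:
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom                           # make the ROM readable
cat rom > /usr/share/kvm/vga_5070.rom  # dump it where Proxmox looks for romfiles
echo 0 > rom                           # lock it again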
 
No, code 43 isn't the issue; I'm saying the GPU doesn't throw code 43.
What I am saying is that the GPU outputs something, but it's just a blank screen even after the drivers are installed; the only way I can actually see anything is through the Proxmox web UI.
And then there is the usual problem of the VM freezing at boot.

Also, I tried adding the romfile multiple times before but it never worked, so that is not a viable solution.
 
Thanks for clarifying. So the driver actually gets installed correctly, I guess.

When it's alongside the default adapter, you get a black screen but can see the driver is installed properly. And when it's standalone, with Display=None, you also get a black screen. Right?

That's very strange.
Presumably your BIOS/firmware are the latest and the card doesn't have any issue. Does passthrough to Linux work?
 
Your way of passing the romfile doesn't look right either. Please use the exact format and bear with me.

hostpci0: 0000:01:00,pcie=1,rombar=0,romfile=Yourrombios.rom

By passing the ROM file and then using Display=Virtio-GPU during driver installation, you should see your screen light up (the Virtio-GPU is your main display, and the GPU's monitor is the second screen). If the screen doesn't light up after installing the driver, try other DP or HDMI ports.

Once the GPU works OK as the second screen, reboot and check that it still works. Once it's working properly, you can change it to primary.
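
Put together, the relevant VM config lines would look something like this (a sketch; the PCI address and ROM filename are the examples used above):

Code:
# /etc/pve/qemu-server/<vmid>.conf
vga: virtio
hostpci0: 0000:01:00,pcie=1,rombar=0,romfile=vga_5070.rom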
 
Thanks for clarifying. So the driver actually gets installed correctly, I guess.

When it's alongside the default adapter, you get a black screen but can see the driver is installed properly. And when it's standalone, with Display=None, you also get a black screen. Right?
Yes, exactly

That's very strange.
Presumably your BIOS/firmware are the latest and the card doesn't have any issue. Does passthrough to Linux work?
I can confirm my UEFI is the latest version, and the card doesn't have any issue, since I tried installing Windows bare metal on another disk and the driver installed without problems.

I have yet to try passthrough on a Linux VM, though.
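
If it helps with that test, the quickest check from inside a Linux guest (or a live ISO) is whether the card enumerates and a driver binds to it (a sketch; the grep pattern assumes an NVIDIA card):

Code:
lspci -nnk | grep -i -A3 nvidia
# "Kernel driver in use: nouveau" (or nvidia) means the guest sees and drives the card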