GPUs not getting detected on Proxmox

Azima

New Member
Jul 9, 2025
Hello Guys,

Server: PowerEdge r630

I have two RTX 5090 GPUs. I got a riser, attached the first one, and it was working perfectly without any issues. Then I connected the second one through the butterfly PCIe riser.


The second one was not detected, so I removed it and checked all the hardware parts; everything looked fine. Then I reinstalled the first one, after checking it as well, and now neither of the two GPUs is detected. I tried different slots but still have the same issue. (I also searched a lot and tried many different solutions; none worked for me.)

I used these guides at the start and everything was working. (Note: I reset the BIOS settings, and Proxmox still doesn't detect anything anymore.)

https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE
https://bobcares.com/blog/proxmox-gpu-passthrough/
https://gist.github.com/KasperSkytte/6a2d4e8c91b7117314bceec84c30016b

Code:
root@proxmox:~# lspci | grep nv
root@proxmox:~# lspci -nn | grep N
01:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
01:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
02:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
02:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
root@proxmox:~# lspci -nn | grep NV
root@proxmox:~# lspci -nn | grep Nv
root@proxmox:~# lspci -nn | grep VGA
0a:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. G200eR2 [102b:0534] (rev 01)
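One thing worth ruling out in that transcript: `grep` is case-sensitive, and `lspci` prints the vendor as "NVIDIA Corporation" in upper case, so `grep nv` and `grep Nv` would miss the cards even if they were enumerated. A case-insensitive search is safer; the sample line below is illustrative, not output from this machine:

```shell
# Case-insensitive filter -- matches "NVIDIA", "nvidia", etc.:
#   lspci -nn | grep -i nvidia
# Demonstrated on a sample lspci-style line (made-up address, for illustration):
sample='41:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:2b85]'
echo "$sample" | grep -i nvidia
```

Since `lspci -nn | grep VGA` above only shows the onboard Matrox controller, the cards most likely aren't enumerating at all, but this at least rules out a filtering mistake.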

Thanks.
 
Hi

I'm not an expert on this, but depending on your exact hardware config you may not have enough PCIe lanes for 2 GPUs.
That depends on the number of NICs / HBAs / RAID controllers, or whether you use a direct-attached NVMe backplane ...
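If a card does enumerate, one way to spot a lane shortage is to compare the slot's capability against the negotiated link: `LnkSta` showing fewer lanes than `LnkCap` (e.g. x8 instead of x16) hints at a riser or lane problem. The device address and the sample values below are placeholders:

```shell
# Compare link capability vs. negotiated link state
# (replace 41:00.0 with the GPU's address from lspci):
#   lspci -vv -s 41:00.0 | grep -E 'LnkCap:|LnkSta:'
# Illustrative output, parsed the same way (values are made up):
sample='LnkCap: Port #0, Speed 16GT/s, Width x16
LnkSta: Speed 16GT/s, Width x8 (downgraded)'
printf '%s\n' "$sample" | grep -E 'LnkCap:|LnkSta:'
```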

Another problem could be the power supply: an RTX 5090 can draw up to 575 W each, so even dual 1100 W power supplies could be a problem.

Dell R630 spec sheet
 
I get your point, buddy, but both GPUs aren't working now; neither of them is detected.
 
LnxBil means that if you use a dual-socket server with only one CPU installed, not all PCIe slots will work, because the slots are attached directly to the CPUs, not the chipset.
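A quick way to see which CPU socket a PCI device hangs off of, assuming the standard Linux sysfs layout: each enumerated device exposes a `numa_node` file (it reads `-1` when the mapping is unknown). Slots wired to an unpopulated socket typically won't show any device at all:

```shell
# Print the NUMA node (i.e. CPU socket) for every enumerated PCI device.
for d in /sys/bus/pci/devices/*; do
  [ -r "$d/numa_node" ] || continue
  printf '%s -> NUMA node %s\n' "${d##*/}" "$(cat "$d/numa_node")"
done
```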
 