Node inaccessible after attempting PCI passthrough (No Route to Host)

r_davis06

New Member
Jul 10, 2024
NOT a production environment

Cluster Information:

2 nodes:
  • pve @ x.x.x.171
  • pve-ii @ x.x.x.172
    • Rosewill RSV-Z3200U | Ryzen 5 5600G | 32 GB Non-ECC Mem | Proxmox VE 8.1.4
1 PBS:
  • pve-backups @ x.x.x.173
Issue:
On pve-ii, VM 101 (truenas), I previously had two 4 TB HDDs mapped directly to the VM via SCSI. This worked fine, but they were used desktop HDDs and failed. So I wised up and bought two new 4 TB HDDs (Seagate IronWolf ST4000VN006). After detaching the original drives via the web GUI, I went to map the new drives via SATA controller PCI passthrough. However, after configuring the controller for passthrough, the node lost its network connection.
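For context, the old disks were mapped as raw block devices, something like this (the by-id paths here are placeholders, not the actual serials):

    # direct disk mapping used previously (paths are illustrative)
    qm set 101 -scsi1 /dev/disk/by-id/ata-OLDDISK-SERIAL1
    qm set 101 -scsi2 /dev/disk/by-id/ata-OLDDISK-SERIAL2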

I connected to the server with a monitor, mouse, & keyboard, and removed the passthrough by editing /etc/pve/qemu-server/101.conf
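The entry I removed looked something like this (the PCI address is a placeholder, not necessarily the one on my board):

    # in /etc/pve/qemu-server/101.conf (address is illustrative)
    hostpci0: 0000:02:00.0

    # the same removal can also be done from the CLI:
    qm set 101 --delete hostpci0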

That did not fix my problem, so I removed the VM with qm destroy 101

At this point I rebooted, and the node is still offline. But there is a slight difference: before I deleted the VM, running pvesm status would fail to connect to my PBS with the error Network is unreachable; now it says No route to host
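For anyone following along, these are the sorts of checks I ran to see where traffic dies (treating x.x.x.1 as the gateway is an assumption based on my subnet):

    ip route get x.x.x.173    # which interface/gateway would be used to reach PBS
    ping -c 3 x.x.x.1         # assuming x.x.x.1 is the gateway; is it reachable at all?
    pvesm status              # storage check that produced the errors above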

I verified that the /etc/network/interfaces file is configured properly, and ran ip commands to check the address and gateway; those are also correct. ip link show reports the state of the lo and vmbr0 interfaces as UNKNOWN. dmesg | grep -i eth shows that vmbr0 is disabled. Trying to manually enable it does not work.
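For reference, my /etc/network/interfaces looks roughly like this (the physical NIC name enp4s0 is illustrative; yours will differ):

    auto lo
    iface lo inet loopback

    iface enp4s0 inet manual

    auto vmbr0
    iface vmbr0 inet static
            address x.x.x.172/24
            gateway x.x.x.1
            bridge-ports enp4s0
            bridge-stp off
            bridge-fd 0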

I've been troubleshooting and referring to online forums all day. I have backups of my data, but I don't want to spend hours reconfiguring the system after a clean install. I feel like there's a simple solution, but I'm having a hard time finding it. Any help is appreciated; if you need more details, I will provide them on request. TYIA
 
After detaching the original drives via the web GUI, I went to map the new drives via SATA controller PCI passthrough. However, after configuring the controller for passthrough, the node lost its network connection.
PCI passthrough doesn't pass through single devices; it passes through whole IOMMU groups. Make sure your disk controller is the only device in its IOMMU group. Usually, when trying to pass through the onboard disk controller, you also pass through the NIC, USB controller, and sound card, as most of the time all those onboard devices are attached to the mainboard's chipset and share the same IOMMU group.
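You can verify with something like this common one-liner (works on any system that exposes IOMMU info in sysfs):

    #!/bin/bash
    # list every IOMMU group and the PCI devices it contains
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done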

But that should be resolved after rebooting the node and not starting that VM again.

Did you check whether your NIC name still matches the config file? These names can change when adding or removing any PCIe device.
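A quick way to check (assuming the standard Debian/Proxmox setup with ifupdown2):

    ip -br link                                    # names the kernel assigned
    dmesg | grep -i renamed                        # e.g. "enp5s0: renamed from eth0"
    grep -i bridge-ports /etc/network/interfaces   # name the bridge expects

If they differ, fix bridge-ports in /etc/network/interfaces and run ifreload -a.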
 
Did you check whether your NIC name still matches the config file? These names can change when adding or removing any PCIe device.

When I run ip commands, such as link show, I can only see the loopback lo and the virtual interfaces vmbr0 and veth201i0@if2.

But I can confirm the devices are being detected by the system, as they appear when running lspci.
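If the NIC shows in lspci but no network interface exists, one thing worth ruling out (just a guess) is that it is still bound to vfio-pci from the passthrough attempt, in which case the kernel never creates a netdev for it:

    lspci -nnk | grep -iA3 ethernet   # "Kernel driver in use:" should be the NIC driver, not vfio-pci
    ls /sys/class/net/                # interfaces the kernel actually created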
 
Two more days of troubleshooting with no luck.

Reinstalling the host; good thing backups exist. Moral of the story: beware when using PCI passthrough.

Really wish this one could've gotten sorted out; I'm seeing a lot of folks having similar issues.
 
