PCI passthrough issue... losing network connection.

clavismil

Hello, I'm a Proxmox newbie and this is my first time posting here.

I am trying to use PCI passthrough to pass a NIC and a GPU to some VMs. I configured everything following this guide:

https://pve.proxmox.com/wiki/PCI_Passthrough#PCI_Express_Passthrough

Everything had been working well for two weeks using only the NIC. Today I wanted to try passing through the GPU (Quadro P400) as well, and it didn't work; worse, after installing the GPU I ran into a networking issue which is driving me crazy.

The only way to get a proper connection on the Proxmox host is to remove the NIC (and leave the GPU out as well). I don't understand what happened; I can't even get a ping back from the gateway. Since I don't have a Mini DisplayPort cable at hand (I will buy one tomorrow) and I have no network connection, I can't see what is happening when the GPU is installed.

If I remove the NIC I get a connection and everything works. I'm not sure whether I did something wrong in the configuration. How can I troubleshoot the issue while the GPU is installed?

This is my /etc/network/interfaces file:

Code:
root@proxmox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.104/24
        gateway 192.168.0.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

The following output looks OK to me (this is without the NIC and GPU installed), but is it?

Code:
root@proxmox:~# cat /proc/cmdline; for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
BOOT_IMAGE=/boot/vmlinuz-5.15.107-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core 4-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S] [8086:3e1f] (rev 08)
IOMMU group 1 00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e91]
IOMMU group 2 00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
IOMMU group 3 00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
IOMMU group 3 00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
IOMMU group 5 00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
IOMMU group 6 00:1c.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #8 [8086:a33f] (rev f0)
IOMMU group 7 00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a303] (rev 10)
IOMMU group 7 00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
IOMMU group 7 00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
IOMMU group 7 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
IOMMU group 8 01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 16)

root@proxmox:~# dmesg | grep -e DMAR -e IOMMU
[    0.008578] ACPI: DMAR 0x00000000895B9B48 0000A8 (v01 INTEL  EDK2     00000002      01000013)
[    0.008599] ACPI: Reserving DMAR table memory at [mem 0x895b9b48-0x895b9bef]
[    0.049298] DMAR: IOMMU enabled
[    0.130979] DMAR: Host address width 39
[    0.130980] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.130983] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.130985] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.130988] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.130989] DMAR: RMRR base: 0x00000089450000 end: 0x0000008946ffff
[    0.130990] DMAR: RMRR base: 0x0000008b000000 end: 0x0000008f7fffff
[    0.130992] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.130993] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.130994] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.132668] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    3.627512] DMAR: No ATSR found
[    3.627512] DMAR: No SATC found
[    3.627513] DMAR: IOMMU feature fl1gp_support inconsistent
[    3.627514] DMAR: IOMMU feature pgsel_inv inconsistent
[    3.627515] DMAR: IOMMU feature nwfs inconsistent
[    3.627516] DMAR: IOMMU feature pasid inconsistent
[    3.627516] DMAR: IOMMU feature eafs inconsistent
[    3.627517] DMAR: IOMMU feature prs inconsistent
[    3.627517] DMAR: IOMMU feature nest inconsistent
[    3.627518] DMAR: IOMMU feature mts inconsistent
[    3.627518] DMAR: IOMMU feature sc_support inconsistent
[    3.627519] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    3.627520] DMAR: dmar0: Using Queued invalidation
[    3.627522] DMAR: dmar1: Using Queued invalidation
[    3.628863] DMAR: Intel(R) Virtualization Technology for Directed I/O

This is the output of the 'ip address' command when the NIC is not present. Here is how it looks:

Code:
root@proxmox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether e0:d5:5e:8a:4f:15 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:d5:5e:8a:4f:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::e2d5:5eff:fe8a:4f15/64 scope link
       valid_lft forever preferred_lft forever
4: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:7a:ad:2f:e1:54 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr103i0 state UNKNOWN group default qlen 1000
    link/ether da:a7:7b:7a:f2:16 brd ff:ff:ff:ff:ff:ff
6: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:11:05:fe:15:4b brd ff:ff:ff:ff:ff:ff
7: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 4a:6f:ef:a9:01:80 brd ff:ff:ff:ff:ff:ff
8: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 4a:8c:93:8e:0a:1d brd ff:ff:ff:ff:ff:ff
9: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
    link/ether 9a:ce:e8:8a:41:e9 brd ff:ff:ff:ff:ff:ff
10: fwbr104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:90:c2:d3:64:90 brd ff:ff:ff:ff:ff:ff
11: fwpr104p0@fwln104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 96:f0:47:f1:ee:12 brd ff:ff:ff:ff:ff:ff
12: fwln104i0@fwpr104p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
    link/ether d2:4d:8f:e1:ad:17 brd ff:ff:ff:ff:ff:ff
13: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:10:6d:69:f2:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
14: veth107i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:9a:b6:15:4e:98 brd ff:ff:ff:ff:ff:ff link-netnsid 2


Thanks for reading, any help is really appreciated.
 
leesteken

Please be aware that adding or removing PCI(e) devices can change the PCI ID of other devices. Since network device names are based on the PCI ID, you might have to correct the network configuration accordingly after adding the GPU. This happens on the Proxmox host, but it might also happen to VMs with passthrough. Maybe this is what's tripping you up?
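
For example, from the local console you could check what the NIC is called after installing the GPU and adjust the bridge accordingly. A rough sketch (the actual interface name on your system may differ):

Code:
# List the interfaces the kernel currently sees (run from the local console)
ip -br link
lspci | grep -i ethernet

# If the onboard NIC is now called e.g. enp2s0 instead of enp1s0,
# update both places it appears in /etc/network/interfaces:
#   iface enp2s0 inet manual
#   bridge-ports enp2s0
nano /etc/network/interfaces

# Apply the change without rebooting (ifupdown2 is used on Proxmox VE)
ifreload -a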
 
Thanks leesteken, it's solved now. I think that's exactly what happened. Now that I have an adapter to connect a monitor to the GPU, I can see that the name of the NIC changed from enp1s0 to enp2s0. After correcting that in the interfaces file and running ifreload -a, the connection works.
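
In case it helps anyone else, this is roughly what my corrected /etc/network/interfaces looks like now (same bridge settings as before, only the interface name changed):

Code:
auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.104/24
        gateway 192.168.0.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0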
 
