Intel NIC Disappeared

deltamikealpha

New Member
May 19, 2023
Evening all!

When I built my PVE server, the system detected a 4-port Intel NIC installed in the system and gave it the device names enp1s0f0 through enp1s0f3.

The config for these still exists, but they're all marked as "Active: No". I don't know exactly when they disappeared, as I had no use for them when I first built the system; I've only now come to use them and run into this issue. There's not an awful lot going on - IOMMU is in use to pass through an HBA to TrueNAS.
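(The entries in question are plain ifupdown stanzas in /etc/network/interfaces - roughly like this, typed from memory rather than copied:)

Code:
auto enp1s0f0
iface enp1s0f0 inet manual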

Running ifup enp1s0f0, I get:
warning: enp1s0f0: interface not recognized - please check interface configuration
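
(Worth noting: ifup can only configure interfaces the kernel has actually created, so a quick sanity check is whether the name exists at all:)

Code:
root@proxmox:~# ls /sys/class/net
root@proxmox:~# ip link show enp1s0f0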

dmesg shows the device:
root@proxmox:~# dmesg|grep Intel
[ 0.000000] Intel GenuineIntel
[ 0.394422] smpboot: CPU0: Intel(R) Xeon(R) E-2136 CPU @ 3.30GHz (family: 0x6, model: 0x9e, stepping: 0xa)
[ 0.394515] Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
[ 0.620900] DMAR: Intel(R) Virtualization Technology for Directed I/O
[ 1.430385] igb: Intel(R) Gigabit Ethernet Network Driver
[ 1.430387] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 1.446708] idma64 idma64.0: Found Intel integrated DMA 64-bit
[ 1.555011] igb 0000:06:00.0: Intel(R) Gigabit Ethernet Network Connection
[ 1.610961] igb 0000:06:00.1: Intel(R) Gigabit Ethernet Network Connection
[ 1.669180] igb 0000:06:00.2: Intel(R) Gigabit Ethernet Network Connection
[ 1.725868] igb 0000:06:00.3: Intel(R) Gigabit Ethernet Network Connection


No block of four consecutive adapters shows up in ip addr or ip link output.

In lspci, the card is there and correct, and lspci -v shows the kernel driver in use is vfio-pci, with igb listed as the kernel module - which I'd expect.
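(For anyone checking the same thing, the binding can be confirmed per function straight from sysfs - PCI addresses taken from the lspci output below:)

Code:
root@proxmox:~# lspci -nnk -s 06:00.0
root@proxmox:~# readlink /sys/bus/pci/devices/0000:06:00.0/driver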

A combination of other posts around the same topic led me to try all of the above, but I can't really work out what to do next.

Any ideas, please?
 
It would help to have the output of the commands you ran. Interface names are not stable - they depend on the installed PCI cards, the kernel version, etc.
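A quick way to see how a given name was derived (example interface; adjust to your own) is to ask udev for the predictable-name properties:

Code:
root@proxmox:~# udevadm info /sys/class/net/eno1 | grep ID_NET_NAME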


Sure thing:

All VMs/CTs are currently powered off.

Code:
root@proxmox:~# dmesg|grep Intel
[ 0.000000] Intel GenuineIntel
[ 0.394422] smpboot: CPU0: Intel(R) Xeon(R) E-2136 CPU @ 3.30GHz (family: 0x6, model: 0x9e, stepping: 0xa)
[ 0.394515] Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
[ 0.620900] DMAR: Intel(R) Virtualization Technology for Directed I/O
[ 1.430385] igb: Intel(R) Gigabit Ethernet Network Driver
[ 1.430387] igb: Copyright (c) 2007-2014 Intel Corporation.
[ 1.446708] idma64 idma64.0: Found Intel integrated DMA 64-bit
[ 1.555011] igb 0000:06:00.0: Intel(R) Gigabit Ethernet Network Connection
[ 1.610961] igb 0000:06:00.1: Intel(R) Gigabit Ethernet Network Connection
[ 1.669180] igb 0000:06:00.2: Intel(R) Gigabit Ethernet Network Connection
[ 1.725868] igb 0000:06:00.3: Intel(R) Gigabit Ethernet Network Connection

Code:
root@proxmox:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7a brd ff:ff:ff:ff:ff:ff
    altname enp5s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7b brd ff:ff:ff:ff:ff:ff
    altname enp5s0f1
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.8.2/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2eea:7fff:fed8:dc7a/64 scope link
       valid_lft forever preferred_lft forever
9: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 32:ec:91:d4:45:84 brd ff:ff:ff:ff:ff:ff
10: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 36:b8:87:0b:d2:83 brd ff:ff:ff:ff:ff:ff
21: veth910i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:8c:88:dc:03:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
29: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 72:68:97:19:19:7a brd ff:ff:ff:ff:ff:ff

Code:
root@proxmox:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7a brd ff:ff:ff:ff:ff:ff
    altname enp5s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7b brd ff:ff:ff:ff:ff:ff
    altname enp5s0f1
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2c:ea:7f:d8:dc:7a brd ff:ff:ff:ff:ff:ff
9: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 32:ec:91:d4:45:84 brd ff:ff:ff:ff:ff:ff
10: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:b8:87:0b:d2:83 brd ff:ff:ff:ff:ff:ff
21: veth910i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:8c:88:dc:03:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
29: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 72:68:97:19:19:7a brd ff:ff:ff:ff:ff:ff

Code:
trimmed lspci -v output - can post the full version if useful

06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I350-T4
        Flags: fast devsel, IRQ 16, IOMMU group 9
        Memory at 92300000 (32-bit, non-prefetchable) [size=1M]
        Memory at 92780000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at 92700000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-a1-be-a4
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1c0] Latency Tolerance Reporting
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: vfio-pci
        Kernel modules: igb

06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I350-T4
        Flags: fast devsel, IRQ 17, IOMMU group 9
        Memory at 92400000 (32-bit, non-prefetchable) [size=1M]
        Memory at 927c4000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-a1-be-a4
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: vfio-pci
        Kernel modules: igb

06:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I350-T4
        Flags: fast devsel, IRQ 18, IOMMU group 9
        Memory at 92500000 (32-bit, non-prefetchable) [size=1M]
        Memory at 92808000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-a1-be-a4
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: vfio-pci
        Kernel modules: igb

06:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I350-T4
        Flags: fast devsel, IRQ 19, IOMMU group 9
        Memory at 92600000 (32-bit, non-prefetchable) [size=1M]
        Memory at 9284c000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-a1-be-a4
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: vfio-pci
        Kernel modules: igb
 
Looks like you did a PCIe passthrough of the 4-port network controller, or of some other device in IOMMU group 9. Either by starting a VM with passthrough of one (or more) of the devices in IOMMU group 9, which makes every device in that group inaccessible to the Proxmox host until a reboot; or by early-binding the four functions of the device to vfio-pci in a file in /etc/modprobe.d/, which makes the device inaccessible to the Proxmox host until you undo that change.
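To check for the early-bind case (the ID below is the usual one for the I350, but verify against lspci -nn on your own system):

Code:
root@proxmox:~# grep -r vfio-pci /etc/modprobe.d/
# a typical early-bind entry looks like:
#   options vfio-pci ids=8086:1521
# after removing it, rebuild the initramfs and reboot:
root@proxmox:~# update-initramfs -u -k all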
 
I'd shut down all the VMs and stopped autostart - but it looks as though everything gets added to an IOMMU group based on its physical layout? I'd moved slots earlier in this process - the card had previously been in IOMMU group 1 (where the other HBA now is) - and it took its place in IOMMU group 9 as the only device there.
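(For reference, group membership can be listed straight from sysfs, which is how the slot move shows up:)

Code:
root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | sort
root@proxmox:~# readlink /sys/bus/pci/devices/0000:06:00.0/iommu_group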

After some fiddling I completely removed all of the enp1s0f* entries from /etc/network/interfaces and rebooted - it created enp6s0f0 through enp6s0f3, all active and working.
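(For anyone who'd rather avoid the reboot: assuming vfio-pci still holds the functions, they can be handed back to igb via sysfs, one function at a time:)

Code:
root@proxmox:~# echo 0000:06:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
root@proxmox:~# echo 0000:06:00.0 > /sys/bus/pci/drivers/igb/bind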

Genuinely, I don't know what part of it actually solved the issue - I'd rebooted a few times before without any change.

It seems odd, when the interface names are almost arbitrary, that removing the old names would force it to create new ones?

Thanks for your help, folks - still interested in replies because I'd like to understand what happened, but my immediate issue is fixed.
 
