Issue with getting NICs to go active

xScoutx

New Member
Dec 30, 2022
First time with Proxmox; planning on OPNsense as my first VM, to replace my router.

I have an OptiPlex 7010 SFF with an onboard gigabit NIC and two separate 2.5GbE Realtek 8125 NICs.

I have Proxmox installed and running OK (7.3-3). I also think I have OPNsense ready to go, but I need to get the NICs assigned.

I think the host can see the NICs, as this shows the onboard port and the two PCIe cards:

Code:
root@proxmox:~# lspci -v
...
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)
        DeviceName:  Onboard LAN
        Subsystem: Dell 82579LM Gigabit Network Connection (Lewisville)
        Flags: bus master, fast devsel, latency 0, IRQ 27, IOMMU group 5
        Memory at f7e00000 (32-bit, non-prefetchable) [size=128K]
        Memory at f7e39000 (32-bit, non-prefetchable) [size=4K]
        I/O ports at f080 [size=32]
        Capabilities: [c8] Power Management version 2
        Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [e0] PCI Advanced Features
        Kernel driver in use: e1000e
        Kernel modules: e1000e


...
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
        Subsystem: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller
        Flags: fast devsel, IRQ 16, IOMMU group 1
        I/O ports at e000 [size=256]
        Memory at f7d10000 (64-bit, non-prefetchable) [size=64K]
        Memory at f7d20000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at f7d00000 [virtual] [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable- Count=32 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Virtual Channel
        Capabilities: [168] Device Serial Number 01-00-00-00-68-4c-e0-00
        Capabilities: [178] Transaction Processing Hints
        Capabilities: [204] Latency Tolerance Reporting
        Capabilities: [20c] L1 PM Substates
        Capabilities: [21c] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
        Kernel driver in use: vfio-pci
        Kernel modules: r8169


03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
        Subsystem: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller
        Flags: fast devsel, IRQ 16, IOMMU group 12
        I/O ports at d000 [size=256]
        Memory at f7c10000 (64-bit, non-prefetchable) [size=64K]
        Memory at f7c20000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at f7c00000 [virtual] [disabled] [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable- Count=32 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Virtual Channel
        Capabilities: [168] Device Serial Number 01-00-00-00-68-4c-e0-00
        Capabilities: [178] Transaction Processing Hints
        Capabilities: [204] Latency Tolerance Reporting
        Capabilities: [20c] L1 PM Substates
        Capabilities: [21c] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
        Kernel driver in use: vfio-pci
        Kernel modules: r8169

I've been searching and Googling this for a few days. I know there was a previous issue with Realtek drivers, but I think that's resolved in the latest versions. I believe I have IOMMU working as well, and the two NICs are not in the same group (I think I've highlighted the right ones below).

Code:
root@proxmox:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/7/devices/0000:00:1b.0
/sys/kernel/iommu_groups/5/devices/0000:00:19.0
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.3
/sys/kernel/iommu_groups/1/devices/0000:00:01.0

/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/6/devices/0000:00:1a.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.3

/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.4
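
For what it's worth, the way I've been double-checking that IOMMU is actually enabled is roughly this (picked up from the PCI passthrough wiki, so treat it as my own sanity check rather than anything definitive):

Code:
# double-check VT-d / IOMMU is actually on
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
# on this Intel box I'd expect to see something like "DMAR: IOMMU enabled"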

I am able to reach the host, obviously, over the onboard gigabit NIC (which is the overall plan for administration long term).
I think I created the network devices and Linux bridges correctly, but something is hanging up the Realtek NICs. I do not get a link light on either of them at the back of the machine or at the switch when cabled up, which I'm sure means something, but I'm not sure what.
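
In case it helps, these are the extra checks I've been running to see whether the host driver has actually claimed the Realtek cards (just my own guesses at useful commands, nothing official):

Code:
# which kernel driver currently owns each Realtek port
# (the lspci -v output above shows "Kernel driver in use: vfio-pci" rather than
#  r8169, which I assume means the host isn't driving them at the moment)
lspci -nnk -s 01:00.0
lspci -nnk -s 03:00.0

# do enp1s0 / enp3s0 exist at all as far as the kernel is concerned?
ls /sys/class/net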


Thank you for any help.

Other info that may help:
Code:
root@proxmox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether b8:ca:3a:a1:8f:21 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:a1:8f:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:fea1:8f21/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f4:a4:54:80:93:93 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f6a4:54ff:fe80:9393/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f4:a4:54:80:92:83 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f6a4:54ff:fe80:9283/64 scope link
       valid_lft forever preferred_lft forever
16: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i0 state UNKNOWN group default qlen 1000
    link/ether 2e:ef:8d:df:9c:21 brd ff:ff:ff:ff:ff:ff
17: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:c4:aa:a5:46:db brd ff:ff:ff:ff:ff:ff
18: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether e2:17:e9:b7:94:c3 brd ff:ff:ff:ff:ff:ff
19: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether 06:e9:e7:f1:1b:81 brd ff:ff:ff:ff:ff:ff
20: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i1 state UNKNOWN group default qlen 1000
    link/ether 42:13:ca:25:ef:c2 brd ff:ff:ff:ff:ff:ff
21: fwbr100i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:ae:82:b1:2b:0c brd ff:ff:ff:ff:ff:ff
22: fwpr100p1@fwln100i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether c2:1d:c6:15:e1:43 brd ff:ff:ff:ff:ff:ff
23: fwln100i1@fwpr100p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i1 state UP group default qlen 1000
    link/ether 62:33:94:53:bf:11 brd ff:ff:ff:ff:ff:ff

Code:
root@proxmox:~# lspci | grep 'Ethernet'
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)

Code:
root@proxmox:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!


auto lo
iface lo inet loopback


iface eno1 inet manual


iface enp1s0 inet manual


iface enp3s0 inet manual


auto vmbr0
iface vmbr0 inet static
        address 192.168.1.3/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
#Onboard INTEL 1g NIC


auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
#WAN Realtek 2.5gbe NIC


auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
#LAN Realtek 2.5gbe NIC
 
Neither enp1s0 nor enp3s0 shows up in ip a, so they are not available to use.

Just in case: did you attempt to do IOMMU passthrough of the PCI cards to OPNsense? You can't do that and also use the same NIC as the port for a Linux bridge. If not, maybe consider trying the newest 6.1 kernel (there's a pinned thread at the top of the Installation forum) and see if the newer drivers work for you.
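
If you do try the newer kernel, I believe the opt-in package is pulled in roughly like this (please double-check the exact package name against that pinned thread, I'm going from memory):

Code:
apt update
apt install pve-kernel-6.1
reboot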
 
I was thinking I needed to do IOMMU in order to use the NICs in OPNsense, and I was thinking the Linux bridge was the passthrough. I'll read up on the syntax, go look at that thread, and come back.
 
So you can do either:

1. IOMMU passthrough: select the PCI device directly in the OPNsense VM's network hardware setup. Proxmox then has no way to see or use a passed-through NIC.
2. No IOMMU: use the Linux bridges directly as the OPNsense WAN and LAN. Both Proxmox and OPNsense have access to the NICs.
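
Very roughly, from the CLI side the two options would look something like this (VM id 100, the PCI address and the bridge name are just taken from your output above, so adjust as needed):

Code:
# option 1: full PCI passthrough of one Realtek port to the OPNsense VM
#           (the host loses the device entirely)
qm set 100 -hostpci0 0000:01:00.0

# option 2: no passthrough; give the VM a virtio NIC attached to the bridge
#           (host and guest both keep access to the underlying port)
qm set 100 -net0 virtio,bridge=vmbr1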
 
Well... a huge thank you! Updating to the newest kernel fixed it after a reboot. I still need to read up on the differences/benefits of passthrough vs. bridge, but I appreciate you getting me here.
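
For anyone else who lands here, this is roughly what I ran to confirm the new kernel had taken and the ports had appeared (from memory, so treat it as a sketch):

Code:
uname -r                          # now reports a 6.1.x pve kernel
ip a | grep -E 'enp1s0|enp3s0'    # both Realtek interfaces finally show up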

 
