[SOLVED] Network port not active and not working despite "autostart" - don't understand why?

virtManager

Member
Jun 11, 2020
Hi,

I'm running Proxmox with pfSense virtualized and have been doing so for around 1.5 years now. Now I'm migrating to new hardware and began messing with the configuration. Out of the box, Proxmox works just fine with a static IP, AFAIR. What I usually do is quickly install pfSense virtualized and use that as a firewall/router. But once in a while I screw things up and lock myself out (mostly in the beginning). In any case, I would really like to be able to SSH into my Proxmox host even if the pfSense VM isn't running or is malfunctioning. Here's the configuration, mostly working:

[Screenshot: Proxmox network device list (1702514516811.png)]

However, the last of the 4 Intel NIC ports (enp1s0f3) isn't working as I expect. As you can see, I pass through the first 3 physical ports. Then I have a built-in Realtek port which I have successfully bridged to vmbr0 - all of that is good. Everything except the blue line above works as I expect. What I would expect now is that if I plug a network cable into port 4 (= enp1s0f3) and connect the other end to a laptop with Wi-Fi disabled, then I might have to statically configure the IP address and netmask, but from my laptop I should be able to SSH or log into Proxmox using the IP address 192.168.1.15. But this isn't working. In fact, even though "Autostart" is "Yes", the network device never becomes "Active" (it just says "No"), and the link lights aren't blinking either. I don't expect that, as this port should have direct access to the Proxmox host.

What am I doing wrong? Below is some additional info. It's probably just a really small thing - please help/advise, thanks!

Code:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 7c:d3:0a:1a:f4:d5 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:d3:0a:1a:f4:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::7ed3:aff:fe1a:f4d5/64 scope link
       valid_lft forever preferred_lft forever
8: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 96:3d:1a:2c:96:6a brd ff:ff:ff:ff:ff:ff
312: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:42:de:0c:2b:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0

# ip l show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 7c:d3:0a:1a:f4:d5 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7c:d3:0a:1a:f4:d5 brd ff:ff:ff:ff:ff:ff
8: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 96:3d:1a:2c:96:6a brd ff:ff:ff:ff:ff:ff
311: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:27:78:e3:e5:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0

/etc/network/interfaces:
auto lo
iface lo inet loopback


auto enp2s0
iface enp2s0 inet manual
#Switch port 6 (vmbr/VLAN 100) - trunk port


iface enp1s0f0 inet manual
#Passed through to pfSense - WAN


iface enp1s0f1 inet manual
#Passed through to pfSense - LAN (vlan trunk)


iface enp1s0f2 inet manual
#Passed through to pfSense - DHCP 192.168.2.0/24


auto enp1s0f3
iface enp1s0f3 inet static
    address 192.168.1.15/24
#BYPASS_PFSENSE_NOT_WORKING


auto vmbr0
iface vmbr0 inet static
    address 192.168.100.2/24
    gateway 192.168.1.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    post-up /sbin/ethtool -s enp2s0 wol g
#Subnet for VMs (VLAN 100)
 
Hello
I don't see an obvious error, hence the question: did you also apply the config (via the GUI or ifreload -a)?
 
I don't see an obvious error, hence the question: did you also apply the config (via the GUI or ifreload -a)?
Thanks! I did apply it, but it was good to run "ifreload -a" anyway, as that revealed the following: "warning: enp1s0f3: interface not recognized - please check interface configuration". I tried another Ethernet cable and also rebooted - same warning and still not working... hmm...

I would also expect "ip l show" to show enp1s0f3 and some related info. I was thinking I might be able to see which network driver it is using via "readlink /sys/class/net/$DEV/device/driver", but as enp1s0f3 does not even appear in "ls /sys/class/net", something more fundamental is wrong... The network card is:

Code:
# lspci | grep -i ethernet
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
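The readlink idea above can be wrapped in a small loop that lists every interface together with its driver in one go. This is just a sketch, assuming the standard sysfs layout; the directory is a parameter (defaulting to /sys/class/net) purely so it can also be pointed at a test tree:

```shell
# List each network interface and the kernel driver behind it.
# $1: sysfs net directory (defaults to /sys/class/net).
list_net_drivers() {
    netdir="${1:-/sys/class/net}"
    for dev in "$netdir"/*; do
        [ -e "$dev" ] || continue
        name=$(basename "$dev")
        if [ -L "$dev/device/driver" ]; then
            # Resolve the driver symlink; its basename is the driver name.
            drv=$(basename "$(readlink "$dev/device/driver")")
        else
            drv="(virtual)"   # lo, bridges and tap devices have no PCI driver
        fi
        printf '%s\t%s\n' "$name" "$drv"
    done
}
```

Run as `list_net_drivers` on the host; a physical port claimed by igb should show `igb`, while an absent/passed-through port simply won't be listed - which matches what "ls /sys/class/net" shows.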

So the first 3, at least, can be passed through to pfSense. I don't understand why the fourth Intel I350 port doesn't work; I think it probably worked at least in the beginning. Anyone got any good ideas? Is it something with passthrough IOMMU groups (even though this port isn't even being PCI-passed through)? Hmm... I just got a bit of extra info that might be useful:

Code:
# dmesg |grep -i enp1s0
[    4.326117] igb 0000:01:00.0 enp1s0f0: renamed from eth1
[    4.354386] igb 0000:01:00.1 enp1s0f1: renamed from eth2
[    4.374279] igb 0000:01:00.2 enp1s0f2: renamed from eth3
[    4.389699] igb 0000:01:00.3 enp1s0f3: renamed from eth0
[   30.098857] igb 0000:01:00.2: removed PHC on enp1s0f2
[   30.280016] igb 0000:01:00.0: removed PHC on enp1s0f0
[   30.471274] igb 0000:01:00.3: removed PHC on enp1s0f3
[   30.755155] igb 0000:01:00.1: removed PHC on enp1s0f1

# networkctl
WARNING: systemd-networkd is not running, output will be incomplete.

IDX LINK      TYPE     OPERATIONAL SETUP
  1 lo        loopback n/a         unmanaged
  2 enp2s0    ether    n/a         unmanaged
  7 vmbr0     bridge   n/a         unmanaged
  8 tap101i0  ether    n/a         unmanaged
 85 veth103i0 ether    n/a         unmanaged

5 links listed.

# ip route show
default via 192.168.1.1 dev vmbr0 proto kernel onlink
192.168.100.0/24 dev vmbr0 proto kernel scope link src 192.168.100.2

# ip address flush dev enp1s0f3
Device "enp1s0f3" does not exist.
# ip l set enp1s0f3 down
Cannot find device "enp1s0f3"
# ip l set enp1s0f3 up
Cannot find device "enp1s0f3"

I'm beginning to suspect it could have something to do with IOMMU groups - it seems ALL four Intel NICs have been removed from the host? Shouldn't a NIC only be removed if I tell it to be passed through?

Code:
# pveversion
pve-manager/7.4-17/513c62be (running kernel: 5.15.131-2-pve)

# dmesg|grep -i iommu
[    0.000000] Command line: initrd=\EFI\proxmox\5.15.131-2-pve\initrd.img-5.15.131-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on iommu=pt
[    0.101942] Kernel command line: initrd=\EFI\proxmox\5.15.131-2-pve\initrd.img-5.15.131-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on iommu=pt
[    0.744830] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.913706] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.914069] pci 0000:00:01.0: Adding to iommu group 0
[    0.914106] pci 0000:00:01.1: Adding to iommu group 0
[    0.914167] pci 0000:00:02.0: Adding to iommu group 1
[    0.914200] pci 0000:00:02.1: Adding to iommu group 1
[    0.914279] pci 0000:00:03.0: Adding to iommu group 2
[    0.914313] pci 0000:00:03.2: Adding to iommu group 2
[    0.914359] pci 0000:00:04.0: Adding to iommu group 3
[    0.914481] pci 0000:00:10.0: Adding to iommu group 4
[    0.914547] pci 0000:00:10.1: Adding to iommu group 4
[    0.914587] pci 0000:00:11.0: Adding to iommu group 5
[    0.914645] pci 0000:00:12.0: Adding to iommu group 6
[    0.914687] pci 0000:00:12.2: Adding to iommu group 6
[    0.914745] pci 0000:00:13.0: Adding to iommu group 7
[    0.914779] pci 0000:00:13.2: Adding to iommu group 7
[    0.914872] pci 0000:00:14.0: Adding to iommu group 8
[    0.914906] pci 0000:00:14.1: Adding to iommu group 8
[    0.914941] pci 0000:00:14.2: Adding to iommu group 8
[    0.914975] pci 0000:00:14.3: Adding to iommu group 8
[    0.915009] pci 0000:00:14.4: Adding to iommu group 9
[    0.915120] pci 0000:00:18.0: Adding to iommu group 10
[    0.915165] pci 0000:00:18.1: Adding to iommu group 10
[    0.915202] pci 0000:00:18.2: Adding to iommu group 10
[    0.915240] pci 0000:00:18.3: Adding to iommu group 10
[    0.915276] pci 0000:00:18.4: Adding to iommu group 10
[    0.915313] pci 0000:00:18.5: Adding to iommu group 10
[    0.915331] pci 0000:01:00.0: Adding to iommu group 1
[    0.915349] pci 0000:01:00.1: Adding to iommu group 1
[    0.915366] pci 0000:01:00.2: Adding to iommu group 1
[    0.915391] pci 0000:01:00.3: Adding to iommu group 1
[    0.915409] pci 0000:02:00.0: Adding to iommu group 2
[    0.916156] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.923594] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[   11.520307] AMD-Vi: AMD IOMMUv2 loaded and initialized

# dmesg|grep -i iommu  |grep -iP 'iommu group 1$'
[    0.914167] pci 0000:00:02.0: Adding to iommu group 1
[    0.914200] pci 0000:00:02.1: Adding to iommu group 1
[    0.915331] pci 0000:01:00.0: Adding to iommu group 1
[    0.915349] pci 0000:01:00.1: Adding to iommu group 1
[    0.915366] pci 0000:01:00.2: Adding to iommu group 1
[    0.915391] pci 0000:01:00.3: Adding to iommu group 1

# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done |grep -i net
IOMMU group 1 01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 1 01:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 1 01:00.2 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 1 01:00.3 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 2 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
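The dmesg output above can be condensed into one line per group with a small helper. A sketch; it assumes dmesg's usual `pci <addr>: Adding to iommu group <N>` line layout as shown in the listing:

```shell
# Condense "Adding to iommu group N" dmesg lines into one line per group.
# Reads dmesg-style text on stdin and prints "group N: dev1 dev2 ...".
summarize_iommu_groups() {
    awk '/Adding to iommu group/ {
        # Locate the "pci" token; the PCI address is the field after it.
        for (i = 1; i <= NF; i++) if ($i == "pci") addr = $(i + 1)
        sub(/:$/, "", addr)                 # strip the trailing colon
        devs[$NF] = devs[$NF] " " addr      # last field is the group number
    }
    END { for (g in devs) printf "group %s:%s\n", g, devs[g] }'
}
```

Usage would be `dmesg | summarize_iommu_groups`, which makes it immediately obvious that all four 01:00.x functions share group 1.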

Hmm... Does it have something to do with the fact that if I pass through *ANY* device in IOMMU group 1, then all of them are removed from the Proxmox host? Even worse: since I didn't pass through the fourth NIC, it's not available in the pfSense VM either, where I ran this command:

Code:
# pciconf -lv|grep -iE 'class.*=.*network' -A1 -B3
igb0@pci0:1:0:0:    class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1521 subvendor=0x8086 subdevice=0x0001
    vendor     = 'Intel Corporation'
    device     = 'I350 Gigabit Network Connection'
    class      = network
    subclass   = ethernet
igb1@pci0:2:0:0:    class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1521 subvendor=0x8086 subdevice=0x0001
    vendor     = 'Intel Corporation'
    device     = 'I350 Gigabit Network Connection'
    class      = network
    subclass   = ethernet
igb2@pci0:3:0:0:    class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1521 subvendor=0x8086 subdevice=0x0001
    vendor     = 'Intel Corporation'
    device     = 'I350 Gigabit Network Connection'
    class      = network
    subclass   = ethernet
--
virtio_pci3@pci0:6:18:0:    class=0x020000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1000 subvendor=0x1af4 subdevice=0x0001
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet

I'm guessing that if I PCI-pass-through *ANY* of the Intel I350 NICs, then they'll all be unavailable in the Proxmox host - but only the ones I've actually passed through will be available in the VM. So it makes no sense not to pass through all of them, since I can't use the remaining NICs on the host anyway once any of them has been passed through to the VM. That is my current hypothesis - I recall that my hardware perhaps isn't the best for PCI passthrough. Can anyone confirm whether this idea is correct?
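For what it's worth, the group membership can also be read straight from sysfs instead of grepping dmesg (which can rotate out of the ring buffer). A sketch; the sysfs root is a parameter only so the helper can be exercised against a test tree:

```shell
# Print the IOMMU group number of a PCI device by resolving its
# sysfs iommu_group symlink.
# $1: PCI address, e.g. 0000:01:00.3; $2: optional sysfs root (default /sys).
iommu_group_of() {
    link="${2:-/sys}/bus/pci/devices/$1/iommu_group"
    # The symlink points at /sys/kernel/iommu_groups/<N>; its basename is N.
    [ -L "$link" ] && basename "$(readlink "$link")"
}
```

E.g. `iommu_group_of 0000:01:00.3` should print `1` on this box, confirming the fourth port sits in the same group as the passed-through ones.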

I appreciate any ideas/suggestions, thanks!
 
I only skimmed over the thread, but yes, you can only PCIe-passthrough a whole IOMMU-group and since all four NICs are in the same IOMMU-group...
 
Hi @hd-- and @Neobin - ok, I suppose this explains it all then. Thank you very much, the mystery has been solved. I can see there's no reason not to pass through everything in a group, as those devices become unusable to the host once anything in the IOMMU group has been passed through.

Thanks a lot, I hope this can help someone else in the future!
 
