Good afternoon folks!
I have been using Proxmox for a few years now and have been incredibly happy with the results. However, a lack of planning on my part for a catastrophic event has me starting over from scratch.
I had originally used IOMMU with PCI device passthrough to dedicate an entire 4-port NIC to one of my VMs: pfSense.
This time around I cannot get IOMMU working readily, and given what a nightmare it was to set up the first time, I would rather avoid the IOMMU route entirely this time.
At first I thought I could no longer use that 4-port NIC at all, since the card would simply disappear from both the "ip address" and the "lspci" listings. I figured the card was defective and bought a 2-port 2.5Gb NIC to replace it. However, the new NIC now suffers the same issue, to the point where I suspect that either the card slot is defective or something else is going on.
Code:
ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 00:21:9b:99:a8:32 brd ff:ff:ff:ff:ff:ff
altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:21:9b:99:a8:34 brd ff:ff:ff:ff:ff:ff
altname enp1s0f1
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN group default qlen 1000
link/ether 00:21:9b:99:a8:36 brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:21:9b:99:a8:38 brd ff:ff:ff:ff:ff:ff
altname enp2s0f1
6: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 98:b7:85:01:95:06 brd ff:ff:ff:ff:ff:ff
7: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 98:b7:85:01:95:07 brd ff:ff:ff:ff:ff:ff
8: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:21:9b:99:a8:32 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.100/24 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::221:9bff:fe99:a832/64 scope link
valid_lft forever preferred_lft forever
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:21:9b:99:a8:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::221:9bff:fe99:a836/64 scope link
valid_lft forever preferred_lft forever
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6e:1c:69:65:29:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 scope global vmbr2
valid_lft forever preferred_lft forever
inet6 fe80::ec24:d2ff:fe42:8d36/64 scope link
valid_lft forever preferred_lft forever
11: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:1c:59:40:9a:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.42/24 scope global vmbr3
valid_lft forever preferred_lft forever
inet6 fe80::5cf9:33ff:fee5:776f/64 scope link
valid_lft forever preferred_lft forever
12: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr102i0 state UNKNOWN group default qlen 1000
link/ether d6:02:c5:78:d6:63 brd ff:ff:ff:ff:ff:ff
13: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fa:8a:2e:aa:e1:92 brd ff:ff:ff:ff:ff:ff
As you can see, interfaces 2-5 are my built-in 4-port NIC, 6 & 7 are the new 2.5Gb NIC ports, and 8-11 are the VMBR bridges.
I can confirm that 6 & 7 light up with LINK when I plug an ethernet cable in, but I have noticed that both interfaces disappear entirely a few hours after I start/stop the pfSense VM. Nothing on the new ports works when I bridge 6 and 7 to VMBR2 & VMBR3.
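Since the disappearance seems tied to starting the pfSense VM, one thing I still need to rule out is whether the restored VM config carries a leftover PCI passthrough entry. A quick check I'm planning to run (assuming the pfSense VM is ID 102, going by the tap102i0 interface above; substitute your actual VMID):

```shell
# Look for a leftover hostpci line in the VM config. If one is present,
# Proxmox binds the card to vfio-pci when the VM starts, and the
# host-side interfaces vanish from "ip address" -- by design.
# VMID 102 is a guess based on tap102i0; adjust to your actual VMID.
qm config 102 2>/dev/null | grep -i hostpci || echo "no hostpci entry found"
```

If that prints a hostpci line, the disappearing interfaces would be expected passthrough behavior rather than a hardware fault.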
I have not yet finished isolating whether the NIC itself is defective, but since the previous card showed the same non-working ports, I now suspect the original card is probably fine and the problem lies with the server's PCI slot instead.
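For reference, here is the sort of check I intend to run to see whether the card is still enumerated on the PCI bus and which kernel driver currently claims it. This is just a sketch: the 05:00 address is derived from the enp5s0f0/enp5s0f1 names, and the sed pattern only handles single-digit bus/slot numbers like these.

```shell
# enp<bus>s<slot>f<function> -> PCI address, e.g. enp5s0f0 -> 05:00.0
# (only valid for single-digit bus/slot numbers, as in this case)
ifname="enp5s0f0"
pciaddr=$(echo "$ifname" | sed -E 's/^enp([0-9]+)s([0-9]+)f([0-9]+)$/0\1:0\2.\3/')
echo "PCI address: $pciaddr"

# Is the device still on the bus, and which driver owns it? If the
# "Kernel driver in use" line says vfio-pci instead of the NIC driver,
# the card was detached for passthrough rather than dying.
lspci -nnk -s "${pciaddr%.*}" 2>/dev/null || true
```

If lspci shows nothing at all for that address even after a reboot, that would point at the slot or the card itself rather than a driver/passthrough issue.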
My question is: is it normal for PCI NIC interfaces to disappear from the "ip address" output? The answer will of course determine how I proceed with troubleshooting the rest of this situation.