D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev.C1) [Realtek RTL8169] - no IP in VM

fakbanior
Dec 10, 2024
Hello,
First of all, I have to say I've racked my brains over this, but with no success, so I need some help.

For context, I have 3 interfaces (1 onboard and 2 NICs, configured as vmbr0/1/2)
and 3 VMs (2 working, 1 not).

(1)00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]
Subsystem: ASUSTeK Computer Inc. Ethernet Connection (2) I219-V [1043:8672]
Kernel driver in use: e1000e
Kernel modules: e1000e
--
(2)06:00.0 Ethernet controller [0200]: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev.C1) [Realtek RTL8169] [1186:4302] (rev 10)
Subsystem: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev.C1) [Realtek RTL8169] [1186:4302]
Kernel driver in use: r8169
Kernel modules: skge, r8169

--
(3)08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 02)
Subsystem: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller [10ec:0123]
Kernel driver in use: r8169
Kernel modules: r8169

Interfaces (1) and (3) are working as expected without any issues. They are bridged as vmbr0 and vmbr2.
But interface (2) - vmbr1 - will not work under any condition. I tried installing Linux Mint 21.3, but I get no network (if I switch the VM's network to vmbr2, it works with no issues).
I also tried the interface with Ubuntu 20.04 (works with vmbr2 but not with vmbr1).
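For reference, the vmbr1 stanza in my /etc/network/interfaces looks roughly like this (a sketch: the address is masked like the rest of this post, and the /24 prefix length is an assumption; the bridge port name matches the outputs further down):

```
auto vmbr1
iface vmbr1 inet static
        address XXXXXXXXXXXXX8/24
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0
```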

The drivers loaded are the following:

root@proxmox:~# ethtool -i enp0s31f6 ( onboard one )
driver: e1000e
version: 6.8.4-2-pve
firmware-version: 0.8-4
expansion-rom-version:
bus-info: 0000:00:1f.6
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

root@proxmox:~# ethtool -i enp6s0 ( D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev.C1) [Realtek RTL8169] ) the one that does not work
driver: r8169
version: 6.8.4-2-pve
firmware-version:
expansion-rom-version:
bus-info: 0000:06:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

root@proxmox:~# ethtool -i enp8s0 ( Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller )
driver: r8169
version: 6.8.4-2-pve
firmware-version:
expansion-rom-version:
bus-info: 0000:08:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

ip link show
enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 70:4d:7b:89:99:4f brd ff:ff:ff:ff:ff:ff
vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 70:4d:7b:89:99:4f brd ff:ff:ff:ff:ff:ff

enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether 1c:7e:e5:23:f5:9e brd ff:ff:ff:ff:ff:ff
vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 1c:7e:e5:23:f5:9e brd ff:ff:ff:ff:ff:ff


enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UP mode DEFAULT group default qlen 1000
link/ether 00:e0:4c:68:1d:b0 brd ff:ff:ff:ff:ff:ff
vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:e0:4c:68:1d:b0 brd ff:ff:ff:ff:ff:ff


The drivers are loaded:

root@proxmox:~# /sbin/modprobe skge
root@proxmox:~# /sbin/modprobe r8169
root@proxmox:~# /sbin/modprobe e1000e
root@proxmox:~# /sbin/lsmod | grep r8169
r8169 110592 0
root@proxmox:~# /sbin/lsmod | grep skge
skge 61440 0
root@proxmox:~# /sbin/lsmod | grep e1000e
e1000e 344064 0

I tried to blacklist "skge", but no success:
echo blacklist skge >> /etc/modprobe.d/blacklist-skge.conf
systemctl restart networking
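For reference, the conf file I ended up with is below. From what I read, a blacklist entry only prevents the module from autoloading; it does not unload a module that is already resident (and restarting networking never unloads kernel modules), so I assume it also needs to be followed by `update-initramfs -u` and a reboot (or an immediate `modprobe -r skge`):

```
# /etc/modprobe.d/blacklist-skge.conf
blacklist skge
```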

All bridges have IPs and I can ping vmbr1 without any issue (I masked unnecessary info):

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXXXXXXXXXXXX7 netmask XXXXXXXXXXXXX broadcast 0.0.0.0
inet6 fe80::724d:7bff:fe89:994f prefixlen 64 scopeid 0x20<link>
ether 70:4d:7b:89:99:4f txqueuelen 1000 (Ethernet)
RX packets 24302 bytes 3957380 (3.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21622 bytes 10716000 (10.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vmbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXXXXXXXXXXXX8 netmask XXXXXXXXXXXXX broadcast 0.0.0.0
inet6 fe80::1e7e:e5ff:fe23:f59e prefixlen 64 scopeid 0x20<link>
ether 1c:7e:e5:23:f5:9e txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 1286 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vmbr2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXXXXXXXXXXXX9 netmask XXXXXXXXXXXXX broadcast 0.0.0.0
inet6 fe80::2e0:4cff:fe68:1db0 prefixlen 64 scopeid 0x20<link>
ether 00:e0:4c:68:1d:b0 txqueuelen 1000 (Ethernet)
RX packets 7185 bytes 967255 (944.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 194 bytes 9076 (8.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ping XXXXXXXXXXXXX8
PING XXXXXXXXXXXXX8 (XXXXXXXXXXXXX8) 56(84) bytes of data.
64 bytes from XXXXXXXXXXXXX8: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=2 ttl=64 time=0.025 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=3 ttl=64 time=0.027 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=4 ttl=64 time=0.027 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=5 ttl=64 time=0.024 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=6 ttl=64 time=0.021 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=7 ttl=64 time=0.028 ms
64 bytes from XXXXXXXXXXXXX8: icmp_seq=8 ttl=64 time=0.023 ms
^C
--- XXXXXXXXXXXXX8 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7176ms
rtt min/avg/max/mdev = 0.021/0.024/0.028/0.002 ms

After searching, I saw an older post saying this issue was fixed with kernel 6.x, but that does not seem to be the case for me.
Kernel loaded: 6.8.4-2.
If you have any ideas, kindly send them my way.
If any other logs are needed from my side, do let me know.

Br
 
