Realtek 8125B jumbo frame "issue"

tlex
Member · Mar 9, 2021
So I have this "issue" I can't figure out.
I'm trying to enable jumbo frames on my "small" homelab.

My Proxmox server has a 4-port Realtek 8125B 2.5GbE card. My LAN port (enp8s0) is connected to my main switch, a TP-Link TL-SG3210XHP-M2.
Jumbo frames are set to 9000 everywhere, and the cable between the Proxmox server and the main switch is 2 feet of shielded Cat6.
For some reason I don't understand, no MTU larger than 1500 will pass.
Is it a driver issue? Any ideas to help me figure it out?

Code:
lspci | grep Realtek
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)

Code:
cat /etc/network/interfaces


auto lo
iface lo inet loopback


iface enp7s0 inet manual
        mtu 9000


auto enp8s0
iface enp8s0 inet manual
        mtu 9000
#LAN


auto enp9s0
iface enp9s0 inet manual
        mtu 9000
#WAN


auto enx000ec6899483
iface enx000ec6899483 inet manual
        mtu 9000


iface enp6s0 inet manual
        mtu 9000


auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.xxx.xxx/24
        gateway xxx.xxx.xxx.xxx
        bridge-ports enp8s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
        mtu 9000
#LAN


auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp9s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000
        bridge-ageing 0
#WAN


auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0
        mtu 9000


auto vmbr3
iface vmbr3 inet static
        address xxx.xxx.xxx.xxx/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
#TEST
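In case it matters: config changes were applied with ifreload -a (PVE uses ifupdown2), and the kernel's view of the MTUs can be double-checked afterwards:

Code:
ifreload -a                              # reload /etc/network/interfaces
ip -br link | grep -E 'enp8s0|vmbr0'     # confirm the MTU the kernel applied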

Code:
ifconfig | grep -i MTU
enp6s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 9000
enp8s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
enp9s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 9000
fwbr1005i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
fwbr104i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
fwln1005i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
fwln104i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
fwpr1005p0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
fwpr104p0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
tap1001i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 9000
tap1004i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 9000
tap1005i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 9000
veth100i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth101i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth102i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth104i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth105i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth106i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth108i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth109i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth111i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth112i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth113i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
veth115i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
vmbr1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 9000
vmbr2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 9000
vmbr3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000

Code:
ethtool enp8s0
Settings for enp8s0:
        Supported ports: [ TP    MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  100baseT/Half 100baseT/Full
                                             1000baseT/Full
                                             2500baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 2500Mb/s
        Duplex: Full
        Auto-negotiation: on
        master-slave cfg: preferred slave
        master-slave status: slave
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        MDI-X: Unknown
        Supports Wake-on: pumbg
        Wake-on: d
        Link detected: yes
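In case offloads or ring sizes play a role here, these are the standard ethtool queries; I can post the output if useful:

Code:
ethtool -k enp8s0 | grep -E 'tcp-segmentation|scatter-gather|rx-checksumming'
ethtool -g enp8s0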


Code:
ethtool -i enp8s0 | grep driver | awk '{print $2}'
r8169
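So the kernel binds the in-tree r8169 driver to this RTL8125. If the driver turns out to be the suspect, I guess one test would be Realtek's out-of-tree r8125 driver instead; a sketch only, assuming the r8125 module has already been built and installed (e.g. via DKMS from Realtek's source):

Code:
echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf
update-initramfs -u && reboot
# after reboot, confirm which driver is bound:
ethtool -i enp8s0 | grep driver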

Code:
ip a s vmbr0
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 1c:fd:08:74:76:7a brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.xxx.xxx/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::1efd:8ff:fe74:767a/64 scope link
       valid_lft forever preferred_lft forever
root@pve:~# ip a s enp8s0
4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 1c:fd:08:74:76:7a brd ff:ff:ff:ff:ff:ff

Code:
pveversion
pve-manager/8.2.2/9355359cd7afbae4 (running kernel: 6.8.4-3-pve)

Code:
ping -s 1472 xxx.xxx.xxx.xxx   # TP-Link switch
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1472(1500) bytes of data.
1480 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=64 time=0.761 ms



Code:
ping -s 1473 xxx.xxx.xxx.xxx   # TP-Link switch
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) 1473(1501) bytes of data.
^C
--- xxx.xxx.xxx.xxx ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4092ms
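To make sure fragmentation isn't masking anything, the same test can be repeated with the Don't Fragment bit set (ICMP payload = MTU minus 28 bytes of IP + ICMP headers):

Code:
ping -M do -s 1472 -c 3 xxx.xxx.xxx.xxx   # 1472 + 28 = 1500, should pass
ping -M do -s 8972 -c 3 xxx.xxx.xxx.xxx   # 8972 + 28 = 9000, jumbo test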


[Attachment: mtu.jpg]
 
Take two PCs, direct-connect them, and test without the switch as middleman.
Boot a live ISO of Ubuntu, Debian, or any other mainstream OS, set up the networking, and test; a minimal sketch follows below.
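(A sketch only; the interface name and the 10.99.0.0/24 test addresses are placeholders:)

Code:
# on each box; use 10.99.0.2 on the second one
ip link set eth0 mtu 9000
ip addr add 10.99.0.1/24 dev eth0
ip link set eth0 up
ping -M do -s 8972 -c 3 10.99.0.2    # 8972 + 28 header bytes = 9000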

Reduce the number of components, change variables (one at a time), and try to pin down the culprit. Your switch seems to be "sophisticated" - can you ping out from it? If so, can you jumbo-ping another host?

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Well, in the config I tested, it's between the Proxmox host and the switch; nothing else in between.

PC to PC = OK
PC to switch = OK
Proxmox host to switch (large MTU) = no go
PC to switch to Proxmox (large MTU) = no go

I can ping from the switch, but its CLI doesn't seem to allow changing the packet size.

That's why I posted here, testing between Proxmox and the switch ;)
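One thing I can still do from a PC through the switch is let tracepath report the discovered path MTU toward the Proxmox host:

Code:
tracepath -n xxx.xxx.xxx.xxx    # prints the pmtu it negotiates along the path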
 
And what is the result of a directly connected PC<>Proxmox?

Get rid of the switch and VLANs, and test the basic config.

Keep in mind that PVE is Debian with an Ubuntu kernel. It's not a closed black-box OS. Many people use jumbo frames successfully. Your issue may be with the NIC, the switch, or the firmware of either one, or a combination of them.

To find out whether it's a driver/kernel issue, you need to simplify your environment to the bare minimum and exclude everything else.
There is a remote chance that someone has your exact setup and ran into the same issue, but until you hear from that person, you can try to figure it out on your own.
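Concretely, a bare-minimum /etc/network/interfaces stanza for the test port might look like this (the address is a placeholder; no bridge, no VLANs):

Code:
auto enp8s0
iface enp8s0 inet static
        address 10.99.0.1/24
        mtu 9000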

The other option, as mentioned: try a different kernel.
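On PVE you can do that by pinning one of the already-installed kernels (the version string below is just an example; check the list first):

Code:
proxmox-boot-tool kernel list               # show installed kernels
proxmox-boot-tool kernel pin 6.5.13-5-pve   # example version; reboot after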


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
