[SOLVED] Proxmox Ceph not responding with MTU 9000

ermanishchawla

Well-Known Member
Mar 23, 2020
I have a 4-node Ceph cluster. The network configuration on each node is as follows:

Code:
auto bond0
iface bond0 inet static
    address 172.19.X.Y/24
    bond-slaves eno2 eno3
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno2
    mtu 9000


Whenever I set the MTU to 9000, Ceph seems to hang and I am not able to run the ceph -s command.

What could be the reason?

Everything works perfectly with MTU 1500.
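
For reference, a quick way to check whether the MTU actually took effect on the bond and on each slave (interface names as in my config above):

Code:
# bond0, eno2 and eno3 should all report mtu 9000 in the first line of output
ip link show bond0
ip link show eno2
ip link show eno3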
 
Yes, it is configured; I use Cisco Nexus switches.

So am I, and I have no issues pinging with a packet size of 9000:

Code:
root@iceph01-gh79:~# ping -s 9000 iceph02-gh79-vmbr0-osd.jvm.de
PING iceph02-gh79-vmbr0-osd.jvm.de (10.11.7.26) 9000(9028) bytes of data.
9008 bytes from iceph02-gh79-vmbr0-osd.jvm.de (10.11.7.26): icmp_seq=1 ttl=64 time=0.599 ms
9008 bytes from iceph02-gh79-vmbr0-osd.jvm.de (10.11.7.26): icmp_seq=2 ttl=64 time=0.514 ms

Have you also set the MTU on your physical interfaces? Mine have mtu 9000 configured, as does the bond:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
    mtu 9000

iface eno2 inet manual
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
    bridge_vids 1-200
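
Once the bond is up, you can also double-check what the bonding driver negotiated; the mode and per-slave link state are reported under /proc:

Code:
# shows the bonding mode, the active slave and each slave's MII status
cat /proc/net/bonding/bond0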

And just because I am curious: what's the reason to go with active/backup instead of configuring vPC on your Nexus?
 
Yes, it is configured on the interfaces as well.

eno2 and eno3 are set with MTU 9200.

I am able to ping switch1 and switch2 with a jumbo packet size, so the switch configuration is alright.

I have not configured vPC yet; I am doing a proof of concept, so for the time being it is set up as active-backup.
 
It's still not working.

Try to ping with a size of 8972: you have to subtract the header size (28 bytes) on Linux. If this doesn't work, or it complains about fragmentation, you probably need to look at the jumbo-frame config on your switch.
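
For example (substitute your peer's address); -M do sets the Don't Fragment bit, so the ping fails with an explicit error instead of silently fragmenting when a hop has a smaller MTU:

Code:
# 8972 bytes payload + 28 bytes ICMP/IP headers = 9000 bytes on the wire
ping -M do -s 8972 <peer-IP>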
 
I am doing ping -s 8000 <IP> and it is still not working.
 
Yes, I am able to ping the node itself.

My switch configuration also has the MTU enabled:

Code:
interface Ethernet1/1
  switchport
  mtu 9216
 
It seems it was a switch issue after all. I configured jumbo-frame support system-wide on the switch, and now it works.
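
For anyone who finds this later: on my Nexus setup, the per-interface mtu command alone was not enough; jumbo frames had to be enabled system-wide via a network-qos policy. Roughly like this (a sketch, the exact syntax depends on the Nexus model and NX-OS version):

Code:
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo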

Thanks for the help, everyone!
 
