MTU 9000 networking

  1. MTU-size, CEPH and CSI connectivity

    Hi, in a couple of weeks we'll get our server hardware delivered. We'll set up Ceph storage on 5 nodes, default settings, separate network cards. For the storage NICs I understand it is best to use MTU 9000 for higher throughput. Here comes the question: should I use the default MTU so I can...
  2. Linux Bridge reassemble fragmented packets

    Hi to all, we're experiencing a problem with the firewall on a Proxmox cluster, and after a few tests it seems it's a Linux bridge problem. The packet capture shows that fragmented packets passing through the bridge are reassembled and sent out. This is causing us some problems, even if the Proxmox cluster...
  3. [SOLVED] Possible MTU misconfiguration detected

    Hello, I changed the MTU on both nodes to 8988, and I got what appeared to be full bandwidth. But after some minutes everything breaks: iSCSI won't work, and the syslog on both nodes shows something like this: Jun 28 16:22:03 pangolin corosync[2211]: [KNET ] pmtud: possible MTU misconfiguration detected. kernel...
  4. [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

    Dear Proxmoxers! A strange problem happened to one of our cluster nodes tonight while we were trying to increase the MTU on the bond+vmbr interfaces so we can use 9000 on containers. The need for jumbo frames comes from running Ceph gateway containers with Samba as a frontend for video production...
  5. Speed problem with MTU 9000 hypervisor and MTU 1500 VMs

    Hello, we see a speed loss with the MTU set to 9000 and VMs using an MTU of 1500. Here is the configuration; we have 2 Proxmox hosts connected to a switch: vmbr1 is an OVS bridge with MTU 9000. bond0 is an OVS bond with MTU 9000, each member of the bond has an MTU of 9000 set via pre-up, and is linked to vmbr1...
  6. Problems with MTU

    Hello. We are currently having problems with the MTU size. A short description of our environment: we have a NetApp attached via NFS to Cisco 3750 switches. The NFS shares are mounted by a PVE cluster of five nodes (Dell R720 servers). The NetApp and the switches...
  7. Message too long, mtu=1500 on OVSInt Port

    I just ran into trouble when enabling multicast on the OVSIntPorts. My cluster network uses 2 Intel 10G ports bonded together, 1 bridge, 2 IntPorts. On the switch side I added a trunk and enabled jumbo frames. After setting the MTU to 8996 (according to the wiki), the Ceph cluster stops working, while...
  8. mtu 9000 on OVS IntPort for Proxmox management

    Hi, I would like to set MTU 9000 on my Proxmox management port, which I configured as a tap port on an OVS switch on which I enabled MTU 9000. My /etc/network/interfaces looks like this: auto eth0 iface eth0 inet manual auto eth1 iface eth1 inet manual allow-vmbr0 bond0 iface bond0 inet manual...
  9. Persistent network set mtu 9000

    Hello all, I'm new to Proxmox, have 4.1 up and running with a bonded interface for storage. I can manually add 'mtu 9000' to the bond section of /etc/network/interfaces, ifdown/ifup the bond, and all is well. Where would I add this config parameter so the setting persists across reboots? I see in the...
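
Several of the threads above come down to the same pattern: `mtu 9000` must be set consistently on every layer of the stack (physical NICs, bond, bridge) in /etc/network/interfaces so it persists across reboots. A minimal sketch, assuming a Linux bridge setup with ifupdown2; the interface names (eno1, eno2, bond0, vmbr1) and the address are examples, not taken from any of the threads:

```
# /etc/network/interfaces -- sketch; interface names and address are examples
auto eno1
iface eno1 inet manual
    mtu 9000

auto eno2
iface eno2 inet manual
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.11/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```

A quick way to check that the path actually carries jumbo frames end to end is `ping -M do -s 8972 <peer>` (9000 minus 28 bytes of IP and ICMP headers, with fragmentation forbidden); if that fails while smaller payloads work, some hop is still at MTU 1500.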
