MTU 9000 networking

  1. A

    MTU 9000 curiosity

    Hello everyone, we have 8 Proxmox 8.1.1 servers with Ceph 18.2.2 Reef and kernel 6.5.13-5-pve. Changing the MTU to 9000 for the Corosync network went without problems; the only thing that surprised us is that we had to restart every Ceph OSD once, because slow ops were reported on all of them...
  2. D

    Slow backups and poor performance of VMs during backups

    I hope I'm providing enough info off the bat to give a good idea of what is going on, but I am a little lost and just have a lot of questions, I guess. I will also do my best to update this with what has been answered, and link to or state the answer/solution. The setup: we have 4 HP DL360p...
  3. S

    Cluster and CEPH traffic on the same network

    Hi! I'm implementing a 3-node Proxmox cluster with CEPH storage. Each node has a 4Gbit LACP interface (made of 4*1Gbit physical) as the public network, and two 10Gbit interfaces for CEPH traffic. Is it a good idea to have cluster and CEPH traffic both on the 10Gbit interface? Or should I use the public interface...
  4. V

    vlan aware bridge and mtu 9000

    Hello, I am trying to solve an issue where I can't reach my storage when using a VLAN-aware bridge (and MTU 9000); a fuller config sketch follows this list. My config looks like this: iface enp94s0f1np1 inet manual auto vmbr1 iface vmbr1 inet manual bridge-ports enp94s0f1np1 bridge-stp off bridge-fd 0...
  5. itNGO

    [SOLVED] Question about MTU

    If I create a bond with 2 interfaces and set MTU 9000 on the bond, do I also need to configure the MTU on the ports/slaves in the network config? (See the bond sketch after this list.)
  6. G

    MTU-size, CEPH and public network

    Hi, in a couple of days we'll get our 7-node hardware delivered (4 NICs: BOND4LAN, BOND4CEPH). We'll set up OSDs on 5 nodes. For the cluster network I understand it is best to use MTU 9000 for faster object replication. The remaining 2 nodes will only connect as CEPH clients to gain access...
  7. A

    Linux Bridge reassemble fragmented packets

    Hi to all, we're experiencing a problem with the firewall on a Proxmox cluster, and after a few tests it seems it's a Linux bridge problem. The packet capture shows that fragmented packets passing through the bridge are reassembled and sent out. This is causing us some problems, even if the Proxmox cluster...
  8. T

    [SOLVED] Possible MTU misconfiguration detected

    Hello, I changed the MTU on both nodes to 8988 and got what looks like full bandwidth. But after some minutes everything breaks. iSCSI won't work, and in the syslog on both nodes there is something like this: Jun 28 16:22:03 pangolin corosync[2211]: [KNET ] pmtud: possible MTU misconfiguration detected. kernel...
  9. T

    [SOLVED] Interface vlans not created for containers and VMs after uninstalling ifupdown2

    Dear Proxmoxers! A strange problem happened to one of our cluster nodes tonight while we were trying to increase the MTU on the bond+vmbr interfaces so we can use 9000 on containers. The need for jumbo frames comes from running Ceph gateway containers with Samba as a frontend for video production...
  10. A

    Speed problem with MTU 9000 hypervisor and MTU 1500 VMs

    Hello, we have speed loss with the MTU set to 9000 and VMs using an MTU of 1500 (an OVS config sketch follows this list). Here is the configuration; we have 2 Proxmox hosts connected to a switch: vmbr1 is an OVS bridge with MTU 9000. bond0 is an OVS bond with MTU 9000; each member of the bond has an MTU of 9000 set using pre-up, and is linked to vmbr1...
  11. S

    Problems with MTU

    Hello. We are currently having problems with the MTU size. Let me briefly describe our environment: we have a NetApp that is attached via NFS to Cisco 3750 switches. The NFS shares are mounted by a PVE cluster of five nodes (Dell R720 servers). The NetApp and the switches...
  12. C

    Message too long, mtu=1500 on OVSInt Port

    I just ran into trouble with enabling multicast on the OVSIntPorts. My cluster network uses 2 Intel 10G ports bonded together, 1 bridge, 2 IntPorts. On the switch side I added a trunk and enabled jumbo frames. After setting the MTU to 8996 (according to the wiki), the Ceph cluster stops working, while...
  13. M

    mtu 9000 on OVS IntPort for Proxmox management

    Hi, I would like to set MTU 9000 on my Proxmox management port, which I configured as a tap port for an OVS switch on which I enabled MTU 9000 (see the OVS sketch after this list). My /etc/network/interfaces looks like this: auto eth0 iface eth0 inet manual auto eth1 iface eth1 inet manual allow-vmbr0 bond0 iface bond0 inet manual...
  14. H

    Persistent network set mtu 9000

    Hello all, new to Proxmox, have 4.1 up and running with a bonded interface for storage. I can manually add 'mtu 9000' to the bond section of /etc/network/interfaces, if-down/if-up the bond, and all is well. Where would I add this config parameter so the setting is persistent across reboots (see the bond sketch after this list)? I see in the...
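
For the bond questions in threads 5 and 14, a minimal /etc/network/interfaces sketch is shown below. It assumes ifupdown2 as shipped with current Proxmox VE; the interface names (eno1, eno2, bond0, vmbr0) and the address are placeholders, not taken from any of the threads above. With ifupdown2 the bond usually propagates its MTU to the slaves, but setting it explicitly on every layer is harmless and makes the intent obvious; the switch ports must allow jumbo frames as well.

    # /etc/network/interfaces (sketch only; names and address are placeholders)
    auto eno1
    iface eno1 inet manual
        mtu 9000

    auto eno2
    iface eno2 inet manual
        mtu 9000

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        mtu 9000

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000

Because this lives in /etc/network/interfaces, it survives reboots, which answers thread 14 directly. A quick end-to-end check for jumbo frames is a non-fragmenting ping, e.g. ping -M do -s 8972 <peer> (9000 bytes minus 28 bytes of IP/ICMP headers); if that fails while normal pings work, something in the path is still at MTU 1500.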
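
For the VLAN-aware bridge in thread 4, a sketch built on the fragment quoted there could look like the following. The VLAN ID 100 and the address are illustrative assumptions; the point is that the MTU has to be raised on the physical port, on the bridge, and on any VLAN interface stacked on top, otherwise one layer silently stays at 1500.

    auto enp94s0f1np1
    iface enp94s0f1np1 inet manual
        mtu 9000

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp94s0f1np1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

    # storage VLAN on top of the bridge; VLAN 100 and the address are made up
    auto vmbr1.100
    iface vmbr1.100 inet static
        address 10.10.10.11/24
        mtu 9000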
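
For the Open vSwitch threads (10, 12 and 13), the usual pattern is to set the MTU on the OVS bond, the OVS bridge and every OVSIntPort, following the style of the Proxmox OVS wiki examples. This is only a sketch: the interface names, bond mode, VLAN tag and address are assumptions, and older setups used pre-up ip link commands instead of ovs_mtu.

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_bonds eno1 eno2
        ovs_options bond_mode=balance-tcp lacp=active
        ovs_mtu 9000

    auto vmbr1
    iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 mgmt
        ovs_mtu 9000

    # management IntPort; name, tag and address are placeholders
    auto mgmt
    iface mgmt inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=10
        address 192.0.2.21/24
        ovs_mtu 9000

The important part is that every hop (bond members, bond, bridge, IntPorts and the switch trunk) agrees on the jumbo MTU; a single interface left at 1500 is enough to produce errors like the "Message too long, mtu=1500" reported in thread 12.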
