Hello,
I am experiencing an issue with configuring the MTU for the interface used by Ceph. When I set the MTU to 9000 on both the server interface and the physical switch, I can successfully ping with a maximum packet size of 8958. However, I am unable to access resources in the Ceph cluster...
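For reference, a quick way to verify the jumbo path end to end (the target address below is a placeholder for another Ceph node): on a clean MTU 9000 path, the largest ICMP payload that passes with the don't-fragment flag set is 8972 (9000 minus 20 bytes of IP header and 8 bytes of ICMP header), so a maximum of 8958 suggests some extra encapsulation or a lower MTU somewhere along the path.
# probe the path MTU towards the other Ceph node, fragmentation disabled
ping -M do -s 8972 -c 4 10.10.10.2
# if this fails, step the payload size down until it passes to find the effective path MTU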
Hello,
We recently built a 3-node PVE hyper-converged cluster with Ceph and I was wondering about the following:
For the MTU size on interfaces, does it matter where it is applied? I believe I've found that it is unnecessary on the Linux bridge, as it inherits it from the bond, but what about the...
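For comparison, a common and conservative layout is to set the MTU explicitly at every level (physical ports, bond, bridge) rather than relying on inheritance; a minimal /etc/network/interfaces sketch with placeholder interface names, not taken from this thread:
auto eno1
iface eno1 inet manual
    mtu 9000

auto eno2
iface eno2 inet manual
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000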
Hello all,
we operate a Proxmox cluster with 3 nodes. The network settings look like this on all 3 nodes:
As you can see, the bridge is VLAN aware. We need this because some of our machines need access to more than 32 VLANs, but we cannot add more than 32 NICs.
So the VMs then have 1 interface...
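For context, a VLAN-aware bridge in PVE is typically declared like this (a minimal sketch with placeholder names); each guest then gets a single NIC on the bridge and the VLAN tag is set per virtual NIC instead of adding one NIC per VLAN:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094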
I've had an issue for a while now where the network connection speed drops on my Proxmox host (I check it with iperf3).
I have a 2.5 Gbit connection, but sometimes the speed drops to 1 Gbit or even 100 Mbit. The odd thing is: I can resolve this 99% of the time by changing the MTU in the GUI from...
Lately, without any configuration change on my part except updating the Proxmox host (7.4-17 now) and the LXC guest Samba server,
I ran into a huge headache connecting from my Windows 11 SMB client to the above Samba server.
Basically, whenever I tried to open the drive or browse the...
Hi!
I apologize for opening up yet another much-discussed topic. At the moment everything appears to be working well, but I would still like confirmation that my reasoning and my specific configuration are sound.
My setup is:
the computer has four physical interfaces, two of them are 10G, and...
In my syslog, I'm seeing the following over and over, anywhere from every few minutes to every hour or so.
Apr 22 13:54:55 server9 corosync[3275313]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Apr 22 13:54:55 server9 corosync[3275313]: [KNET ] host: host: 1 has no active links...
We have an overlay network (configured as described below) which worked fine on PVE 6.4, but after migrating the containers to a PVE 7.2 node we have noticed some odd behavior: packets above a certain size are discarded by the vxlan interface, but only when sent from a container; VMs continue...
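One thing worth checking here (a general sketch, not a confirmed fix from this thread): VXLAN adds roughly 50 bytes of encapsulation overhead, so interfaces inside the containers usually need an MTU about 50 bytes below the underlying NIC, which can be set per container NIC:
# CT ID, bridge name and MTU value are placeholders; note that pct set replaces the whole net0 definition
pct set 100 -net0 name=eth0,bridge=vxnet1,ip=dhcp,mtu=1450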
I am hoping I provide enough info off the bat to give a good idea of what is going on, but I am a little lost and just have a lot of questions, I guess. I will also do my best to update this post with what has been answered, and link to or summarize the answer/solution.
The setup:
So we have 4 HP DL360p...
Hey there,
after testing the behavior a bit, I noticed what are probably two bugs.
One "bug" is resolved with an ifreload or with ifup -a.
1. An ovs_bonds interface has an MTU of 1500 despite it being set to 9000 (ovs_mtu 9000).
netstat -i (only bond0 and vmbr9)
Kernel Interface table
Iface...
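For comparison, a typical OVS jumbo-frame layout sets ovs_mtu on the bond and the bridge in /etc/network/interfaces (member NIC names are placeholders; treat this as a sketch rather than a confirmed fix):
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr9
    ovs_bonds enp1s0 enp2s0
    ovs_mtu 9000

auto vmbr9
iface vmbr9 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
    ovs_mtu 9000
After an ifreload -a, the value OVS actually applied can be read back per member with ovs-vsctl get Interface enp1s0 mtu, or requested explicitly with ovs-vsctl set Interface enp1s0 mtu_request=9000.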
Throwing this solution up in case someone else runs into the issue. Scroll down to the bottom for the key takeaways and dev recommendations.
Environment
3x PVE 7.0-11 nodes clustered together
Every node has a ZFS pool with a GlusterFS brick on it
Glusterd version 9.2
Gluster is configured in a...
I have a strange issue where the MTU size of VLANs is not set.
My /etc/network/interfaces:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please...
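For reference, a VLAN sub-interface needs its own mtu line, and it cannot exceed the MTU of its parent device; a minimal sketch with a placeholder address and VLAN ID:
auto vmbr0.40
iface vmbr0.40 inet static
    address 192.168.40.10/24
    mtu 9000
# the parent (vmbr0) must itself carry mtu 9000, since a VLAN interface's MTU cannot exceed its parent's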
Hi, I am running Proxmox on an HP thin client T630. The internet connection of the server is 100 down/50 up, but over both WireGuard and OpenVPN I get a maximum of 23-25 Mbps either way. I tried them in both a VM and an LXC with no change. I played with the MTU from 1420/1500 all the way down to 1200 - again no...
Consider the following example config in /etc/network/interfaces:
auto lo
iface lo inet loopback
auto mgmt0
iface mgmt0
auto mgmt1
iface mgmt1
auto north0
iface north0
mtu 9000
auto north1
iface north1
mtu 9000
auto bond0
iface bond0
bond-slaves north0 north1
bond-miimon 100...
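One way to see where the MTU actually ends up with a config like this (just a verification sketch, not a fix): reload and read back the live values, keeping in mind that the bonding driver aligns a slave's MTU with the bond's MTU when enslaving, so it is worth checking whether the 9000 set on north0/north1 actually sticks if bond0 itself stays at the default 1500.
ifreload -a
ip link show bond0 | grep -o 'mtu [0-9]*'
cat /sys/class/net/north0/mtu /sys/class/net/north1/mtu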
Hi.
I have a requirement that all traffic leaving Proxmox uses a certain MTU lower than 1500, including traffic from VMs.
What is the way to configure this? Should we just set the MTU on the public interface and the bridge?
To ensure all traffic uses a certain MTU, must the MTU be...
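One approach, sketched under the assumption that the VMs use VirtIO NICs (VM ID and bridge name are placeholders): set the lower MTU on the physical uplink and the bridge in /etc/network/interfaces, then let each VM NIC inherit the bridge MTU via the mtu=1 option.
# mtu=1 tells a VirtIO NIC to use the bridge MTU instead of the 1500 default;
# note that this rewrites net0, so include the existing macaddr=... if the MAC must be kept
qm set 100 --net0 virtio,bridge=vmbr0,mtu=1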
Hi,
Right now I'm rebuilding my network because I switched the servers from Gbit to 10G, but I'm not sure how to optimize the MTU.
1.) Is it useful to switch from an MTU of 1500 to 9000 (jumbo frames)? I've heard that this would reduce the number of packets and thereby increase the...
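A practical way to answer 1.) for a specific setup is simply to measure it (the address is a placeholder for the second node): run the same iperf3 test once with MTU 1500 and once with MTU 9000 on both hosts and the switch, and compare throughput and CPU load.
# on the receiving node
iperf3 -s
# on the sending node: 30-second run with 4 parallel streams
iperf3 -c 10.10.10.2 -t 30 -P 4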
Hi,
I'm trying to set up a new cluster with three nodes through a VPN to build a small HA setup.
I'm using tinc for the VPN and OVS to bridge the nodes with GRE.
https://dryusdan.space/installer-un-cluster-proxmox-ceph-tinc-openvswitch/
It's working quite well, but I'm getting some issues with SSH and...
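For what it's worth, SSH sessions that stall over a tinc/GRE tunnel are a classic symptom of the tunnel overhead eating into the 1500-byte MTU; a common general workaround (not specific to this guide) is to clamp the TCP MSS to the path MTU on the nodes, or to lower the MTU on the tunnel-facing interfaces:
# clamp the TCP MSS of forwarded connections to whatever the path MTU allows
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu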
Hello,
I changed the MTU on both nodes to 8988, and I got what is likely the full bandwidth. But after a few minutes everything breaks: iSCSI won't work, and in the syslog on both nodes there is something like this:
Jun 28 16:22:03 pangolin corosync[2211]: [KNET ] pmtud: possible MTU misconfiguration detected. kernel...
Hello PVE-Community,
I recently set up a three-node cluster and I have this weird issue where shutting down a VM or CT changes the MTU of the bridge that is used for communication between VMs/CTs and also for the cluster link. The MTU is changed to whatever is configured on the primary...
Hello,
We see a speed loss with the MTU set to 9000 and VMs using an MTU of 1500.
Here is the configuration; we have 2 Proxmox hosts connected to a switch:
vmbr1 is an OVS bridge with MTU 9000.
bond0 is an OVS bond with MTU 9000; each member of the bond has an MTU of 9000 set via pre-up, and it is linked to vmbr1...