If you get a third node, that also speeds things up considerably. For small clusters, increasing the PG count helps a lot on the same principle: more nodes = more disks that can participate in satisfying the request.
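For reference, bumping the placement group count on an existing pool is one command per setting; this is only a sketch, with a hypothetical pool name and a PG count of 128 standing in for whatever fits your OSD count:

ceph osd pool set vm-storage pg_num 128
ceph osd pool set vm-storage pgp_num 128   # older releases need pgp_num raised to match before data rebalances

Recent Ceph releases can also handle this automatically via the pg_autoscaler.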
Still find it weird that the Mellanox NICs can do an MTU of 1592 and up just fine though. I have run MTU 9000 on all my NICs since I first started with Proxmox, and now with Proxmox 7 the ifupdown2 package gets corrupted on 2 different nodes, and all nodes exhibit MTU issues with the e1000e driver and NIC!
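One quick way to see what MTU a given NIC/driver pair will accept (on kernels and drivers that report it) is the detailed link output, using eno1 from the posts below as the example interface:

ip -d link show dev eno1   # look for "minmtu ... maxmtu ..." in the output; not every driver exposes these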
And just like that!
root@pve22:~# ip link set dev eno1 mtu 1472
root@pve22:~# ip link set dev vmbr0 mtu 1472
root@pve21:~# ip link set dev eno1 mtu 1472
root@pve21:~# ip link set dev vmbr0 mtu 1472
root@pve21:~# scp 192.168.1.22:/etc/ceph/ceph.conf ceph.conf
ceph.conf 100% 2989 4.0MB/s...
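To confirm the 1472-byte MTU actually holds end to end, a don't-fragment ping works; the ICMP payload has to be the MTU minus 28 bytes of IP+ICMP headers, so 1444 here:

ping -M do -s 1444 192.168.1.22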
Found something interesting in regards to the MTU, thank you spirit!
root@pve22:~# cat /var/log/syslog | grep PMTUD
Aug 29 17:09:28 pve22 corosync[2944]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1461
Aug 29 17:16:17 pve22 corosync[2944]: [KNET ] pmtud: PMTUD link...
root@pve21:~# ping -Mdo -s 1560 192.168.1.22
PING 192.168.1.22 (192.168.1.22) 1560(1588) bytes of data.
^C
--- 192.168.1.22 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9215ms
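As a bracket against a plain 1500-byte path, the same test with the largest payload that should fit a standard MTU (1500 - 28 = 1472) is worth running; if that also fails, the problem sits below jumbo-frame territory entirely:

ping -M do -s 1472 192.168.1.22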
Thank you @spirit - seems this doesn't work on any of my eno1's but works just fine on...
ip addr | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1592 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1592 qdisc mq master...
Thinking this might be a driver issue or some other corrupted packages, as I have the same behaviour on all three nodes even though I replaced the switch as well.
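If the e1000e driver is the suspect, at least confirming what is bound to the port is quick; eno1 is again the onboard NIC from the output above:

ethtool -i eno1          # driver (e.g. e1000e), driver version and firmware version
dmesg | grep -i e1000e   # any MTU or Tx-hang complaints from the driver land here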
OK, slowly slicing away at my forehead as I keep banging my head against the wall. I actually managed to get a Ceph quorum by switching from the network card that is for OSDs and MONs to the backend network interface....
Now checking firewalls even though I am able to telnet into both v1 and v2...
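For anyone following along, "v1 and v2" presumably means the two Ceph messenger ports; a minimal reachability test from another node would be, with 192.168.1.22 as the monitor host from the earlier output:

telnet 192.168.1.22 6789   # messenger v1
telnet 192.168.1.22 3300   # messenger v2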
And now I am unable to get the ceph-mons up, possibly related to this network issue after the Proxmox 6.4 -> 7 upgrade, or the fact that all the LVM IDs seem to have changed.
Any help is appreciated as this upgrade has been a truly horrible experience so far, with no access to my data.
The monitor...
Yes, resolved by booting into a Debian 11 LiveCD, downloading the ifupdown2 package to the Proxmox boot disk, booting back into Proxmox and overwriting the broken ifupdown2 install.
Hints were given by systemctl status networking.
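Roughly, the repair described above looks like this; the device path and package filename are illustrative assumptions, not exact values from the post, and the .deb should ideally come from the Proxmox repository rather than plain Debian:

# from the Debian 11 live environment: mount the Proxmox root and drop the package onto it
mount /dev/mapper/pve-root /mnt
apt download ifupdown2 && cp ifupdown2_*.deb /mnt/root/
# after booting back into Proxmox: overwrite the broken install and restart networking
dpkg -i /root/ifupdown2_*.deb
systemctl restart networking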
Since upgrading to 7.0 I lost all connectivity on 2 nodes (not the third though, even though they were all upgraded at the same time!).
Troubleshooting steps:
* hwaddress was added to the bridges (sample stanza below)
* tried auto/hotplug in interfaces as well
* I can bring up the interfaces with ip link set eno1 up but...
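For reference, the hwaddress entry mentioned above sits in the bridge stanza of /etc/network/interfaces; the addresses, MAC and bridge-ports here are placeholders, not values from the post:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.21/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        hwaddress aa:bb:cc:dd:ee:ff   # pin the bridge MAC to the physical NIC's address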
Hi all,
All very valid points, which is why I ran the tests on the PVE boxes themselves through a Mikrotik 10Gbit switch. The network is not breaking a sweat, and neither is the CPU. I have of course tested the network to be at real 10Gbit speed beforehand with iperf and RAM disk transfers.
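For anyone wanting to reproduce that baseline, a parallel iperf3 run between two nodes is enough to confirm line rate; the address is the node IP used earlier and the stream count is just an example:

iperf3 -s                           # on pve22
iperf3 -c 192.168.1.22 -P 4 -t 30   # on pve21: 4 parallel streams for 30 seconds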
For network...