Could anyone explain why corosync (KNET) chooses the link with the *highest* priority as the best one, instead of the lowest (as written in the PVE wiki)?
Quite confused by corosync 3, indeed...
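For reference, this is roughly how I'd express it in corosync.conf (a minimal sketch; link_mode passive and the priority values are just example numbers) - with knet in passive mode the link with the highest knet_link_priority is the preferred one:

totem {
    version: 2
    cluster_name: example
    link_mode: passive
    interface {
        linknumber: 0
        # 10GbE link - under knet the highest priority wins, so this one is preferred
        knet_link_priority: 20
    }
    interface {
        linknumber: 1
        # 1GbE fallback link
        knet_link_priority: 5
    }
}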
Another observation: in my setups, only nodes with no swap (ZFS as root and an NFS share as datastore) and vm.swappiness=0 in sysctl.conf are affected.
I do remember the unresolved issue with PVE 5.x where swap was used by PVE processes even with vm.swappiness=0. Couldn't this be the case...
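A quick way to check whether any process actually sits in swap (a rough sketch that just reads VmSwap from /proc; nothing PVE-specific assumed):

# list processes with non-zero swap usage, largest consumers first
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{if ($2+0 > 0) print $2 " kB\t" n}' "$f"
done | sort -rn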
Could the problem be related to jumbo frames and/or the dual-ring configuration?
I'm facing the same issue - corosync randomly hangs on different nodes.
I have two rings, 10GbE + 1GbE, with MTU 9000 on both networks.
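In case it helps others reproduce this, I verify the jumbo path end-to-end like this (pve2 is just an example peer; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

# DF bit set, so this only succeeds if MTU 9000 works along the whole path
ping -M do -s 8972 -c 3 pve2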
I don't know how this could be related, but the following was observed during boot:
[Wed Sep 11 04:37:27 2019] ACPI: Using IOAPIC for interrupt routing
[Wed Sep 11 04:37:27 2019] HEST: Table parsing has been initialized.
[Wed Sep 11 04:37:27 2019] PCI: Using host bridge windows from ACPI; if...
I've noticed that after installing a PVE 6.x cluster with a 10GbE network for inter-cluster and storage (NFS) communication, cluster nodes randomly hang - they stay reachable through the 1GbE Ethernet network, but are NOT accessible via the main 10GbE one, so neither cluster nor storage is available.
Yesterday it happened...
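When a node hangs, I check the knet link state from a node that is still reachable (corosync-cfgtool ships with corosync 3):

# show the local node id and the per-link connection status for every node
corosync-cfgtool -s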
I agree with this assumption. One should at least be warned before an upgrade.
I'm facing the same issue with 50+ OSDs and have no idea how to sort it out.
I don't have another cluster to play with and haven't found much info on how to correctly destroy all OSDs on a single node and wipe all disks (as well...
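For a single OSD, this is roughly what I'd expect the per-OSD loop to look like (a sketch; OSD 12 on /dev/sdc are just example values, and happy to be corrected on the order):

# take the OSD out of the cluster and stop its daemon
ceph osd out 12
systemctl stop ceph-osd@12
# remove it from the cluster and clean up its partitions
pveceph osd destroy 12 --cleanup
# wipe any leftover LVM metadata from the disk (may be redundant after --cleanup)
ceph-volume lvm zap /dev/sdc --destroy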
After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears in the Ceph monitoring panel.
Have I missed something during the upgrade, or is this expected behavior?
Thanks in advance
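From what I've read so far, the stats format has to be repaired per OSD while it is stopped, something like this (a sketch, assuming OSD 0; I haven't tried it yet):

# stop the OSD, rewrite its BlueStore stats in the new format, start it again
systemctl stop ceph-osd@0
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0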
root@pve2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or...