I've noticed that after installing a PVE 6.x cluster with a 10Gb network for inter-cluster and storage (NFS) communication, cluster nodes randomly hang - they are still reachable over the 1GbE Ethernet network but NOT over the main 10GbE one, so neither the cluster nor the storage is available
Yesterday it happened...
I agree with this assumption. One should at least be warned before an upgrade.
I'm facing the same issue with 50+ OSDs and have no idea how to sort it out
I don't have another cluster to play with, and I found little information on how to correctly destroy all OSDs on a single node and wipe all disks (as well...
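For what it's worth, here is a hedged sketch of the usual per-OSD teardown sequence on one node (the OSD id and device below are hypothetical placeholders, and the commands are printed rather than executed, since they are destructive on a live cluster):

```shell
#!/bin/sh
# Dry-run sketch: print the removal/wipe commands for one OSD.
# OSD_ID and DEV are hypothetical - substitute your own values.
OSD_ID=12
DEV=/dev/sdX

cat <<EOF
ceph osd out osd.${OSD_ID}
systemctl stop ceph-osd@${OSD_ID}
ceph osd purge ${OSD_ID} --yes-i-really-mean-it
ceph-volume lvm zap ${DEV} --destroy
EOF
```

Repeat per OSD on the node; `ceph-volume lvm zap --destroy` also tears down the LVM volumes so the disk can be reused cleanly.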
After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears in the Ceph monitoring panel
Did I miss something during the upgrade, or is this expected behavior?
Thanks in advance
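If it helps: that warning is typically cleared by running a BlueStore repair on each affected OSD, which rewrites the stats in the new per-pool format. A sketch, assuming a hypothetical OSD id of 0 (printed as a dry run, since stopping OSDs on a live cluster should be done one at a time):

```shell
#!/bin/sh
# Dry-run sketch: print the repair sequence for one OSD.
# OSD_ID is a hypothetical placeholder - repeat for each OSD that warns.
OSD_ID=0

echo "systemctl stop ceph-osd@${OSD_ID}"
echo "ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-${OSD_ID}"
echo "systemctl start ceph-osd@${OSD_ID}"
```

Wait for the cluster to return to HEALTH_OK before moving on to the next OSD.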
My configs:
root@pve2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or...
I'm facing almost the same issue with a couple of setups after an upgrade to 5.4. Could you show your network config and lspci output? Perhaps we can find something in common.
This morning I restarted corosync on all the nodes again. The cluster was working for a couple of minutes and then hung
May 15 09:40:10 pve1 systemd[1]: Starting Corosync Cluster Engine...
May 15 09:40:10 pve1 corosync[24728]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready...
On another cluster I'm facing a different issue, but again after an upgrade to 5.4
Could you please take a look into: https://forum.proxmox.com/threads/proxmox-cluster-broke-at-upgrade.54182/#post-250102
I'm fully confident that my network switches are configured in line with the PVE docs on IGMP snooping...
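Since corosync 2.x relies on multicast, it may still be worth double-checking snooping on the Linux bridge itself, not just the switches. A hedged sketch (the bridge name vmbr0 and node names are hypothetical; commands are printed rather than run):

```shell
#!/bin/sh
# Dry-run sketch: commands to check/disable IGMP snooping on the PVE
# bridge and to verify multicast between nodes. BR and node names are
# hypothetical placeholders.
BR=vmbr0
SNOOP=/sys/class/net/${BR}/bridge/multicast_snooping

echo "Check snooping state (1 = enabled): cat ${SNOOP}"
echo "Disable at runtime: echo 0 > ${SNOOP}"
echo "Verify multicast between nodes: omping -c 600 -i 1 -q pve1 pve2 pve3"
```

If omping shows multicast loss after a few minutes, the IGMP querier/snooping setup is the likely culprit despite the switch config.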