Generally I set the noout flag on the cluster before rebooting a node, to prevent the cluster from kicking off a lot of recovery work when the node comes back online. The strangest part is that the daemons crash while the cluster still has the noout flag set and the node is back online...
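For reference, this is roughly the sequence I follow around a reboot, using the standard Ceph CLI (nothing here is specific to my cluster):

```
# stop the cluster from marking OSDs out and rebalancing while the node is down
ceph osd set noout

# ...reboot the node, then confirm its OSDs have rejoined...
ceph osd tree

# clear the flag once everything is back up
ceph osd unset noout
```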
I'm seeing this issue in my cluster too, on most of my nodes after a reboot, and I also can't find anything related in the logs. Some of the nodes don't have daemons crashing on the update. Not sure what the cause is, since everything works fine after the reboot.
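In case it's useful to anyone else digging into this, these are the places I've been checking for crash details; they're the standard Ceph and systemd tools, and the OSD id below is just an example:

```
# crashes recorded by the Ceph crash module (available on recent releases)
ceph crash ls
ceph crash info <crash-id>

# service logs on the affected node for one of its OSDs
journalctl -u ceph-osd@0 --since "1 hour ago"
```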
I am referring to some fixes that seem to resolve issues with live migration between machines that both run Intel CPUs. I have machines running Intel E5-2660 v2 CPUs and others in my stack running Intel Xeon Silver 4110 CPUs, and when live migrating between these hosts, VMs running...
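For anyone hitting the same thing before the fix lands: one workaround that is sometimes suggested for mixed host generations like these (Ivy Bridge and Skylake here) is pinning the guests to a common baseline CPU model so the migration target doesn't see host-specific features; the VMID below is just an example:

```
# use a CPU model both host generations support
# (IvyBridge matches the older hosts; kvm64 is the most conservative fallback)
qm set 100 --cpu IvyBridge
```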
I run several deployments of Proxmox, and in some of my production clusters I would prefer not to update to this kernel version. Is there any news on these fixes being backported to the older kernel, or will the older kernel be left with this bug while focus eventually shifts to 5.19 for...
Thanks for pointing me in the right direction. Looks like I've got a bit of work to do. I got the new pool online, but since I don't have any rules for the other one, I've got to get to work learning a bit more about CEPH. This is resolved.
I have a three-node cluster with an existing CEPH cluster running 14.2.9. Each host has 6 OSDs in the cluster and this works great. I have added 3x 1TB PCIe eMLC SSDs to the cluster, one in each node. My goal is to create a separate pool using only those 3 OSDs where I can store VM OS...
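From what I've gathered so far, something along these lines should do it, assuming the new drives show up with device class ssd (the rule name, pool name and PG count are just placeholders):

```
# check that the SSDs were detected with the ssd device class
ceph osd crush class ls
ceph osd df tree

# replicated rule that only places data on OSDs with device class ssd,
# keeping one copy per host
ceph osd crush rule create-replicated ssd-only default host ssd

# new pool bound to that rule, enabled for RBD use
ceph osd pool create vm-ssd 32 32 replicated ssd-only
ceph osd pool application enable vm-ssd rbd
```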