Hello,
I've got a 4-node PVE cluster, with 3 of those nodes sharing a Ceph pool. I'm running Ceph Quincy 17.2.7. I upgraded from Proxmox 7.4 to 8.1 and followed the steps in the 7-to-8 guide, including switching to the correct repos for both PVE and Ceph. While the PVE upgrade itself seems to have gone fine, Ceph is now broken on this one node. I already unset the global noout flag from another node, but now I can't run any ceph commands at all from the 8.1 node; they just hang. The overall Ceph status is unhealthy and shows 1/3 of the cluster degraded, with the monitor, manager, and OSDs on that host all showing as down. Could someone help me figure out how to fix this, please?