On my test cluster I upgraded all my nodes from 7.0 to 7.1.
Ceph went to Pacific 16.2.7 (was 16.2.5?).
Now the monitors and managers won't start.
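When the mon and mgr daemons refuse to start after an upgrade, the first stop is usually their systemd journals. A minimal sketch, assuming the default Proxmox/Ceph unit naming (`ceph-mon@<hostname>.service`); the hostname-based instance names are an assumption, so adjust them to your actual mon/mgr IDs:

```shell
# Build the unit names for this node's mon and mgr daemons
# (assumes the default naming scheme: instance name = short hostname).
host=$(hostname -s)
mon_unit="ceph-mon@${host}.service"
mgr_unit="ceph-mgr@${host}.service"
echo "$mon_unit"
echo "$mgr_unit"

# On the affected node you would then inspect why they fail to start:
#   systemctl status "$mon_unit"
#   journalctl -u "$mon_unit" -b --no-pager | tail -n 50
#   systemctl status "$mgr_unit"
#   journalctl -u "$mgr_unit" -b --no-pager | tail -n 50
```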
I had a pool and CephFS configured with an MDS.
I've read somewhere that a pool in combination with an old CephFS (I came from PVE 6) could...
I have a simple 3-node cluster that has worked for many years and successfully survived every upgrade since Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue:
every time I reboot a node for any reason (i.e., updating to...
This will probably not hit many people, but it bit me and should be in the docs, at least until the Octopus packages are upgraded to 15.2.14.
The bug that hit me:
Fixed in 15.2.14:
It was not easy to downgrade to Octopus, but it can be done, and everything is...
After an upgrade (on Friday night) to Proxmox 7.x and Ceph 16.2, everything seemed to work perfectly.
Sometime early this morning (Sunday), the cluster crashed.
17 out of 24 OSDs will no longer start.
most of them will do a successful
but some will have an...
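For OSDs that no longer start, the per-OSD systemd journals usually show whether activation or the daemon itself is failing. A minimal sketch; the OSD ids 0 and 1 below are placeholders, not the poster's real ids — on a live cluster you would take them from `ceph osd tree down`:

```shell
# Placeholder OSD ids; substitute the ids reported down by `ceph osd tree down`.
for id in 0 1; do
  unit="ceph-osd@${id}.service"
  echo "$unit"
  # On the affected node you would then run:
  #   systemctl status "$unit"
  #   journalctl -u "$unit" -b --no-pager | tail -n 50
done
```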