I upgraded my Ceph cluster without properly following the mon upgrade, so the mons are still on leveldb.
Proxmox and Ceph were updated to the latest versions for the current release.
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
The Quincy upgrade guide recommends that the mons be using RocksDB...
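For anyone checking their own nodes: assuming the default mon data path and a mon ID equal to the node's short hostname (the Proxmox default), something like this should show which backend each monitor is on:

# prints "rocksdb" or "leveldb" for this node's monitor store
cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend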
Hi,
on my test cluster I upgraded all my nodes from 7.0 to 7.1.
Ceph went to Pacific 16.2.7 (it was 16.2.5?).
Now the monitors and managers won't start.
I had a pool and CephFS configured with an MDS.
I've read somewhere that a pool in combination with an old CephFS (I came from PVE 6) could...
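In case it helps anyone with the same symptoms, the obvious first step is the systemd status and journal of the mon/mgr units (assuming the daemon IDs match the node's short hostname, as is the Proxmox default):

# why the monitor and manager services refuse to start
systemctl status ceph-mon@$(hostname -s).service ceph-mgr@$(hostname -s).service
# full monitor log since the last boot
journalctl -b -u ceph-mon@$(hostname -s).service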
Hi everyone,
I have a simple 3-node cluster that has worked for many years and successfully made it through every update since Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue:
every time I reboot a node for any reason (i.e. updating to...
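As a rough sketch of what I check right after a reboot (standard Ceph commands, nothing specific to my setup):

ceph -s          # overall cluster health
ceph crash ls    # any daemon crash reports that have been collected
ceph osd tree    # which OSDs came back up and which stayed down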
This will probably not hit many people, but it bit me and should be in the docs, at least until the Octopus packages are upgraded to 15.2.14.
The bug that hit me:
https://tracker.ceph.com/issues/51673
Fixed in 15.2.14:
It was not easy to downgrade to Octopus, but it can be done, and everything is...
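For anyone who wants to check whether they are on an affected Octopus build before upgrading, the running and packaged versions can be compared with standard tooling (assuming packages come from the Proxmox Ceph repository):

ceph versions              # which Ceph release each running daemon reports
apt-cache policy ceph-osd  # installed vs. candidate package version on this node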
Hi all,
after an upgrade (on Friday night) to Proxmox 7.x and Ceph 16.2, everything seemed to work perfectly.
Sometime early this morning (Sunday), the cluster crashed.
17 out of 24 OSDs will no longer start.
Most of them will complete a successful
ceph-bluestore-tool fsck
but some will have an...
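For reference, this is roughly how the fsck is run per OSD (OSD 12 below is just a placeholder ID; the default OSD data path is assumed):

# the OSD must be stopped before fsck can open its store
systemctl stop ceph-osd@12.service
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12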