pacific

  1. [SOLVED] Problem: Upgrading CEPH Pacific to Quincy caused the CEPH storage pool to stop functioning

    I upgraded my CEPH cluster without properly following the mon upgrade steps, so the mons were still on leveldb. Proxmox and CEPH were updated to the latest versions for the current release. https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy The Pacific-to-Quincy upgrade guide recommends that the mons be using RocksDB...
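
    The linked page's key precondition can be verified directly. A minimal check, assuming the default mon data directory on each monitor host:

        # prints the mon's key-value backend, "rocksdb" or "leveldb";
        # any mon still on leveldb must be recreated on RocksDB
        # before the upgrade to Quincy
        cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend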
  2. [SOLVED] CEPH MONs fail after upgrade

    Hi, on my test cluster I upgraded all my nodes from 7.0 to 7.1; CEPH went to Pacific 16.2.7 (was 16.2.5?). Now the monitors and managers won't start. I had a pool and CephFS configured with an MDS. I've read somewhere that a pool in combination with an old CephFS (I came from PVE 6) could...
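
    The excerpt stops before the resolution, so as a generic first step, the failing daemon's log will show which assert or error aborts startup (unit names assume the standard systemd units on PVE):

        # why does the mon on this host refuse to start?
        systemctl status ceph-mon@$(hostname -s)
        journalctl -b -u ceph-mon@$(hostname -s) --no-pager | tail -n 50

    If the log shows the MDSMonitor sanity-check assert mentioned in the Ceph 16.2.7 release notes, those notes describe a temporary mon_mds_skip_sanity workaround for clusters with CephFS.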
  3. [SOLVED] Ceph Pacific, RADOS: objects are not deleted, but only orphaned

    RADOS, ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable). Added a file to the bucket:

        radosgw-admin --bucket=support-files bucket radoslist | wc -l
        96
        ceph df
        --- RAW STORAGE ---
        CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
        hdd    44 TiB  44 TiB  4.7 GiB  4.7 GiB   ...
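
    One cause worth ruling out here (an assumption on my part, not the thread's confirmed diagnosis) is that RGW deletions are simply still queued in its garbage collector, which can be inspected and forced:

        # list every object queued for garbage collection,
        # including entries whose grace period has not expired yet
        radosgw-admin gc list --include-all
        # run the collector now instead of waiting for its schedule
        radosgw-admin gc process --include-all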
  4. Proxmox ceph pacific cluster becomes unstable after rebooting a node

    Hi everyone, I have a simple 3-node cluster that has worked for many years and survived every upgrade since Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue: every time I reboot a node for any reason (i.e. updating to...
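
    For a planned reboot of a single node, the usual precaution is to stop Ceph from reacting to the temporarily missing OSDs; a minimal sketch:

        ceph osd set noout      # don't mark the node's OSDs "out" during the reboot
        # ... reboot the node and wait for its OSDs to rejoin ...
        ceph osd unset noout    # restore normal behaviour
        ceph -s                 # confirm HEALTH_OK before rebooting the next node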
  5. [SOLVED] [Warning] Ceph Upgrade Octopus 15.2.13 to Pacific 16.2.x

    This will probably not hit many people, but it bit me and should be in the docs, at least until the Octopus packages are upgraded to 15.2.14. The bug that hit me: https://tracker.ceph.com/issues/51673 (fixed in 15.2.14). It was not easy to downgrade back to Octopus, but it can be done, and everything is...
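
    Given the fix landed in 15.2.14, a sensible guard before starting the Pacific upgrade is to confirm what every daemon is actually running:

        # reports the ceph version of each running mon/mgr/osd/mds;
        # all of them should show 15.2.14 or later before moving to 16.2.x
        ceph versions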
  6. ceph 16.2 pacific cluster crash (Waschbüsch)

    Hi all, after an upgrade (on Friday night) to Proxmox 7.x and Ceph 16.2, everything seemed to work perfectly. Then sometime early this morning (Sunday), the cluster crashed. 17 out of 24 OSDs will no longer start; most of them pass a ceph-bluestore-tool fsck, but some have an...
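
    For reference, the fsck mentioned above is run offline against each OSD's data directory; a sketch, with OSD id 0 as a placeholder:

        systemctl stop ceph-osd@0
        # read-only consistency check of the BlueStore instance
        ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
        # attempt a repair only if fsck reported fixable errors
        ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0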
