I am only using ceph-csi-rbd
Also, I'd be curious if all of us are using Kasten for backups.
I can't run rbd trash purge without it failing with:

Removing images: 29% complete...failed.
rbd: some expired images could not be removed
Ensure that they are closed/unmapped, do not have snapshots (including trashed snapshots with linked clones), are not in a group and were moved to the trash successfully.

To get past that, I first purge the snapshots on all images whose names contain "snap":
rbd -c /etc/pve/ceph.conf --cluster ceph ls <pool> | grep snap | xargs -l rbd -c /etc/pve/ceph.conf --cluster ceph --pool <pool> snap purge
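If the purge still refuses to run after that, it can help to look at what is actually left in the trash and whether the remaining images still have watchers or snapshots. A rough way to check (pool and image names are placeholders, not from this thread):

# see what is still sitting in the trash for the pool
rbd -p <pool> trash ls
# check a regular image for open watchers (it must be closed/unmapped)
rbd -p <pool> status <image>
# and for leftover snapshots
rbd -p <pool> snap ls <image>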
Switching the ReplicationSource copyMethod to Direct doesn't help in the long run. Reported this upstream in the meantime: https://tracker.ceph.com/issues/72713
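For context, the ReplicationSource here is presumably VolSync's. A minimal sketch of what switching the copyMethod to Direct looks like (the names, namespace and the restic mover are my assumptions, not taken from this thread):

kubectl apply -f - <<'EOF'
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: example-source          # hypothetical name
  namespace: default
spec:
  sourcePVC: example-pvc        # hypothetical PVC backed by ceph-csi-rbd
  trigger:
    schedule: "0 * * * *"
  restic:
    repository: example-restic-secret
    # Direct mounts the live PVC instead of creating an RBD snapshot/clone first
    copyMethod: Direct
EOF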
This is quite tricky to track down, unfortunately; but hey, we've got a bit of an idea now at least.
Just out of curiosity, are there any users here that are only using either one of the Ceph CSI drivers, but not both? (So either only ceph-csi-rbd or only ceph-csi-cephfs.)
To be able to back up my Kubernetes workloads again for the time being, I set up a Debian Bookworm container within Proxmox, installed ceph-mgr in it, and added it to the cluster. Its version is also 19.2.3, matching the Proxmox Ceph cluster. The manager running in the Debian Bookworm container does not experience these segfault crashes, which lets me back up my workloads for now.
# add the upstream Ceph Squid (19.x) repository for Bookworm and its release key
apt-get install software-properties-common
apt-add-repository 'deb https://download.ceph.com/debian-squid/ bookworm main'
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E84AC2C0460F3994
apt update
apt install ceph-mgr

# copy the cluster config and admin keyring over from one of the Proxmox nodes
cd /etc/ceph/
scp source-pve-ip:/etc/ceph/ceph.client.admin* .
scp source-pve-ip:/etc/ceph/ceph.conf .
# create an auth key for the new mgr and put it into its keyring
export name=cephmgr1
ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'
mkdir /var/lib/ceph/mgr/ceph-cephmgr1
nano /var/lib/ceph/mgr/ceph-cephmgr1/keyring  # <-- paste the key printed above
#...
[mgr.cephmgr1]
key = xxxxxxxxxxxxxxxxxxxxxxxxxx
#...
# start mgr daemon
ceph-mgr -i $name
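A quick way to check that the new manager actually registered with the cluster (standard Ceph CLI, run anywhere the admin keyring is available):

# the standalone mgr should show up as active or as a standby
ceph mgr stat
ceph -s | grep 'mgr:'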
# hourly crontab entries: purge the RBD trash for the k8s-prod pool, then
# clear any failed units and restart the mgr target in case it segfaulted again
10 * * * * /usr/bin/rbd trash purge k8s-prod && sleep 60 && /usr/bin/systemctl reset-failed && /usr/bin/systemctl restart ceph-mgr.target
15 * * * * /usr/bin/systemctl reset-failed && /usr/bin/systemctl restart ceph-mgr.target

# archive and clean up the accumulated crash reports
ceph crash archive-all
ceph crash prune 0