Removing Ceph Completely

dthompson

Well-Known Member
Nov 23, 2011
Canada
www.digitaltransitions.ca
I just upgraded from 5.4 to 6.1 and all went well except for Ceph. I'm OK with that, as I was only using it to play around with.
What I would like to know is the best way to remove the entire existing cluster, purge everything to do with Ceph, and then set it up again from scratch.

I removed all the monitors except the last one, which I cannot remove. I have also removed all the existing data on the Ceph drives. I just can't seem to get it clean again; every time I go to set it up via the wizard, my existing node still shows up.

I am looking to remove it all, but am missing something. If anyone can give me pointers and directions, that would be great!
 
Hi,

"pveceph purge" will remove all Ceph settings.
It must be called on every node.
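Running the purge on every node can be scripted. A dry-run sketch with placeholder node names (pve1..pve3 are assumptions; substitute your own) that prints the per-node command instead of executing it:

```shell
# Placeholder node names; drop the echo to actually run the purge over SSH.
for node in pve1 pve2 pve3; do
  echo "ssh root@${node} pveceph purge"
done
```

Remove the echo only once you are sure you want every node cleaned.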
 
Any updates on this? I'm totally at a loss as to how to move forward with a Ceph install from scratch (and/or purging everything Ceph-related and starting again). I get the same error message as the OP when trying to do anything after pveceph init (pveceph createmon or pveceph purge).
 
Hi,

"pveceph purge" will remove all Ceph settings.
It must be called on every node.
Nah, pveceph purge just wipes the Ceph cluster on all servers if run from one. Which is kinda epic, since the documentation does not state this. It even does a poor job of removing Ceph, leaving you with a broken installation where once again you have to Proxmox-tinker your way to a solution, because even the devs give wrong info.

When are you coming to restore my 30T cluster for me? It was wiped from 6 nodes by running pveceph purge from server #6, which had failed to join the existing 5-node cluster the Proxmox way... naturally it failed, why would it work out of the box? Subsequently, trying to wipe that single node per the documentation wiped the entire cluster...

Madness!
 
Hi all. I found this works well to completely remove ceph and config

Code:
# stop all Ceph services
systemctl stop ceph-mon.target
systemctl stop ceph-mgr.target
systemctl stop ceph-mds.target
systemctl stop ceph-osd.target
# remove the systemd units and kill any remaining daemons
rm -rf /etc/systemd/system/ceph*
killall -9 ceph-mon ceph-mgr ceph-mds
# remove the daemon data directories
rm -rf /var/lib/ceph/mon/  /var/lib/ceph/mgr/  /var/lib/ceph/mds/
# purge the Proxmox Ceph configuration
pveceph purge
# remove the packages
apt purge ceph-mon ceph-osd ceph-mgr ceph-mds
apt purge ceph-base ceph-mgr-modules-core
# remove leftover config files
rm -rf /etc/ceph/*
rm -rf /etc/pve/ceph.conf
rm -rf /etc/pve/priv/ceph.*
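A hedged sanity check to run afterwards; the paths are the ones the commands above delete, and the script only reports, changing nothing:

```shell
# Count leftovers from the purge steps above; read-only.
leftover=0
for p in /etc/pve/ceph.conf /etc/ceph/ceph.conf /var/lib/ceph/mon; do
  if [ -e "$p" ]; then
    echo "still present: $p"
    leftover=$((leftover + 1))
  fi
done
echo "ceph leftovers found: $leftover"
```

A cleanly purged node should report 0.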
 
(quoting the purge commands above)
Big help here! I'm having problems with "Multiple IPs for ceph public network detected on host1: use 'mon-address' to specify one of them. (500)" when trying to install Ceph (I took out the IPs); basically I have bonds for public and cluster. Ran your commands and removed Ceph. Still not resolved, but at least I now know all the steps to purge it.
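For what it's worth, the error message itself names the fix: pass one of the public-network IPs via the mon-address option when creating the monitor. A dry-run sketch (the IP is a placeholder, not taken from the post):

```shell
# Placeholder IP; printed rather than executed. On a real node you would run
# the pveceph command directly, with one IP from your public network.
mon_ip="10.0.0.11"
echo "pveceph mon create --mon-address ${mon_ip}"
```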
 
I've been trying for hours to get this gone from my setup. This should help others.

My Proxmox nodes keep trying to mount a cephfs pool, even after following the above steps.

Logs

Jan 08 02:51:30 proxmox-node systemd[1]: Failed to mount /mnt/pve/cephfs.
Jan 08 02:51:30 proxmox-node pvestatd[978]: mount error: Job failed. See "journalctl -xe" for details.
Jan 08 02:51:40 proxmox-node pvestatd[978]: Supplied ceph config doesn't exist, /etc/pve/ceph.conf
Jan 08 02:51:40 proxmox-node systemd[1]: Reloading.
Jan 08 02:51:40 proxmox-node systemd[1]: Mounting /mnt/pve/cephfs...
Jan 08 02:51:40 proxmox-node mount[6065]: global_init: unable to open config file from search list /etc/pve/ceph.conf
Jan 08 02:51:40 proxmox-node mount[6068]: global_init: unable to open config file from search list /etc/pve/ceph.conf
Jan 08 02:51:40 proxmox-node mount[6064]: unable to determine mon addresses
Jan 08 02:51:40 proxmox-node systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=234/n/a
Jan 08 02:51:40 proxmox-node systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.


Find this file:

/etc/pve/storage.cfg

and delete the "rbd" and "cephfs" entries.

Code:
alorelei@proxmox-node:/etc/pve$ sudo cat storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: ceph-pool
        content rootdir,images
        krbd 0
        pool ceph-pool

cephfs: cephfs
        path /mnt/pve/cephfs
        content vztmpl,backup,iso
        fs-name cephfs
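If you prefer not to hand-edit the file, the stanzas can be stripped non-interactively: entries start in column 0 and their properties are indented, so a small awk filter can drop the rbd/cephfs blocks. A sketch working on a temporary copy of the sample config above (don't filter /etc/pve/storage.cfg in place without a backup):

```shell
# Write a sample storage.cfg like the one in the post to a temp file, then
# filter out any stanza whose header starts with "rbd:" or "cephfs:".
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

rbd: ceph-pool
        content rootdir,images
        krbd 0
        pool ceph-pool

cephfs: cephfs
        path /mnt/pve/cephfs
        content vztmpl,backup,iso
        fs-name cephfs
EOF
awk '/^[^[:space:]]/ { drop = ($0 ~ /^(rbd|cephfs):/) } !drop' "$cfg"
rm -f "$cfg"
```

The filter re-evaluates `drop` at every stanza header (a line starting in column 0) and suppresses lines while it is set, so only the non-Ceph entries are printed.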
 
(quoting the purge commands above)

copy-paste-enter-forget:

Bash:
systemctl stop ceph-mon.target
systemctl stop ceph-mgr.target
systemctl stop ceph-mds.target
systemctl stop ceph-osd.target
rm -rf /etc/systemd/system/ceph*
killall -9 ceph-mon ceph-mgr ceph-mds
rm -rf /var/lib/ceph/mon/  /var/lib/ceph/mgr/  /var/lib/ceph/mds/
pveceph purge
apt-get purge ceph-mon ceph-osd ceph-mgr ceph-mds -y
apt-get purge ceph-base ceph-mgr-modules-core -y
rm -rf /etc/ceph/* /etc/pve/ceph.conf /etc/pve/priv/ceph.*
apt-get autoremove -y

But then you still have the LVM config present on your system. Only lvremove can take a bash glob/wildcard; vgremove can at least use bash completion, and for pvremove you must select the correct disk yourself, so this part needs more attention:
Bash:
lvremove -y /dev/ceph*
vgremove -y ceph-<press-tab-for-bash-completion>
pvremove /dev/nvme1n1   # replace with the OSD disk on your node
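The volume groups can also be found programmatically instead of tab-completing: Ceph OSD volume groups are named `ceph-<uuid>`. A dry-run sketch using simulated `vgs --noheadings -o vg_name` output (the UUID-like names below are made up), printing the removal commands instead of running them:

```shell
# Simulated vgs output; on a real node use: vgs --noheadings -o vg_name
vg_list="pve
ceph-0f3a2c1d
ceph-9b8e7d6c"
# Print a vgremove command for each ceph-* volume group (dry run).
printf '%s\n' "$vg_list" | grep '^ceph-' | while read -r vg; do
  echo "vgremove -y $vg"
done
```

Drop the echo (and feed it real vgs output) only after double-checking the list.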
 
