How to remove Ceph safely from 1 node?

Kaboom

Active Member
Mar 5, 2019
I have a cluster with, say, 10 nodes running the latest versions of Ceph/Proxmox. On 1 of the nodes I want to remove Ceph, but the node must still be available in Proxmox.

How can I do this safely, without deleting important data that is for example in /etc/pve ?

Thanks!
 
What does the node do for Ceph in the cluster? Does it host monitors, managers, or OSDs? Or was it just a plain client?

How much of ceph do you want to remove, so to say, and why? :)
 
It is a node with OSDs, but I want to take it out of the Ceph cluster and remove Ceph from it. It will then run stand-alone, with no manager or monitor on it.

Thanks :)
 
It is a node with OSDs, but I want to take it out of the Ceph cluster and remove Ceph from it.

So simply stopping and destroying the OSDs is enough here.
That can all be done through the web interface in Proxmox VE 6 (probably in 5 too, but I'm not 100% sure at the moment).
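If you prefer the CLI, the same steps can be sketched roughly like this. This is only a sketch: OSD ID 3 is a placeholder, and you should verify cluster health between steps before destroying anything.

```shell
# Sketch only -- OSD ID "3" is a placeholder for your actual OSD ID.
# Run on the node that hosts the OSD.

# Mark the OSD out so Ceph rebalances its data onto the other nodes:
ceph osd out 3

# Check cluster status; wait until it is healthy again before continuing:
ceph -s

# Stop the OSD daemon on this node:
systemctl stop ceph-osd@3.service

# Destroy the OSD; --cleanup also wipes the backing disk (Proxmox VE 6):
pveceph osd destroy 3 --cleanup
```

Repeat for each OSD on the node, one at a time, letting the cluster recover in between so you never drop below your replication minimum.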
 
Yes, I did this already, but how do I remove the Ceph data from this node? I know I can leave it there, but it's almost 2020 and it's good to clean house ;)
 
The Ceph data is, or better said was, on the OSDs, so if you destroyed them the disks are free to go. Just zap them and you can re-use them.
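Zapping a former OSD disk can be done with ceph-volume, for example like this. The device name /dev/sdb is only a placeholder; double-check which disk backed the OSD before running anything destructive.

```shell
# Example only -- /dev/sdb is a placeholder. Triple-check the device first!
lsblk                                   # verify which disk backed the OSD

# Remove the Ceph LVM metadata and wipe the partition table on the disk:
ceph-volume lvm zap /dev/sdb --destroy
```

After that the disk shows up as unused and can be re-partitioned or re-used for anything else.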

What else do you want to delete? :) A small set of Ceph packages is a dependency of Proxmox VE anyway, and removing the rest would only free up a few MB, so I'm not sure what you mean.
 
I just wipe them with gdisk >> x >> z >> y >> y, then reboot, and the disk will lose its label. If that's what you're referring to.
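For reference, that interactive gdisk sequence (expert menu, then zap) can also be done non-interactively with sgdisk. Again, /dev/sdb is only a placeholder device:

```shell
# Non-interactive equivalent of gdisk's expert "zap" (x >> z >> y >> y).
# /dev/sdb is a placeholder -- make absolutely sure it's the right disk!
sgdisk --zap-all /dev/sdb
```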
 
Will that also remove the ceph_ssd icon in Proxmox?
 
Do you mean the "Ceph OSD" entry in the usage column of the Node → Disk panel?
Yes, that would go away. Nowadays, with 6.0 you can also check the "Cleanup Disk" box when destroying an OSD, and the backing disk should get zapped automatically:
[Screenshot: destroying an OSD in the Proxmox VE web interface]

Sorry about all these questions, but I am afraid of removing something critical on the other nodes (I have had bad experiences with that).

No worries, data loss is no joke.
As long as you make sure you operate on the correct node, and then on the correct devices, you really should be good.
 
