Ceph node on new HW

stefanodm

New Member
May 20, 2023
Hi everyone, maybe this question has already been answered, but I can't find anything definitive, so I'll ask the community.
We have a Proxmox cluster with 7 nodes, of which 3 nodes act purely as Ceph nodes (without any VM or CT).
One of these 3 nodes has different hardware than the other 2, but the OSD hard drives are the same, and I would like to replace the hardware to make the nodes exactly identical. My idea is to simply replace the mainboard, CPU, RAM, HBA controller and network cards, and keep the same hard drives (boot and OSD).
I would reconfigure the network offline, restore exactly the original network configuration on the new cards, and boot; in my opinion this should cause no problems. Proxmox should detect the new hardware and start as if it had simply been rebooted, but I'm probably missing something, especially from the Ceph point of view. Can you brief me about it?


proxmox-ve: 7.4-1
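
The Ceph side of my plan would roughly be the following (assuming the boot and OSD disks really do come back untouched); noout is just the usual precaution to keep Ceph from rebalancing while the node is away:

    # before shutting the node down: stop Ceph from marking its OSDs out and rebalancing
    ceph osd set noout

    # ...swap the hardware, restore the network config, boot the node...

    # check that the node, its OSDs and (if it runs one) its monitor came back
    ceph -s
    ceph osd tree

    # re-enable normal recovery behaviour
    ceph osd unset noout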
 
Hi,
If you want a very clean solution, you can:

1) Turn that node off
2) Delete that node from the Proxmox cluster
3) Change your hardware (mainboard, CPU, RAM, HBA controller and network cards; keep the same hard drives)
4) Re-install Proxmox (you can restore /etc/network/interfaces exactly as it was, but pay attention to the NIC names; see the sketch right after this list)
5) Join that node into the Proxmox cluster
6) Install Ceph
7) Create the OSDs (a command sketch for steps 5-7 follows further below)
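
For step 4, a quick way to spot renamed NICs on the new hardware is something like the following (the config entries here are only placeholders; Proxmox VE 7 uses ifupdown2, so ifreload can apply the corrected config):

    # list the NIC names the new hardware actually got
    ip -br link

    # compare them against the names referenced in the restored config
    grep -E 'iface|bridge-ports|bond-slaves' /etc/network/interfaces

    # edit /etc/network/interfaces to match the new names, then apply
    ifreload -a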

I think that is the cleanest approach,
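
Steps 5-7 map to the usual pvecm/pveceph commands; the node name, peer IP and device path below are placeholders, and the zap step is only needed if a reused OSD disk still carries its old LVM metadata:

    # on an existing cluster node, after the old node is powered off and removed:
    pvecm delnode <old_node_name>

    # on the freshly re-installed node: join the cluster
    pvecm add <ip_of_an_existing_cluster_node>

    # install the Ceph packages on the new node
    pveceph install

    # if a reused disk still has the old OSD's LVM metadata, wipe it first
    ceph-volume lvm zap /dev/sdX --destroy

    # then create the OSD on that disk (repeat per disk)
    pveceph osd create /dev/sdX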

MM
 
That means all OSDs on that node must be destroyed before deleting the node from the cluster, if I'm not mistaken, doesn't it?
 
Let me expand on point 2):
  • Delete that node from the Proxmox cluster
  • After that, you will still see its OSDs (in the down state) in the Ceph configuration
  • Stop and destroy all of those OSDs, one by one (a consolidated sketch follows below):
    • ceph osd rm osd.X
    • ceph auth del osd.X
    • ceph osd crush remove osd.X
    • ceph osd crush rm <node_name>
    • ceph mon remove <node_name>
    • Check whether the monitor IP is still listed in /etc/pve/ceph.conf
My tip is to try this process in a pre-production environment first, so you are more confident, and then do it in production.
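
Put together as commands, with X and <node_name> as placeholders, the sequence from the list above looks like this:

    # repeat the first three lines for every OSD that lived on the removed node
    ceph osd rm osd.X            # remove the (down) OSD from the cluster map
    ceph auth del osd.X          # drop its cephx key
    ceph osd crush remove osd.X  # remove it from the CRUSH map

    # once its OSDs are gone, remove the now-empty host bucket from CRUSH
    ceph osd crush rm <node_name>

    # if the node also ran a monitor, remove it from the monmap
    ceph mon remove <node_name>

    # and check for leftover references to that monitor
    grep -n '<node_name_or_monitor_ip>' /etc/pve/ceph.conf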

MM
 
Thank you mate. Precious.
 
