Hi all,
I have a node in one of my clusters whose disk is completely full, and I can't work out what's taking up the space, so I'd like to reformat it, reinstall Proxmox, and rejoin it to my cluster.
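For what it's worth, I've tried hunting for the space hog from the shell with things like

    du -xh --max-depth=1 / | sort -h

but nothing obvious turns up, so at this point I'd rather just start fresh.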
It also has 8 OSDs which are part of the cluster's Ceph storage.
What is the best way to remove this node and then get it back into both the Proxmox cluster and the Ceph cluster? Every command I run from the GUI fails because there's no space left on the device to write configuration changes, so I can't remove those OSDs from the Ceph cluster that way.
Could I do the following?
1. Power off the node.
2. Remove it from the Proxmox cluster with the pvecm delnode command (my guess at the exact command is below).
3. Remove its 8 OSDs from another, working node (not sure if this is possible or would work? rough commands below).
4. Reinstall Proxmox.
5. Rejoin the cluster.
6. Blank the disks and re-add them as OSDs to the Ceph cluster (again, sketched below).
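For step 2, I'm assuming it's just this, run from one of the healthy nodes while the bad one is powered off (the node name is a placeholder):

    pvecm delnode <nodename>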
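For step 3, since the bad node can't write anything locally, I'm guessing I'd run something like this from any node with a working ceph CLI, once per OSD (the IDs are placeholders):

    ceph osd out <id>
    ceph osd crush remove osd.<id>
    ceph auth del osd.<id>
    ceph osd rm <id>

Do I need to wait for the rebalance to settle after marking them out before removing them, or can I take all 8 out in one go?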
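And for step 6, after the reinstall and rejoin, my guess per disk would be roughly this (device names are placeholders, and I'm not sure whether ceph-volume or the older ceph-disk is the right tool on the Ceph version that ships with PVE 5):

    ceph-volume lvm zap /dev/sdX --destroy
    pveceph createosd /dev/sdX

Please correct me if any of the above is wrong.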
One more thing: the working nodes are running PVE 5.3-11, and the only ISO I can find is 5.4-1. Would having the bad node on 5.4-1 while the others run 5.3-11 cause any issues?
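Or would it be safer to first bring the working nodes up to 5.4 with the usual

    apt update
    apt dist-upgrade

on each of them before reinstalling the bad one?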
Thanks in advance, and sorry for all the questions, I just don't want to mess up the cluster as several critical systems run on it.