If I understand you correctly, the node p3 is currently down? Will it ever be back or will it be replaced by a freshly installed one?

Because the old machine has been destroyed, it will be replaced by a new machine; the name will still be P3.
Okay. For Proxmox VE you should follow the guide on how to remove a node from the cluster if you have not done so yet: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_remove_a_cluster_node. Also check out the note at the end mentioning that stored SSH fingerprints need to be cleaned manually.

Since the old node is not present anymore, any steps regarding systemd services should be obsolete. The one thing you need to do manually is to remove the mon from Ceph:
Code:
ceph mon remove {mon-id}
With that, it will hopefully be gone.
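For reference, a minimal sketch of that cleanup, run on one of the surviving nodes (p3 is the destroyed node's mon id from this thread; adjust it for your cluster):

Code:
# list the monitors the cluster still knows about
ceph mon dump
# remove the monitor of the destroyed node from the monmap
ceph mon remove p3
# verify the cluster no longer lists it
ceph -s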
root@p1:~# pveceph mon destroy p3
But I modified the ceph.conf, and it still can't be cleared.

Are there remnants in the /etc/pve/ceph.conf file? IP addresses, config sections?
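Just as an illustration of what such remnants could look like (the node name p3 and the addresses below are only placeholders), you could check with:

Code:
# search the cluster-wide Ceph config for the old node
grep -n p3 /etc/pve/ceph.conf
# typical leftovers would be its address in mon_host, e.g.
#   mon_host = 10.0.0.1 10.0.0.2 10.0.0.3
# and a dedicated section such as
#   [mon.p3]
#        public_addr = 10.0.0.3
# removing those lines is enough; /etc/pve/ceph.conf is replicated to all nodes automatically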
Because we don't know where the configuration path on PVE is; otherwise we could find it and delete it. But I recently researched and found that PVE limits the cluster to at least a few OSDs and monitors. This problem means you cannot delete it in the end, which would mean that setting up Ceph on PVE is irreversible. There is currently no official solution.

root@virt01:/var/lib/ceph# pveceph createmon --monid virt01 --mon-address 10.0.0.101
monitor 'virt01' already exists
Did you check the hints mentioned earlier in the thread? Is the systemd unit still enabled, and is the mon still mentioned in the Ceph config? Is the directory of the mon still present (/var/lib/ceph/ceph-mon/*)? What does ceph -s report? Did you run systemctl disable ceph-mon.target?
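To go through those hints on the affected node, something along these lines should work (mon id virt01 as in the post above; the data directory path assumed here is the usual /var/lib/ceph/mon/ceph-<id> location):

Code:
# is there still a mon data directory lying around?
ls -ld /var/lib/ceph/mon/ceph-virt01
# is the systemd instance for the mon still enabled or running?
systemctl status ceph-mon@virt01
# current cluster state
ceph -s
# if the unit is still active although the mon was removed from the cluster:
systemctl disable --now ceph-mon@virt01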
@zisain What do you get if you run ceph mon dump on the remaining nodes?

Do I understand it correctly that you also have a few OSDs still lingering around from the destroyed node? If so, have a look at the Ceph documentation on how to manually remove OSDs from the cluster's crush map: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#removing-the-osd

Code:
ceph osd purge {id} --yes-i-really-mean-it
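A short sketch of that OSD cleanup (the OSD id 3 and the host bucket name p3 are only examples; take the real ids from your own ceph osd tree output):

Code:
# find the OSDs that still belong to the destroyed node
ceph osd tree
# purge one leftover OSD (removes it from the crush map, auth keys and osd map)
ceph osd purge 3 --yes-i-really-mean-it
# if the now-empty host bucket is still shown in the crush map, remove it as well
ceph osd crush remove p3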
In the environment at that time, I have used Ceph's commands:
rm -rf /var/lib/ceph/mon/ceph-test/
TASK ERROR: command '/bin/systemctl start ceph-mon@test' failed: exit code 1
/bin/systemctl daemon-reload
/bin/systemctl enable ceph-mon@test
/bin/systemctl start ceph-mon@test
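In case someone else lands in the same state: when ceph-mon@<id> refuses to start because a half-removed monitor is still lying around, a possible cleanup before recreating it could look like this (mon id test as in the task log above; make sure the mon really is not part of the quorum before deleting anything):

Code:
# stop and disable the failing unit and clear its failed state
systemctl disable --now ceph-mon@test
systemctl reset-failed ceph-mon@test
# remove the mon from the monmap if it is still listed there
ceph mon dump
ceph mon remove test
# remove the stale data directory
rm -rf /var/lib/ceph/mon/ceph-test
# then recreate the monitor through Proxmox VE
pveceph createmon --monid test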