I have a 4-node Proxmox cluster on which I set up Ceph Squid 19.2.0. I am just getting things going, so only Proxmox and Ceph are running on the nodes so far. I created an OSD on each of nodes 1, 2, and 3 and Ceph was reporting HEALTH_OK. I did not go any further, so there are no pools created; only the Ceph monitors, manager and OSDs. Before going further I want to change the hard drive on node 2, so I started the process of removing that OSD using the following series of commands.
Before I started, osd.0 (the one I am working on) showed a status of active+clean and in/up.
Commands issued:
ceph osd out osd.0 (result: the OSD was marked out and stayed up, so good so far)
systemctl stop ceph-osd@osd.0 (result: the OSD status still shows as out and up, and active+clean+remapped)
I read that in small clusters it can be helpful to set the weight to zero before stopping the OSD, so I marked it back in, a status change that Ceph confirmed; however, the state (the PGs, I guess) is still reporting active+clean+remapped.
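To be clear, the weight-to-zero step I read about would, as I understand it, look something like this (I am quoting the documented command here, not necessarily something I have run correctly myself):
ceph osd crush reweight osd.0 0 (set the CRUSH weight to zero so data drains off before stopping the daemon)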
The problem is that I cannot get the status to go to down after the OSD is marked out.
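For reference, the status I am describing comes from re-running the usual checks after each step; roughly (paraphrasing from memory, nothing exotic):
ceph osd tree (shows the up/down and in/out state per OSD)
ceph -s (overall health and PG states)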
In this setup there is no stored data so far, no services actively generating data, and no Ceph pools created yet.
Hopefully someone can help me correct things so I can cleanly remove this OSD.
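In case it clarifies what I mean by "cleanly remove", the sequence I am aiming for is pieced together from the Ceph documentation (so please correct me if any step is wrong):
ceph osd out osd.0
systemctl stop ceph-osd@osd.0 (this is the step that does not seem to take effect)
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0
I am also aware that Proxmox has pveceph osd destroy, but I would like to understand why the OSD will not go down first.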