Hello all,
I've already reinstalled a cluster member and added it first to the Proxmox cluster and then to the Ceph cluster. That all went well.
But now I want to add a newly inserted HDD as an OSD to the pool, and I get an error.
I deleted an OSD called "osd.4" in the past. Is this the reason for this fault?
Can I recycle the name and reuse it?
Code:
create OSD on /dev/sdc (bluestore)
wipe disk/partition: /dev/sdc
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.267941 s, 783 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f3aa4fa0-6e85-4232-96a6-b64996501817
stderr: 2020-02-24 11:25:44.525 7feef0ba7700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
stderr: 2020-02-24 11:25:44.525 7feef0ba7700 -1 AuthRegistry(0x7feeec080e98) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: Error EEXIST: entity osd.4 exists but key does not match
--> RuntimeError: Unable to create a new OSD id
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 9ddb2f42-b83b-4bdd-a8fc-47e16f6665b0 --data /dev/sdc' failed: exit code 1
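For reference, here is the removal sequence I believe is needed to fully retire an OSD so its id can be reused. This is my assumption from the standard ceph CLI, not a verified recipe, and the id `4` is just my case:

```shell
# Sketch of a complete OSD removal (assumption: standard ceph CLI, OSD id 4).
# Mark the OSD out so its data is rebalanced away.
ceph osd out osd.4
# Stop the daemon on the host that carried the OSD.
systemctl stop ceph-osd@4
# Remove it from the CRUSH map.
ceph osd crush remove osd.4
# Delete its cephx key -- skipping this step is what leaves a stale
# "entity osd.4 exists but key does not match" entry behind.
ceph auth del osd.4
# Finally remove the OSD id itself.
ceph osd rm osd.4
```

If I missed `ceph auth del osd.4` back then, would that explain the EEXIST error above?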