entity osd.5 exists but key does not match

jsterr

Renowned Member
Jul 24, 2020
root@pve01:~# pveceph osd create /dev/sdh
create OSD on /dev/sdh (bluestore)
wipe disk/partition: /dev/sdh
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.574679 s, 365 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 07cebf46-433f-421b-9493-0719348668b9
stderr: 2021-06-02T15:31:18.547+0200 7ff241618700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
stderr: 2021-06-02T15:31:18.547+0200 7ff241618700 -1 AuthRegistry(0x7ff23c0596e0) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: Error EEXIST: entity osd.5 exists but key does not match
--> RuntimeError: Unable to create a new OSD id
command 'ceph-volume lvm create --cluster-fsid 640b4633-5d72-4ab2-8142-3a06a332ebd3 --data /dev/sdh' failed: exit code 1

It took ages to create the OSD on /dev/sdh, so I stopped the command after a few minutes. After that I was not able to recreate the OSD.
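
I assume the aborted run had already registered a cephx key for osd.5; something like this should show whether the entry is still around (just a guess on my part):

ceph auth get osd.5    # prints the key/caps if a stale osd.5 entry exists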


root@pve01:~# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         17.46544  root default
-3         17.46544      host pve01
 0    ssd   3.49309          osd.0       up   1.00000  1.00000
 1    ssd   3.49309          osd.1       up   1.00000  1.00000
 2    ssd   3.49309          osd.2       up   1.00000  1.00000
 3    ssd   3.49309          osd.3       up   1.00000  1.00000
 4    ssd   3.49309          osd.4       up   1.00000  1.00000

root@pve01:~# ceph -v
ceph version 15.2.11 (64a4c04e6850c6d9086e4c37f57c4eada541b05e) octopus (stable)

Any tips? The disk is GPT-formatted (I ran sgdisk -u R /dev/sdh). I also did a wipefs -af /dev/sdh beforehand and removed the old LVM volumes with lvremove/vgremove/pvremove.
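
For reference, the cleanup I did was roughly this (the VG/LV names are placeholders for whatever ceph-volume created on the disk):

wipefs -af /dev/sdh            # drop filesystem/partition signatures
lvremove <ceph-vg>/<osd-lv>    # remove the leftover OSD logical volume
vgremove <ceph-vg>             # remove its volume group
pvremove /dev/sdh              # remove the physical volume label

ceph-volume lvm zap --destroy /dev/sdh would presumably do the same in one step.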
 
Ah, I should have used Google (I did try, but wasn't lucky the first time).

ceph auth del osd.5

did it. I don't know why this error happened; it's the first time it has happened on one of my deployments.
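
My guess is that the aborted ceph-volume run had already created the cephx key for osd.5, so the retry generated a new key and hit the mismatch. In case someone else runs into this, the rough sequence would be something like this (osd id and device are from my case; only the auth del step was actually needed here):

ceph auth del osd.5            # remove the stale key left by the aborted run
ceph osd rm 5                  # only needed if osd.5 also shows up in ceph osd tree
pveceph osd create /dev/sdh    # recreate the OSD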