create OSD -> always "partitions"

fips

Hi,

I wanted to add a new SSD and make an OSD out of it, but after 'Create: OSD' it is still marked as 'partitions'.
OK, it's not the first time in my life, so I issued
Code:
dd if=/dev/zero of=/dev/sdd count=1000000
followed by
Code:
ceph-disk zap /dev/sdd
But it still shows 'partitions'...
I deleted all partitions with fdisk and parted, and issued dd and ceph-disk zap a few more times, but somehow this stupid disk is immune to all my commands.

Any ideas how I can force this disk to become an OSD?
 
The kernel may not have picked up the changed partition table. Try partprobe to rescan, or reboot the node. Our current code handles disks if they have an empty GPT; it even does a 'dd' to zero them before use.
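If the disk keeps showing up as 'partitions' after that, a more thorough wipe followed by a fresh empty GPT sometimes helps; a minimal sketch, assuming the disk really is /dev/sdd and contains nothing you still need:
Code:
# remove all filesystem, RAID and partition-table signatures
wipefs --all /dev/sdd
# destroy any GPT/MBR structures, then write a fresh empty GPT
sgdisk --zap-all /dev/sdd
sgdisk --clear /dev/sdd
# make the kernel re-read the (now empty) partition table
partprobe /dev/sdd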
 
OK, I tried both partprobe and a reboot, but that SSD is still not turning into an OSD...

Now I tried to add a completely new SSD, but I receive the same result.

Could the node itself be the faulty part?

What can I try, besides reinstalling Proxmox? ;-)
 
Check 'ceph osd tree' and 'ceph auth list' for any leftovers, and go through the Ceph logs, as they may tell what the reason is.
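For reference, these are the two checks; a leftover entry from a previously removed OSD would show up in one of them:
Code:
# CRUSH hierarchy -- lists all OSDs the cluster knows about
ceph osd tree
# authentication entities -- stale osd.* keys linger here
ceph auth list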
 
Hmm... I tried to add the SSD again and checked the output of syslog:
Code:
Feb 13 11:10:00 ceph8 systemd[1]: Starting Proxmox VE replication runner...
Feb 13 11:10:01 ceph8 systemd[1]: Started Proxmox VE replication runner.
Feb 13 11:10:52 ceph8 pvedaemon[2536]: <root@pam> starting task UPID:ceph8:000A1915:006EB3E6:5C63ED2C:cephcreateosd:sdd:root@pam:
Feb 13 11:10:55 ceph8 kernel: [72552.761352]  sdd:
Feb 13 11:10:55 ceph8 kernel: [72552.976273]  sdd:
Feb 13 11:10:56 ceph8 kernel: [72553.171153]  sdd:
Feb 13 11:10:57 ceph8 kernel: [72554.314392]  sdd: sdd1
Feb 13 11:10:57 ceph8 kernel: [72554.783774]  sdd: sdd1
Feb 13 11:10:57 ceph8 kernel: [72554.957242]  sdd: sdd1
Feb 13 11:10:59 ceph8 kernel: [72556.101199]  sdd: sdd1 sdd2
Feb 13 11:10:59 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:10:59 ceph8 sh[661922]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f81e5ab11b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:10:59 ceph8 sh[661922]: command: Running command: /sbin/init --version
Feb 13 11:10:59 ceph8 sh[661922]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:10:59 ceph8 sh[661922]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:10:59 ceph8 sh[661922]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:10:59 ceph8 sh[661922]: main_trigger: trigger /dev/sdd2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 347a4aa7-a42f-42b6-affa-a2b2d98f9ed6
Feb 13 11:10:59 ceph8 sh[661922]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdd2
Feb 13 11:10:59 ceph8 kernel: [72556.972537]  sdd: sdd1 sdd2
Feb 13 11:11:00 ceph8 systemd[1]: Starting Proxmox VE replication runner...
Feb 13 11:11:00 ceph8 sh[661922]: main_trigger:
Feb 13 11:11:00 ceph8 sh[661922]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdd2 uuid path is /sys/dev/block/8:50/dm/uuid
Feb 13 11:11:00 ceph8 sh[661922]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:00 ceph8 sh[661922]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdd2
Feb 13 11:11:00 ceph8 sh[661922]: get_space_osd_uuid: Block /dev/sdd2 has OSD UUID 00000000-0000-0000-0000-000000000000
Feb 13 11:11:00 ceph8 sh[661922]: main_activate_space: activate: OSD device not present, not starting, yet
Feb 13 11:11:00 ceph8 systemd[1]: Started Ceph disk activation: /dev/sdd2.
Feb 13 11:11:00 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:11:00 ceph8 kernel: [72557.442507]  sdd: sdd1 sdd2
Feb 13 11:11:00 ceph8 sh[662018]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fa7babfd1b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /sbin/init --version
Feb 13 11:11:00 ceph8 sh[662018]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:00 ceph8 sh[662018]: main_trigger: trigger /dev/sdd2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 347a4aa7-a42f-42b6-affa-a2b2d98f9ed6
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdd2
Feb 13 11:11:00 ceph8 systemd[1]: Started Proxmox VE replication runner.
Feb 13 11:11:00 ceph8 systemd[1]: Stopped Ceph disk activation: /dev/sdd2.
Feb 13 11:11:00 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:11:00 ceph8 sh[662018]: main_trigger:
Feb 13 11:11:00 ceph8 sh[662018]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdd2 uuid path is /sys/dev/block/8:50/dm/uuid
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:00 ceph8 sh[662018]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdd2
Feb 13 11:11:00 ceph8 sh[662018]: get_space_osd_uuid: Block /dev/sdd2 has OSD UUID 00000000-0000-0000-0000-000000000000
Feb 13 11:11:00 ceph8 sh[662018]: main_activate_space: activate: OSD device not present, not starting, yet
Feb 13 11:11:01 ceph8 sh[662056]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f467b0861b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:01 ceph8 sh[662056]: command: Running command: /sbin/init --version
Feb 13 11:11:01 ceph8 sh[662056]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:11:01 ceph8 sh[662056]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:01 ceph8 sh[662056]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:01 ceph8 sh[662056]: main_trigger: trigger /dev/sdd2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 347a4aa7-a42f-42b6-affa-a2b2d98f9ed6
Feb 13 11:11:01 ceph8 sh[662056]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdd2
Feb 13 11:11:01 ceph8 sh[662056]: main_trigger:
Feb 13 11:11:01 ceph8 sh[662056]: main_trigger: Traceback (most recent call last):
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Feb 13 11:11:01 ceph8 sh[662056]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5735, in run
Feb 13 11:11:01 ceph8 sh[662056]:     main(sys.argv[1:])
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5686, in main
Feb 13 11:11:01 ceph8 sh[662056]:     args.func(args)
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5436, in <lambda>
Feb 13 11:11:01 ceph8 sh[662056]:     func=lambda args: main_activate_space(name, args),
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4139, in main_activate_space
Feb 13 11:11:01 ceph8 sh[662056]:     raise Error('%s does not exist' % args.dev)
Feb 13 11:11:01 ceph8 sh[662056]: ceph_disk.main.Error: Error: /dev/sdd2 does not exist
Feb 13 11:11:01 ceph8 sh[662056]: Traceback (most recent call last):
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Feb 13 11:11:01 ceph8 sh[662056]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Feb 13 11:11:01 ceph8 kernel: [72558.593107]  sdd: sdd1 sdd2
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5735, in run
Feb 13 11:11:01 ceph8 sh[662056]:     main(sys.argv[1:])
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5686, in main
Feb 13 11:11:01 ceph8 sh[662056]:     args.func(args)
Feb 13 11:11:01 ceph8 sh[662056]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4889, in main_trigger
Feb 13 11:11:01 ceph8 sh[662056]:     raise Error('return code ' + str(ret))
Feb 13 11:11:01 ceph8 sh[662056]: ceph_disk.main.Error: Error: return code 1
Feb 13 11:11:01 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 11:11:01 ceph8 systemd[1]: Failed to start Ceph disk activation: /dev/sdd2.
Feb 13 11:11:01 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Unit entered failed state.
Feb 13 11:11:01 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Failed with result 'exit-code'.
Feb 13 11:11:01 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:11:02 ceph8 sh[662126]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fb26ac2f1b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /sbin/init --version
Feb 13 11:11:02 ceph8 sh[662126]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:02 ceph8 sh[662126]: main_trigger: trigger /dev/sdd2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 347a4aa7-a42f-42b6-affa-a2b2d98f9ed6
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdd2
Feb 13 11:11:02 ceph8 kernel: [72559.460452]  sdd: sdd1 sdd2
Feb 13 11:11:02 ceph8 sh[662126]: main_trigger:
Feb 13 11:11:02 ceph8 sh[662126]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdd2 uuid path is /sys/dev/block/8:50/dm/uuid
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:02 ceph8 sh[662126]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdd2
Feb 13 11:11:02 ceph8 sh[662126]: get_space_osd_uuid: Block /dev/sdd2 has OSD UUID 00000000-0000-0000-0000-000000000000
Feb 13 11:11:02 ceph8 sh[662126]: main_activate_space: activate: OSD device not present, not starting, yet
Feb 13 11:11:02 ceph8 systemd[1]: Started Ceph disk activation: /dev/sdd2.
Feb 13 11:11:02 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:11:02 ceph8 kernel: [72559.892763] XFS (sdd1): Mounting V5 Filesystem
Feb 13 11:11:02 ceph8 kernel: [72559.917185] XFS (sdd1): Ending clean mount
Feb 13 11:11:02 ceph8 kernel: [72560.012408] XFS (sdd1): Unmounting Filesystem
Feb 13 11:11:03 ceph8 sh[662200]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fe8250911b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:03 ceph8 sh[662200]: command: Running command: /sbin/init --version
Feb 13 11:11:03 ceph8 sh[662200]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:11:03 ceph8 sh[662200]: /bin/chown: cannot access '/dev/sdd2': No such file or directory
Feb 13 11:11:03 ceph8 sh[662200]: Traceback (most recent call last):
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Feb 13 11:11:03 ceph8 sh[662200]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5735, in run
Feb 13 11:11:03 ceph8 sh[662200]:     main(sys.argv[1:])
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5686, in main
Feb 13 11:11:03 ceph8 sh[662200]:     args.func(args)
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4788, in main_trigger
Feb 13 11:11:03 ceph8 sh[662200]:     command_check_call(['chown', 'ceph:ceph', args.dev])
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 566, in command_check_call
Feb 13 11:11:03 ceph8 sh[662200]:     return subprocess.check_call(arguments)
Feb 13 11:11:03 ceph8 sh[662200]:   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
Feb 13 11:11:03 ceph8 sh[662200]:     raise CalledProcessError(retcode, cmd)
Feb 13 11:11:03 ceph8 kernel: [72560.115531]  sdd: sdd1 sdd2
Feb 13 11:11:03 ceph8 sh[662200]: subprocess.CalledProcessError: Command '['/bin/chown', 'ceph:ceph', '/dev/sdd2']' returned non-zero exit status 1
Feb 13 11:11:03 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 11:11:03 ceph8 systemd[1]: Failed to start Ceph disk activation: /dev/sdd2.
Feb 13 11:11:03 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Unit entered failed state.
Feb 13 11:11:03 ceph8 systemd[1]: ceph-disk@dev-sdd2.service: Failed with result 'exit-code'.
Feb 13 11:11:03 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd2...
Feb 13 11:11:03 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd1...
Feb 13 11:11:03 ceph8 sh[662255]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f5599ca91b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:03 ceph8 sh[662255]: command: Running command: /sbin/init --version
Feb 13 11:11:03 ceph8 sh[662259]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f8aa67241b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:03 ceph8 sh[662259]: command: Running command: /sbin/init --version
Feb 13 11:11:03 ceph8 sh[662255]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd2
Feb 13 11:11:03 ceph8 sh[662259]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd1
Feb 13 11:11:03 ceph8 sh[662255]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:03 ceph8 sh[662259]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:03 ceph8 sh[662259]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:03 ceph8 sh[662255]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:03 ceph8 sh[662259]: main_trigger: trigger /dev/sdd1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:03 ceph8 sh[662259]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdd1
Feb 13 11:11:03 ceph8 sh[662255]: main_trigger: trigger /dev/sdd2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 347a4aa7-a42f-42b6-affa-a2b2d98f9ed6
Feb 13 11:11:03 ceph8 sh[662255]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdd2
Feb 13 11:11:04 ceph8 kernel: [72561.157286] XFS (sdd1): Mounting V5 Filesystem
Feb 13 11:11:04 ceph8 kernel: [72561.167243] XFS (sdd1): Ending clean mount
Feb 13 11:11:04 ceph8 kernel: [72561.562459] XFS (sdd1): Unmounting Filesystem
Feb 13 11:11:04 ceph8 sh[662259]: main_trigger:
Feb 13 11:11:04 ceph8 sh[662259]: main_trigger: main_activate: path = /dev/sdd1
Feb 13 11:11:04 ceph8 sh[662259]: get_dm_uuid: get_dm_uuid /dev/sdd1 uuid path is /sys/dev/block/8:49/dm/uuid
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdd1
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
Feb 13 11:11:04 ceph8 sh[662259]: mount: Mounting /dev/sdd1 on /var/lib/ceph/tmp/mnt.ef06kJ with options noatime,inode64
Feb 13 11:11:04 ceph8 sh[662259]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdd1 /var/lib/ceph/tmp/mnt.ef06kJ
Feb 13 11:11:04 ceph8 sh[662259]: activate: Cluster uuid is c4d0e591-a919-4df0-8627-d2fda956f7ff
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
Feb 13 11:11:04 ceph8 sh[662259]: activate: Cluster name is ceph
Feb 13 11:11:04 ceph8 sh[662259]: activate: OSD uuid is 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:04 ceph8 sh[662259]: allocate_osd_id: Allocating OSD id...
Feb 13 11:11:04 ceph8 sh[662259]: command: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 13 11:11:04 ceph8 sh[662259]: __init__: stderr
Feb 13 11:11:04 ceph8 sh[662259]: command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:04 ceph8 sh[662259]: command_with_stdin:
Feb 13 11:11:04 ceph8 sh[662259]: command_with_stdin: Error EEXIST: entity osd.1 exists but key does not match
Feb 13 11:11:04 ceph8 sh[662259]: mount_activate: Failed to activate
Feb 13 11:11:04 ceph8 sh[662259]: unmount: Unmounting /var/lib/ceph/tmp/mnt.ef06kJ
Feb 13 11:11:04 ceph8 sh[662259]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ef06kJ
Feb 13 11:11:04 ceph8 sh[662259]: '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'418b141a-6d97-4739-8c43-58cc40424e82']' failed with status code 17
Feb 13 11:11:04 ceph8 sh[662259]: Traceback (most recent call last):
Feb 13 11:11:04 ceph8 sh[662259]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Feb 13 11:11:04 ceph8 sh[662259]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Feb 13 11:11:04 ceph8 sh[662259]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5735, in run
Feb 13 11:11:04 ceph8 sh[662259]:     main(sys.argv[1:])
Feb 13 11:11:04 ceph8 sh[662259]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5686, in main
Feb 13 11:11:04 ceph8 sh[662259]:     args.func(args)
Feb 13 11:11:04 ceph8 sh[662259]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4889, in main_trigger
Feb 13 11:11:04 ceph8 sh[662259]:     raise Error('return code ' + str(ret))
Feb 13 11:11:04 ceph8 sh[662259]: ceph_disk.main.Error: Error: return code 1
Feb 13 11:11:04 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 11:11:04 ceph8 systemd[1]: Failed to start Ceph disk activation: /dev/sdd1.
Feb 13 11:11:04 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Unit entered failed state.
Feb 13 11:11:04 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Failed with result 'exit-code'.
Feb 13 11:11:04 ceph8 sh[662255]: main_trigger:
Feb 13 11:11:04 ceph8 sh[662255]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdd2 uuid path is /sys/dev/block/8:50/dm/uuid
Feb 13 11:11:04 ceph8 sh[662255]: command: Running command: /sbin/blkid -o udev -p /dev/sdd2
Feb 13 11:11:04 ceph8 sh[662255]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdd2
Feb 13 11:11:04 ceph8 sh[662255]: get_space_osd_uuid: Block /dev/sdd2 has OSD UUID 00000000-0000-0000-0000-000000000000
Feb 13 11:11:04 ceph8 sh[662255]: main_activate_space: activate: OSD device not present, not starting, yet
Feb 13 11:11:04 ceph8 systemd[1]: Started Ceph disk activation: /dev/sdd2.
Feb 13 11:11:04 ceph8 pvedaemon[2536]: <root@pam> end task UPID:ceph8:000A1915:006EB3E6:5C63ED2C:cephcreateosd:sdd:root@pam: OK
Feb 13 11:11:04 ceph8 systemd[1]: Starting Ceph disk activation: /dev/sdd1...
Feb 13 11:11:05 ceph8 sh[662393]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdd1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fd5b8da41b8>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Feb 13 11:11:05 ceph8 sh[662393]: command: Running command: /sbin/init --version
Feb 13 11:11:05 ceph8 sh[662393]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdd1
Feb 13 11:11:05 ceph8 sh[662393]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:05 ceph8 sh[662393]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:05 ceph8 sh[662393]: main_trigger: trigger /dev/sdd1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:05 ceph8 sh[662393]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdd1
Feb 13 11:11:05 ceph8 kernel: [72562.658428] XFS (sdd1): Mounting V5 Filesystem
Feb 13 11:11:05 ceph8 kernel: [72562.668806] XFS (sdd1): Ending clean mount
Feb 13 11:11:06 ceph8 kernel: [72563.070439] XFS (sdd1): Unmounting Filesystem
Feb 13 11:11:06 ceph8 sh[662393]: main_trigger:
Feb 13 11:11:06 ceph8 sh[662393]: main_trigger: main_activate: path = /dev/sdd1
Feb 13 11:11:06 ceph8 sh[662393]: get_dm_uuid: get_dm_uuid /dev/sdd1 uuid path is /sys/dev/block/8:49/dm/uuid
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /sbin/blkid -o udev -p /dev/sdd1
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdd1
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
Feb 13 11:11:06 ceph8 sh[662393]: mount: Mounting /dev/sdd1 on /var/lib/ceph/tmp/mnt.vP2I5s with options noatime,inode64
Feb 13 11:11:06 ceph8 sh[662393]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdd1 /var/lib/ceph/tmp/mnt.vP2I5s
Feb 13 11:11:06 ceph8 sh[662393]: activate: Cluster uuid is c4d0e591-a919-4df0-8627-d2fda956f7ff
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
Feb 13 11:11:06 ceph8 sh[662393]: activate: Cluster name is ceph
Feb 13 11:11:06 ceph8 sh[662393]: activate: OSD uuid is 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:06 ceph8 sh[662393]: allocate_osd_id: Allocating OSD id...
Feb 13 11:11:06 ceph8 sh[662393]: command: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 13 11:11:06 ceph8 sh[662393]: __init__: stderr
Feb 13 11:11:06 ceph8 sh[662393]: command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 418b141a-6d97-4739-8c43-58cc40424e82
Feb 13 11:11:06 ceph8 sh[662393]: command_with_stdin:
Feb 13 11:11:06 ceph8 sh[662393]: command_with_stdin: Error EEXIST: entity osd.1 exists but key does not match
Feb 13 11:11:06 ceph8 sh[662393]: mount_activate: Failed to activate
Feb 13 11:11:06 ceph8 sh[662393]: unmount: Unmounting /var/lib/ceph/tmp/mnt.vP2I5s
Feb 13 11:11:06 ceph8 sh[662393]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.vP2I5s
Feb 13 11:11:06 ceph8 sh[662393]: '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'418b141a-6d97-4739-8c43-58cc40424e82']' failed with status code 17
Feb 13 11:11:06 ceph8 sh[662393]: Traceback (most recent call last):
Feb 13 11:11:06 ceph8 sh[662393]:   File "/usr/sbin/ceph-disk", line 11, in <module>
Feb 13 11:11:06 ceph8 sh[662393]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
Feb 13 11:11:06 ceph8 sh[662393]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5735, in run
Feb 13 11:11:06 ceph8 sh[662393]:     main(sys.argv[1:])
Feb 13 11:11:06 ceph8 sh[662393]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5686, in main
Feb 13 11:11:06 ceph8 sh[662393]:     args.func(args)
Feb 13 11:11:06 ceph8 sh[662393]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4889, in main_trigger
Feb 13 11:11:06 ceph8 sh[662393]:     raise Error('return code ' + str(ret))
Feb 13 11:11:06 ceph8 sh[662393]: ceph_disk.main.Error: Error: return code 1
Feb 13 11:11:06 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 11:11:06 ceph8 systemd[1]: Failed to start Ceph disk activation: /dev/sdd1.
Feb 13 11:11:06 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Unit entered failed state.
Feb 13 11:11:06 ceph8 systemd[1]: ceph-disk@dev-sdd1.service: Failed with result 'exit-code'.

The log says that osd.1 exists but the key does not match...
So I checked the 'ceph osd tree' output, but there is no osd.1.
Then I checked 'ceph auth list', and there is an entry for osd.1.
But I can't remove it:
Code:
ceph osd out 1
osd.1 does not exist.

ceph osd destroy 1 --yes-i-really-mean-it
osd.1 does not exist


How can I kick it out?
 
If the OSD is really not in use anymore, you can remove the leftover entry with 'ceph auth del osd.1'.
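A minimal sketch, assuming osd.1 really is gone from the cluster (it is absent from 'ceph osd tree'); after deleting the key, zap the disk and re-create the OSD:
Code:
# remove the stale auth entity that makes 'osd new' fail with EEXIST
ceph auth del osd.1
# verify the entry is gone (this should now report an error)
ceph auth get osd.1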
 
