[SOLVED] Ceph Cannot add OSD

Mihai
PVE Versions:

Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-4
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-1-pve: 4.15.18-15
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-1-pve: 4.13.8-27
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph: 12.2.5-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-35
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-28
pve-container: 2.0-24
pve-docs: 5.2-4
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

So to give the short story:
  • The original Proxmox + Ceph installation works great.
  • I wanted to create two device classes, ssd and hdd, for separate Ceph pools.
  • With the original rule, the SSDs were added to the default pool.
  • I added replicated_hdd and replicated_ssd rules and deleted the default replicated_rule (see the rule-creation sketch after the crush map below).
Now when I try to add the SSD OSDs, they stay down and out and do not appear in the crush map:

Code:
create OSD on /dev/sdg (bluestore)
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdg1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
TASK OK
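
Despite the TASK OK at the end, the new OSD never comes up. The symptom can be confirmed from any node (a sketch; <ID> stands for the new OSD's number):

Code:
# a healthy OSD appears under its host bucket in the tree;
# here the new one stays down/out or is missing entirely
ceph osd tree
ceph osd dump | grep "osd.<ID>"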

LOG:

Code:
....................................
OTHER STUFF HERE
..................................
2018-07-16 17:04:51.381325 7f7c5a41b700 -1 osd.13 7294038 *** Got signal Terminated ***
2018-07-16 17:04:51.381356 7f7c5a41b700  0 osd.13 7294038 prepare_to_stop telling mon we are shutting down
2018-07-16 17:04:52.209714 7f7c6d4b3700  0 osd.13 7294038 got_stop_ack starting shutdown
2018-07-16 17:04:52.209778 7f7c5a41b700  0 osd.13 7294038 prepare_to_stop starting shutdown
2018-07-16 17:04:52.209804 7f7c5a41b700 -1 osd.13 7294038 shutdown
2018-07-16 17:04:52.212268 7f7c76473700  0 -- 10.10.1.13:6819/3527 >> 10.10.1.16:6810/3073 conn(0x559484361000 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=11 cs=1 l=0).handle_connect_reply connect got BADAUTHORIZER
2018-07-16 17:04:52.212475 7f7c76473700  0 -- 10.10.1.13:6819/3527 >> 10.10.1.16:6810/3073 conn(0x559484361000 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=11 cs=1 l=0).handle_connect_reply connect got BADAUTHORIZER
2018-07-16 17:04:54.227749 7f7c5a41b700  1 bluestore(/var/lib/ceph/osd/ceph-13) umount
2018-07-16 17:04:54.264484 7f7c5a41b700  1 stupidalloc 0x0x55947de2e0e0 shutdown
2018-07-16 17:04:54.264513 7f7c5a41b700  1 freelist shutdown
2018-07-16 17:04:54.264581 7f7c5a41b700  4 rocksdb: [/home/builder/source/ceph-12.2.5/src/rocksdb/db/db_impl.cc:217] Shutdown: canceling all background work
2018-07-16 17:04:54.270817 7f7c5a41b700  4 rocksdb: [/home/builder/source/ceph-12.2.5/src/rocksdb/db/db_impl.cc:343] Shutdown complete
2018-07-16 17:04:54.273217 7f7c5a41b700  1 bluefs umount
2018-07-16 17:04:54.273239 7f7c5a41b700  1 stupidalloc 0x0x55947de2fa40 shutdown
2018-07-16 17:04:54.273289 7f7c5a41b700  1 bdev(0x55947d9fd200 /var/lib/ceph/osd/ceph-13/block) close
2018-07-16 17:04:54.529396 7f7c5a41b700  1 bdev(0x55947d9fcd80 /var/lib/ceph/osd/ceph-13/block) close
2018-07-17 13:24:41.889970 7f9f49503e00  0 set uid:gid to 64045:64045 (ceph:ceph)
2018-07-17 13:24:41.890000 7f9f49503e00  0 ceph version 12.2.5 (dfcb7b53b2e4fcd2a5af0240d4975adc711ab96e) luminous (stable), process (unknown), pid 387561
2018-07-17 13:24:41.896046 7f9f49503e00  1 bluestore(/var/lib/ceph/tmp/mnt.xPDHRN) mkfs path /var/lib/ceph/tmp/mnt.xPDHRN
2018-07-17 13:24:41.896458 7f9f49503e00  1 bluestore(/var/lib/ceph/tmp/mnt.xPDHRN) mkfs already created
2018-07-17 13:24:41.896481 7f9f49503e00  1 bluestore(/var/lib/ceph/tmp/mnt.xPDHRN) _fsck repair (shallow) start
2018-07-17 13:24:41.896590 7f9f49503e00  1 bdev create path /var/lib/ceph/tmp/mnt.xPDHRN/block type kernel
2018-07-17 13:24:41.896611 7f9f49503e00  1 bdev(0x55d23e79ab40 /var/lib/ceph/tmp/mnt.xPDHRN/block) open path /var/lib/ceph/tmp/mnt.xPDHRN/block
2018-07-17 13:24:41.897052 7f9f49503e00  1 bdev(0x55d23e79ab40 /var/lib/ceph/tmp/mnt.xPDHRN/block) open size 299333824512 (0x45b1afb000, 278 GB) block_size 4096 (4096 B) rotational
2018-07-17 13:24:41.897212 7f9f49503e00 -1 bluestore(/var/lib/ceph/tmp/mnt.xPDHRN/block) _check_or_set_bdev_label bdev /var/lib/ceph/tmp/mnt.xPDHRN/block fsid 73216be4-1fa5-41f9-bf67-25ec253c3939 does not match our fsid 2e8d4cea-e31f-4695-a112-6bb6f01953c4
2018-07-17 13:24:41.897228 7f9f49503e00  1 bdev(0x55d23e79ab40 /var/lib/ceph/tmp/mnt.xPDHRN/block) close
2018-07-17 13:24:42.209484 7f9f49503e00 -1 bluestore(/var/lib/ceph/tmp/mnt.xPDHRN) mkfs fsck found fatal error: (5) Input/output error
2018-07-17 13:24:42.209551 7f9f49503e00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error
2018-07-17 13:24:42.209784 7f9f49503e00 -1 [0;31m ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.xPDHRN: (5) Input/output error[0m
[... the same mkfs attempt repeats, and fails identically, at 13:24:44, 13:24:46, 13:24:48, 13:40:32, 13:56:54, 14:15:21 and 14:28:52, differing only in pid and /var/lib/ceph/tmp/mnt.* path; each run finds the stale bdev label fsid 73216be4-1fa5-41f9-bf67-25ec253c3939 and aborts with "mkfs fsck found fatal error: (5) Input/output error" ...]
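
The telling line in each failed attempt is the _check_or_set_bdev_label mismatch: the disk still carries the BlueStore label of an OSD that previously lived on it (fsid 73216be4-1fa5-41f9-bf67-25ec253c3939), so mkfs's shallow fsck bails out with the I/O error. Assuming the stock Luminous tooling, the stale label can be inspected before wiping; the device path is a guess at the block partition ceph-disk created:

Code:
# dump the leftover bluestore label still present on the device
ceph-bluestore-tool show-label --dev /dev/sdg2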

Ceph global config:

Code:
[global]
     auth client required = none
     auth cluster required = none
     auth service required = none
     cluster network = 10.10.1.0/24
     debug asok = 0/0
     debug auth = 0/0
     debug buffer = 0/0
     debug client = 0/0
     debug context = 0/0
     debug crush = 0/0
     debug filer = 0/0
     debug filestore = 0/0
     debug finisher = 0/0
     debug heartbeatmap = 0/0
     debug journal = 0/0
     debug journaler = 0/0
     debug lockdep = 0/0
     debug mds = 0/0
     debug mds balancer = 0/0
     debug mds locker = 0/0
     debug mds log = 0/0
     debug mds log expire = 0/0
     debug mds migrator = 0/0
     debug mon = 0/0
     debug monc = 0/0
     debug ms = 0/0
     debug objclass = 0/0
     debug objectcacher = 0/0
     debug objecter = 0/0
     debug optracker = 0/0
     debug osd = 0/0
     debug paxos = 0/0
     debug perfcounter = 0/0
     debug rados = 0/0
     debug rbd = 0/0
     debug rgw = 0/0
     debug throttle = 0/0
     debug timer = 0/0
     debug tp = 0/0
     fsid = 54da8900-a9db-4a57-923c-a62dbec8c82a
     keyring = /etc/pve/priv/$cluster.$name.keyring
     mon allow pool delete = true
     osd journal size = 5120
     osd pool default min size = 2
     osd pool default size = 3
     public network = 10.10.1.0/24

[mds]
     keyring = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a/keyring
     mds data = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a

[mds.VMHost2]
     host = VMHost2

[mds.VMHost4]
     host = VMHost4

[mds.VMHost3]
     host = VMHost3

[osd]
     keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.VMHost4]
     host = VMHost4
     mon addr = 10.10.1.14:6789

[mon.VMHost3]
     host = VMHost3
     mon addr = 10.10.1.13:6789

[mon.VMHost2]
     host = VMHost2
     mon addr = 10.10.1.16:6789

Ceph Crush Map:

Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host VMHost2 {
    id -3        # do not change unnecessarily
    id -4 class hdd        # do not change unnecessarily
    id -9 class ssd        # do not change unnecessarily
    # weight 14.556
    alg straw2
    hash 0    # rjenkins1
    item osd.0 weight 3.639
    item osd.1 weight 3.639
    item osd.6 weight 3.639
    item osd.9 weight 3.639
}
host VMHost3 {
    id -5        # do not change unnecessarily
    id -6 class hdd        # do not change unnecessarily
    id -10 class ssd        # do not change unnecessarily
    # weight 14.556
    alg straw2
    hash 0    # rjenkins1
    item osd.2 weight 3.639
    item osd.3 weight 3.639
    item osd.8 weight 3.639
    item osd.11 weight 3.639
}
host VMHost4 {
    id -7        # do not change unnecessarily
    id -8 class hdd        # do not change unnecessarily
    id -11 class ssd        # do not change unnecessarily
    # weight 14.556
    alg straw2
    hash 0    # rjenkins1
    item osd.4 weight 3.639
    item osd.5 weight 3.639
    item osd.7 weight 3.639
    item osd.10 weight 3.639
}
root default {
    id -1        # do not change unnecessarily
    id -2 class hdd        # do not change unnecessarily
    id -12 class ssd        # do not change unnecessarily
    # weight 43.669
    alg straw2
    hash 0    # rjenkins1
    item VMHost2 weight 14.556
    item VMHost3 weight 14.556
    item VMHost4 weight 14.556
}

# rules
rule replicated_hdd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
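
For reference, class-aware rules like replicated_hdd and replicated_ssd above are normally created with Luminous's built-in helper and then assigned to the pools; a minimal sketch (the pool names are placeholders):

Code:
# create-replicated <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# point each pool at its matching rule
ceph osd pool set pool_hdd crush_rule replicated_hdd
ceph osd pool set pool_ssd crush_rule replicated_ssd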
 
I also ran into not being able to create bluestore OSDs today. About two days ago I created a test cluster and everything worked normally. Today, when I created a test cluster, I was unable to create OSDs, either via the CLI or the GUI.

Eventually I tried creating a filestore OSD, and that worked. Next I destroyed the filestore OSD, cleaned the disk, and attempted to create a bluestore OSD on that same disk. The bluestore OSD creation failed again. Creating a filestore OSD worked, as did creating other filestore OSDs on other disks.
 
I did forget to mention that I properly removed the OSD as in the instructions on that link:

Code:
ceph osd out <ID>
service ceph stop osd.<ID>
ceph osd crush remove osd.<ID>
ceph auth del osd.<ID>
ceph osd rm <ID>

Then I deleted the partitions.
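
Deleting the partitions only rewrites the partition table; the old BlueStore label at the start of the former block partition survives, which is presumably why the fsid mismatch persisted. For reference, a sketch of that cleanup step (sgdisk and the device path are assumptions):

Code:
# wipe the GPT and backup GPT; partition contents, including
# the old bluestore label, are left untouched on disk
sgdisk --zap-all /dev/sdX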

That did not work.

What eventually did work was writing zeroes to the disk:

Code:
sudo dd if=/dev/zero of=/dev/sdX bs=1M
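
Zeroing the whole disk can take hours on a large drive. Assuming the default ceph-disk layout (a small ~100 MB data partition followed by the block partition), the stale label sits near the start of the disk, so a shorter variant should be enough (the count and device are assumptions):

Code:
# overwrite the first ~200 MB: GPT, the small xfs data partition,
# and the start of the block partition where the label lives
dd if=/dev/zero of=/dev/sdX bs=1M count=200
sgdisk --zap-all /dev/sdX   # also clear the backup GPT at the end of the disk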

Thank you so much for your replies; they gave me a clue as to what the issue was.
 