Problem creating Ceph OSD

Apr 2, 2018
I have a cluster of 8 nodes, and when I try to create a Ceph OSD it fails.

The proxmox version is:

pveversion
pve-manager/5.1-51/96be5354 (running kernel: 4.13.16-2-pve)

The ceph version is:

dpkg -l | grep ceph
ii ceph 12.2.4-pve1 amd64 distributed storage and file system
ii ceph-base 12.2.4-pve1 amd64 common ceph daemon libraries and management tools
ii ceph-common 12.2.4-pve1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mgr 12.2.4-pve1 amd64 manager for the ceph distributed storage system
ii ceph-mon 12.2.4-pve1 amd64 monitor server for the ceph storage system
ii ceph-osd 12.2.4-pve1 amd64 OSD server for the ceph storage system
ii libcephfs2 12.2.4-pve1 amd64 Ceph distributed file system client library
ii python-cephfs 12.2.4-pve1 amd64 Python 2 libraries for the Ceph libcephfs library

When I try to create an OSD I get this:

pveceph createosd /dev/sdb

command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 432.

create OSD on /dev/sdb (bluestore)

***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************

GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6400 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0, rmapbt=0, reflink=0
data = bsize=4096 blocks=25600, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=864, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
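
The warning near the end says the kernel is still using the old partition table, so the freshly created OSD partitions may not be visible yet. A minimal follow-up sketch, assuming the same /dev/sdb as above (not something from the original post), would be:

# ask the kernel to re-read the partition table without rebooting
partprobe /dev/sdb
# confirm the newly created partitions are visible
lsblk /dev/sdb
# on Luminous, ceph-disk can show whether the OSD was prepared/activated
ceph-disk list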

The output of ceph -s is this:

ceph -s
cluster:
id: f9f5a65c-2013-4ab2-912a-1a084cd5a58b
health: HEALTH_OK

services:
mon: 3 daemons, quorum node1-1,node1-3,node1-5
mgr: node1-5(active), standbys: node1-1
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs:

I tried installing zfsutils-linux, but that didn't help.
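
For reference, the zpool error at the top comes from Proxmox's disk scanner not finding /sbin/zpool, and the createosd output above continues past it, so it does not look like the blocker by itself. A quick check (commands assumed, not taken from the original post) to confirm the package actually provides the binary:

# verify the package is installed and the binary the error complains about exists
dpkg -l | grep zfsutils
which zpool
/sbin/zpool list -HPLv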

Could anyone help me to solve this problem?

Thanks in advance!
 
Overwrite the first 200 MB of the disk with dd; it seems there is some leftover data on it.
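
A hedged sketch of that suggestion, using the device name from the post above (double-check the target disk before running anything destructive; the ceph-disk step is an added alternative, not part of the original reply):

# wipe the first 200 MB so no stale GPT/MBR/filesystem signatures remain
dd if=/dev/zero of=/dev/sdb bs=1M count=200
# alternatively, zap the disk with ceph's own tooling (Luminous still ships ceph-disk)
ceph-disk zap /dev/sdb
# then retry the OSD creation
pveceph createosd /dev/sdb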
 
