Dell H700 RAID card: OSD does not appear in the UI as bluestore

danielc
Feb 28, 2018
Hello.

We ran into a problem: we create a virtual disk on the H700 and then build an OSD on it for our Ceph cluster.

We found that unless we specifically build the OSD as filestore instead of bluestore, the OSD will not appear in the OSD UI.

We can use the command ceph osd crush add osd.0 0 host=ceph1 to bring it back into the UI, but it then shows as down/out with Used 0, and clicking In and Start makes no difference.

However, if we run pveceph createosd /dev/sdb -bluestore 0, the OSD appears instantly without any problem (up/in with Used 0.0.3).

Is this because the RAID card affects the MBR and GPT on the virtual disk, so that it cannot function correctly?

Has anyone else had this experience?
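In case it helps others verify this hypothesis: GPT writes an "EFI PART" signature at LBA 1, i.e. byte offset 512 on 512-byte-sector disks, so leftover metadata can be detected with plain dd and grep. A minimal sketch, rehearsed here on an image file rather than a real device (disk.img is just for illustration; on the real system the target would be the virtual disk, e.g. /dev/sdb, read-only so the check is safe):

```shell
# Illustration only: a small image file stands in for the virtual disk.
truncate -s 1M disk.img

# Simulate leftover GPT metadata: GPT puts the "EFI PART" signature at
# LBA 1, i.e. byte offset 512 on 512-byte-sector disks.
printf 'EFI PART' | dd of=disk.img bs=1 seek=512 conv=notrunc status=none

# Check for the signature; a match means stale GPT metadata is present.
if dd if=disk.img bs=1 skip=512 count=8 status=none | grep -q 'EFI PART'; then
    echo "GPT signature found"
else
    echo "no GPT signature"
fi
```

For a fuller diagnosis of mismatched MBR/GPT state, gdisk -l on the device prints the same "invalid backup GPT header" warnings seen in the logs below without modifying anything.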
 
So if we run this command:

root@ceph1:~# pveceph createosd /dev/sdb
create OSD on /dev/sdb (bluestore)
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6400 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0, rmapbt=0, reflink=0
data = bsize=4096 blocks=25600, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=864, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
The operation has completed successfully.

Results are in the attached screenshots.
 

Attachments: 1.png, 2.png
And if we run:
root@ceph1:~# pveceph createosd /dev/sdc -bluestore 0
create OSD on /dev/sdc (xfs)
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdc1 isize=2048 agcount=4, agsize=243826623 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0, rmapbt=0, reflink=0
data = bsize=4096 blocks=975306491, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=476223, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
The operation has completed successfully.
 

Attachments: 3.png
And if we now add the invisible osd.0 back with:
root@ceph1:~# ceph osd crush add osd.0 0 host=ceph1
add item id 0 name 'osd.0' weight 0 at location {host=ceph1} to crush map

One thing is for sure: the OSD does not seem to be detected when it is created as bluestore.
How can we solve this?
Thanks
 

Attachments: 4.png
Just solved this problem by clearing the metadata:
dd if=/dev/zero of=/dev/sdb bs=1024 count=1M
pveceph createosd /dev/sdb

It works normally now. The problem is solved.
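For reference, what the zero-fill does can be rehearsed safely on an image file before pointing it at a real disk (disk.img and the sizes here are illustrative, not from the fix above, which targeted /dev/sdb). Zeroing the start of the disk destroys the MBR and the primary GPT header, which is what was confusing the OSD creation:

```shell
# Illustration only: an image file stands in for the virtual disk,
# seeded with a fake stale GPT signature at byte offset 512.
truncate -s 4M disk.img
printf 'EFI PART' | dd of=disk.img bs=1 seek=512 conv=notrunc status=none

# Zero the front of the "disk", wiping the MBR and primary GPT header
# (the fix above zeroed 1 GiB; a few MiB already covers the metadata
# at the start of the disk).
dd if=/dev/zero of=disk.img bs=1M count=4 conv=notrunc status=none

# Confirm the GPT signature is gone.
if dd if=disk.img bs=1 skip=512 count=8 status=none | grep -q 'EFI PART'; then
    echo "signature still present"
else
    echo "wiped"
fi
```

Note that GPT also keeps a backup header at the end of the disk, which a front-only wipe leaves behind; sgdisk --zap-all /dev/sdb or wipefs -a /dev/sdb remove both copies and are worth considering as alternatives.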
 
