[SOLVED] Unable to create OSD

GPLExpert

Hello,

We upgraded from Proxmox 4.4, and from Hammer to Luminous, successfully.

My Ceph cluster is healthy.

I'm using an SSD for the journal.

I tried to migrate one OSD to BlueStore:
- set it out
- stop it
- wait for the rebalance to finish
- destroy it
- create the new OSD

=> The OSD was created, but the WAL size was only 1 GB.
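As far as I understand, 1 GB is just the partition size ceph-disk picks by default; the DB/WAL sizes can be overridden in ceph.conf before creating the OSD. A sketch (the values below are examples only, in bytes):

Code:
[global]
    # example sizes: 50 GB DB, 2 GB WAL - tune for your hardware
    bluestore_block_db_size = 53687091200
    bluestore_block_wal_size = 2147483648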

I deleted it again.

On the CLI, I deleted the partitions on the data disk and on the journal disk.

I created the OSD again, but in FileStore mode => I want to have time to really understand how BlueStore works.

Creation went fine: no errors.

Code:
create OSD on /dev/sdb (xfs)
using device '/dev/nvme0n1' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Setting name!
partNum is 4
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=244188597 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=976754385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=476930, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
TASK OK

I can see the new partition on each disk, but I don't see the new OSD in /var/lib/ceph/osd/
(I should see a new osd.0).
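To double-check, these standard commands show what ceph-disk actually prepared:

Code:
ls /var/lib/ceph/osd/    # mount points of activated OSDs
ceph-disk list           # partitions and their Ceph roles, per device
ceph osd tree            # OSDs the cluster knows about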

Why?

Nicolas
 
You need to zero the first ~200 MB, as there are leftovers.

EDIT: maybe you need to remove orphan entries in Ceph too.
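Something along these lines, assuming the data disk is /dev/sdb and the old OSD was osd.0 (adjust for your setup):

Code:
# wipe leftover metadata at the start of the disk (destructive!)
dd if=/dev/zero of=/dev/sdb bs=1M count=200
# then remove any orphan entries for the old OSD
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0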
 
Same problem here. I played around with the new setup, destroyed the OSDs, and wanted to start with a clean setup...
Now I can't add the OSDs; on the Ceph status page they are shown as down and out... :-(
 
OK, I managed to create the OSD again.

Code:
$ ceph osd out <ID>                # mark the OSD out so data migrates off it

$ service ceph stop osd.<ID>       # stop the OSD daemon
$ ceph osd crush remove osd.<ID>   # remove it from the CRUSH map
$ ceph auth del osd.<ID>           # delete its authentication key
$ ceph osd rm <ID>                 # remove the OSD id from the cluster

Then I deleted the partitions on my disk and ran this command:
Code:
ceph osd crush remove osd.<ID>

After that, I created the new OSD in the GUI.
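For reference, the CLI equivalent should be something like this on PVE 5 (the exact flags can differ between versions; /dev/sdb and /dev/nvme0n1 are just my devices):

Code:
# FileStore OSD with the journal on the NVMe SSD
pveceph createosd /dev/sdb -bluestore 0 -journal_dev /dev/nvme0n1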
 
Hello,
I cannot add more than 14 OSDs to my cluster. :-(
But I have a 4-server cluster with 8 SAS disks each and would like to use all of them...
I have already raised the max OSD limit via setmaxosd to 34.
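If it helps anyone, the command for that should be:

Code:
ceph osd setmaxosd 34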


Everything looks OK, but no OSD appears...

It does not work from the GUI or from the command line.

Any idea?

otto



Code:
root@SR-PX-S-02:~# pveceph createosd /dev/sde
create OSD on /dev/sde (bluestore)
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sde1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
 
Try my script; it zeroes out the first 2 GB of the drive and then removes the OSD from Ceph.

Code:
#!/bin/bash
# Zap-disk.sh - stop, wipe, and remove a Ceph OSD so the disk can be reused

lsblk
read -p "Enter the device name to be zapped (e.g. sdb): " devname

ceph osd tree
read -p "Enter the OSD number to be zapped (e.g. 5 for osd.5): " osdnr

# note: echo -e is needed so \t prints as a tab
echo -e "*** Running ...\tsystemctl stop ceph-osd@$osdnr"
systemctl stop "ceph-osd@$osdnr"

echo -e "*** Running ...\tumount /var/lib/ceph/osd/ceph-$osdnr"
umount "/var/lib/ceph/osd/ceph-$osdnr"

# zero the first 2 GB of the disk to clear leftover metadata
echo -e "*** Running ...\tdd if=/dev/zero of=/dev/$devname bs=1M count=2048"
dd if=/dev/zero of="/dev/$devname" bs=1M count=2048

# destroy the GPT/MBR structures, then write a fresh empty GPT
echo -e "*** Running ...\tsgdisk -Z /dev/$devname"
sgdisk -Z "/dev/$devname"

echo -e "*** Running ...\tsgdisk -g /dev/$devname"
sgdisk -g "/dev/$devname"

# make the kernel re-read the partition table
echo -e "*** Running ...\tpartprobe /dev/$devname"
partprobe "/dev/$devname"

echo -e "*** Running ...\tceph-disk zap /dev/$devname"
ceph-disk zap "/dev/$devname"

# finally remove the OSD from the cluster
echo -e "*** Running ...\tceph osd out $osdnr"
ceph osd out "$osdnr"

echo -e "*** Running ...\tceph osd crush remove osd.$osdnr"
ceph osd crush remove "osd.$osdnr"

echo -e "*** Running ...\tceph auth del osd.$osdnr"
ceph auth del "osd.$osdnr"

echo -e "*** Running ...\tceph osd rm $osdnr"
ceph osd rm "$osdnr"
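Usage is interactive; run it as root and answer the two prompts:

Code:
chmod +x Zap-disk.sh
./Zap-disk.sh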
 
