Here is my final script, which seems to work well and keeps the old OSD IDs intact:
Code:
#!/usr/bin/env bash
# Convert the given filestore OSDs to bluestore while keeping their old IDs.
ids=($1)

function migrate {
    ID=$1
    echo "migrating OSD $ID"
    re='^[0-9]+$'
    if ! [[ $ID =~ $re ]] ; then
        echo "error: numeric OSD id is needed" >&2; exit 1
    fi
    # get the device name from the ID of the OSD
    DEVICE=$(mount | grep "/var/lib/ceph/osd/ceph-$ID" | grep -o '/dev/[a-z.-]*')
    echo "Device: $DEVICE"
    # check if the drive still needs to be converted
    if ceph osd metadata $ID | grep osd_objectstore | grep -q 'filestore' ; then
        echo "filestore found - converting"
    else
        echo "bluestore found... skipping"
        return
    fi
    # show how many OSDs run on which objectstore
    ceph osd count-metadata osd_objectstore
    ceph osd out $ID
    # wait until all PGs have been backfilled away from this OSD
    while ! ceph osd safe-to-destroy $ID; do sleep 10; done
    echo "Destroying OSD $ID $DEVICE in 5 seconds... hit ctrl-c to abort"
    sleep 5
    echo "stopping..."
    systemctl stop ceph-osd@$ID
    echo "destroying..."
    systemctl kill ceph-osd@$ID
    sleep 5
    umount /var/lib/ceph/osd/ceph-$ID
    sleep 5
    echo "zap disk"
    ceph-disk zap $DEVICE
    sleep 5
    echo "osd destroy"
    ceph osd destroy $ID --yes-i-really-mean-it
    echo "prepare bluestore on $DEVICE with id $ID"
    sleep 5
    # recreate the OSD with the same id, placing block.db and block.wal on the NVMe
    ceph-disk prepare --bluestore $DEVICE --osd-id $ID --block.wal /dev/nvme0n1 --block.db /dev/nvme0n1
    echo "finished converting OSD.$ID"
}

# parse command line parameters - pass the OSD ids like: ./blue "1 2 3 4 5"
for i in "${ids[@]}"
do
    migrate $i
done
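As a side note, instead of typing the ids by hand they can also be collected from whatever is mounted on the host. A minimal sketch, assuming the standard /var/lib/ceph/osd/ceph-<id> mount points (the id extraction here is purely illustrative):
Code:
# collect the ids of all OSDs mounted on this host and feed them to the script in one go
ids=$(ls -d /var/lib/ceph/osd/ceph-* | sed 's|.*/ceph-||' | tr '\n' ' ')
./blue "$ids"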
Running the script gives me the following partition table on my /dev/nvme0n1:
Code:
parted /dev/nvme0n1
GNU Parted 3.2
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown (unknown)
Disk /dev/nvme0n1: 400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1075MB 1074MB ceph block.db
2 1075MB 1679MB 604MB ceph block.wal
3 1679MB 2753MB 1074MB ceph block.db
4 2753MB 3356MB 604MB ceph block.wal
5 3356MB 4430MB 1074MB ceph block.db
6 4430MB 5034MB 604MB ceph block.wal
7 5034MB 6108MB 1074MB ceph block.db
8 6108MB 6712MB 604MB ceph block.wal
9 6712MB 7786MB 1074MB ceph block.db
10 7786MB 8390MB 604MB ceph block.wal
11 8390MB 9463MB 1074MB ceph block.db
12 9463MB 10.1GB 604MB ceph block.wal
13 10.1GB 11.1GB 1074MB ceph block.db
14 11.1GB 11.7GB 604MB ceph block.wal
15 11.7GB 12.8GB 1074MB ceph block.db
16 12.8GB 13.4GB 604MB ceph block.wal
17 13.4GB 14.5GB 1074MB ceph block.db
18 14.5GB 15.1GB 604MB ceph block.wal
19 15.1GB 16.2GB 1074MB ceph block.db
20 16.2GB 16.8GB 604MB ceph block.wal
21 16.8GB 17.9GB 1074MB ceph block.db
22 17.9GB 18.5GB 604MB ceph block.wal
Do you think those automatically created partitions are big enough? From the documentation it seems the defaults should be fine. Somehow (compared to the filestore setup) it uses much less space on the drive than expected, which leaves a lot free for my new OSD pool. That will be partition 23 then...
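For reference, 1074 MB / 604 MB look like the ceph-disk defaults (1 GiB for block.db, 576 MiB for block.wal). As far as I understand, ceph-disk takes those sizes from the bluestore_block_db_size / bluestore_block_wal_size options, so bigger partitions could be requested in ceph.conf before running the script. A sketch with assumed example values (check the option names against your release):
Code:
# /etc/ceph/ceph.conf - sizes in bytes, values here are just an example
[osd]
bluestore_block_db_size  = 32212254720   # 30 GiB per OSD for block.db
bluestore_block_wal_size = 2147483648    # 2 GiB per OSD for block.wal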
As soon as I find out how to do that, I will update this thread. Any recommendations are welcome! I hope this helps other people who run into the same situation when adding NVMes to an existing setup.
Tip: if you mark out (ceph osd out) all the OSDs on one host and wait until everything has been backfilled to your other hosts before starting this script, it only takes a few minutes. Otherwise the script marks out each disk and waits until no PGs are left on it before destroying it. It should be safe running it disk by disk, but it will take ages!
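A minimal sketch of that approach, assuming OSDs 1-5 live on this host and that safe-to-destroy accepts a list of ids on your release:
Code:
# mark out every OSD on this host in one go (example ids)
ceph osd out 1 2 3 4 5
# wait until all their PGs have been backfilled to the other hosts
until ceph osd safe-to-destroy 1 2 3 4 5; do sleep 60; done
# now the conversion script finishes each disk in minutes
./blue "1 2 3 4 5"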