Hello,
We upgraded from Proxmox 4.4 and from Hammer to Luminous successfully.
My Ceph cluster is healthy.
I'm using an SSD for the journal.
I tried to migrate one OSD to BlueStore with these steps (a rough CLI sketch follows the list):
- out
- stop
- wait for the rebalance/backfill to finish
- destroy
- create the new OSD
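For reference, a rough CLI sketch of those steps with the stock Luminous tools (osd.0, /dev/sdb and /dev/nvme0n1 are just my devices; this is not necessarily the exact invocation I used):
Code:
ceph osd out 0                               # stop placing data on this OSD
systemctl stop ceph-osd@0                    # stop the daemon
ceph -s                                      # wait here until the rebalance/backfill is finished
ceph osd destroy 0 --yes-i-really-mean-it    # mark the OSD as destroyed (Luminous+)
ceph-disk zap /dev/sdb                       # wipe the old data disk
ceph-disk prepare --bluestore /dev/sdb --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1
ceph-disk activate /dev/sdb1                 # or recreate it through the Proxmox GUI / pveceph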
=> the OSD was created, but the WAL size was only 1 GB
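If I understand it right, the DB/WAL partition sizes are taken from ceph.conf when the OSD is prepared, so they have to be set beforehand. A sketch, assuming Luminous ceph-disk reads these options at prepare time (the sizes are just example values):
Code:
# on Proxmox the cluster-wide config is /etc/pve/ceph.conf
cat >> /etc/pve/ceph.conf << 'EOF'
[osd]
bluestore_block_db_size  = 32212254720    # 30 GiB block.db partition (example value)
bluestore_block_wal_size = 2147483648     # 2 GiB block.wal partition (example value)
EOF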
I deleted it again.
On the CLI, I deleted the partitions on the data disk and on the journal disk.
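A sketch of that kind of cleanup (the partition number on the journal disk is only an example, check with lsblk or sgdisk first):
Code:
ceph-disk zap /dev/sdb             # wipe the whole data disk (partitions + GPT)
sgdisk --print /dev/nvme0n1        # find the journal/DB/WAL partition of the old OSD
sgdisk --delete=5 /dev/nvme0n1     # delete only that partition (number is an example)
partprobe /dev/nvme0n1             # make the kernel re-read the partition table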
I created the OSD again, but in filestore mode => I want to take the time to really understand how BlueStore works first.
Creation went fine: no errors.
Code:
create OSD on /dev/sdb (xfs)
using device '/dev/nvme0n1' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Setting name!
partNum is 4
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=244188597 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0, rmapbt=0, reflink=0
data = bsize=4096 blocks=976754385, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=476930, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
The operation has completed successfully.
TASK OK
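For reference, the rough CLI equivalent of that task (a sketch; the GUI/pveceph wrapper may pass slightly different arguments):
Code:
ceph-disk prepare --filestore --fs-type xfs /dev/sdb /dev/nvme0n1   # data disk + SSD journal
ceph-disk activate /dev/sdb1       # should mount the OSD under /var/lib/ceph/osd/ceph-<id>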
I can see the new partitions on each disk, but I don't see a new OSD in /var/lib/ceph/osd/
(I should see a new osd.0 there.)
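This is roughly what I would expect to be able to check at this point (a sketch, device names as above):
Code:
ceph-disk list                     # shows which partitions are prepared / active
mount | grep /var/lib/ceph/osd     # the OSD data partition should be mounted here
ceph-disk activate /dev/sdb1       # manual activation, in case udev did not trigger it
ls /var/lib/ceph/osd/              # should now contain a ceph-0 directory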
Why?
Nicolas