[SOLVED] Add WAL/DB to CEPH OSD after installation.

Forgot to mention: ceph-volume now offers this ability as well.

https://docs.ceph.com/en/latest/ceph-volume/lvm/migrate/
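For anyone who doesn't want to click through: that page documents dedicated subcommands for attaching a new DB or WAL to an existing OSD. The general shape (the VG/LV names here are placeholders, not my actual values):

ceph-volume lvm new-db --osd-id <ID> --osd-fsid <OSD-FSID> --target <vg>/<db-lv>
ceph-volume lvm new-wal --osd-id <ID> --osd-fsid <OSD-FSID> --target <vg>/<wal-lv>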
So, based on these docs, here is what I used:
- First, set the OSD to noout
ceph osd set-group noout osd.0
- Then stop it
systemctl stop ceph-osd@0
- Create the LVM volume
ceph-volume lvm create --data /dev/sdc
- Check the LVM volumes created
ceph-volume lvm list
- Create the new-db
ceph-volume lvm new-db --osd-id 0 --osd-fsid d1a7e434-53b0-4454-9060-851ae8ebe785 --target ceph-f8683677-0651-4d19-8e19-83aff7c1cf07/osd-block-189361f7-7b82-42c0-bbac-833dd5b2a5454

Works fine.
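Side note: to avoid retyping those long UUIDs, the osd fsid can be looked up first. A minimal sketch, assuming osd.0 and that jq is installed (the variable name is mine):

# Grab the fsid of osd.0 from the cluster map (requires jq)
OSD_FSID=$(ceph osd dump --format json | jq -r '.osds[] | select(.osd==0) | .uuid')
# ceph-volume lvm list also prints the "osd fsid" and the VG/LV names to use as --target
ceph-volume lvm list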

Now do the same for the new-wal:

ceph-volume lvm create --data /dev/sdd
ceph-volume lvm list
ceph-volume lvm new-wal --osd-id 0 --osd-fsid d1a7e434-53b0-4454-9060-851ae8ebe785 --target ceph-e8798be2-c799-4c31-b067-

After that I got the result shown in the attached screenshot.
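To double-check that the DB and WAL really got attached, the ceph-volume listing can be grepped for the new device sections. A sketch, assuming osd.0 is the only OSD on this node:

# The output should now show [db] and [wal] sections next to [block]
ceph-volume lvm list | grep -E '\[block\]|\[db\]|\[wal\]'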

Everything seemed to be OK, but after starting the OSD and unsetting noout, I saw this HEALTH_WARN:
HEALTH_WARN: 1 OSD(s) experiencing BlueFS spillover
osd.0 spilled over 768 KiB metadata from 'db' device (22 MiB used of 280 GiB) to slow device

Since this is just a lab and I am running Proxmox on Proxmox with nested virtualization, I assume this has something to do with the devices, which are pretty much fake/pseudo devices.
In a real scenario with real devices this is not going to happen, I suppose.
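If anyone wants to dig into the spillover instead of writing it off, the BlueFS counters can be inspected and a manual compaction sometimes moves the spilled metadata back to the DB device. A sketch, assuming osd.0 and shell access to the node running it:

# Per-device BlueFS usage (compare db_used_bytes vs slow_used_bytes);
# "ceph daemon" must be run on the node hosting osd.0
ceph daemon osd.0 perf dump bluefs
# Trigger a manual compaction; small spillovers often clear after this
ceph tell osd.0 compact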

That's it.
 
OK.
Turns out I needed to do it in two steps:
1 - In order to create the separate DB and add it to the OSD, I used the script provided by @mitcHELLspawn. Thanks a lot!
So I did this:
vgcreate cephdb /dev/sdc
add-db-to-osd.sh -b 280G -d /dev/sdc -o 0
Works like a charm.
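To verify, the OSD metadata should now report a dedicated DB device. A sketch, assuming osd.0 (the field names are what my Ceph version reports):

# "bluefs_dedicated_db" should be 1 and "bluefs_db_devices" should name the new disk
ceph osd metadata 0 | grep -E 'bluefs_dedicated_db|bluefs_db_devices'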
2 - In order to create the separate WAL and add it to the OSD, I did the following:
- set the OSD to noout
- stop osd.0
- create an LVM VG named cephwal: vgcreate cephwal /dev/sdd
- create an LVM LV named cephwal1: lvcreate -l 100%FREE -n cephwal1 cephwal
- create the new-wal: ceph-volume lvm new-wal --osd-id 0 --osd-fsid OSD-FSID --target cephwal/cephwal1
- start the OSD again and unset noout (commands sketched below)
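The last bullet, spelled out (assuming osd.0 was the only OSD stopped):

# Bring the OSD back and clear the noout flag set earlier
systemctl start ceph-osd@0
ceph osd unset-group noout osd.0
# The spillover warning should be gone now
ceph health detail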

And that's it!
No more HEALTH_WARN: 1 OSD(s) experiencing BlueFS spillover!!!

Thanks a lot guys for the tips.
 