Extend LVM of Ceph DB/WAL Disk

mihanson

I have a 3-node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs, and on each node the 4 OSDs share a single Intel enterprise SSD for the Ceph OSD database (DB/WAL). I am going to add a 5th OSD HDD to each node, along with a second Intel enterprise SSD for use with the Ceph OSD database. Is it advisable to extend the LVM of the first Ceph OSD DB onto the new SSD, creating one large SSD-backed Ceph OSD DB spanning 2 x SSD? Or is it safer to keep the two Ceph OSD DB drives as two discrete LVMs that do not share resources? My gut says to keep them separate, but I wanted to check whether there is an advantage to extending the existing OSD DB LVM.
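For reference, a quick way to see which physical SSD backs each OSD's DB LV (standard ceph-volume and LVM commands; run on each node):

ceph-volume lvm list                       # lists every OSD with the LV backing its block.db
lvs -o lv_name,vg_name,devices,lv_size     # shows which physical disk each LV lives on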

Thank you!
Mike
 
You have to keep in mind that if the SSD holding the Ceph DB fails, then all the OSDs whose DB lives on it fail with it. So in order to keep the number of failed OSDs to a minimum, you should use several DB disks and keep the number of OSDs per DB disk low (I believe the recommendation was a maximum of 4-5 per DB disk).

Also, extending the DB LVM onto a second disk increases the risk of failure, since the DB will become unavailable if either of the two disks fails. Instead of each SSD taking down only its own OSDs, a single SSD failure would then take down all of them.

So I would not extend the LVM but add a second DB.
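As a rough sketch of what that would look like when adding the 5th OSD (the device names /dev/sde for the new HDD and /dev/sdf for the new SSD are placeholders, and the DB size is only an example; adjust to your hardware):

# create the new OSD with its DB on the new SSD, leaving the existing DB SSD untouched
pveceph osd create /dev/sde --db_dev /dev/sdf --db_size 60

pveceph will create a new LVM volume group on the second SSD and carve the DB LV out of it, so the two DB disks stay completely independent.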
 
Also, extending the DB LVM onto a second disk increases the risk of failure, since the DB will become unavailable if either of the two disks fails.
This was my "gut" thinking as well. If you lose one drive in a DB LVM that spans both SSDs, you lose the entire DB and, with it, all the data on the OSD drives it serves.
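If you want to double-check which physical device a given OSD's DB actually sits on (OSD id 0 is just an example), the OSD metadata shows it:

ceph osd metadata 0 | grep bluefs_db    # bluefs_db_devices names the disk behind this OSD's block.db

That makes it easy to enumerate exactly which OSDs would go down together with each DB SSD.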