I messed around with creating new device classes and rules and such, which works great. But under Host >> Disk >> Usage it is not showing the correct usage for the NVMe drives.
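For reference, the class and rule were set up roughly like this (the rule name nvme_rule and the pool name are just what I used, and the OSD ID is a placeholder):

ceph osd crush set-device-class nvme osd.<id>
ceph osd crush rule create-replicated nvme_rule default host nvme
ceph osd pool set <pool> crush_rule nvme_rule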
Does this NVMe hold the DB+WAL for the other OSDs? Or was it created as a separate OSD?
Also, since Ceph Nautilus the ceph-disk utility previously used for OSD creation does not exist anymore. Instead, ceph-volume is used, which creates LVM-based OSDs.
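If you are unsure, ceph-volume on that node shows how each device is used: an NVMe that only holds DB+WAL appears as a [db]/[wal] device under the other OSDs, while a separate OSD has its own [block] entry (the device name below is just an example):

ceph-volume lvm list
lsblk /dev/nvme0n1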
There are 6 NVMe OSDs that were created; they initially got the class ssd by default, so afterwards I unset the class and set it to nvme via the CLI for those 6.
There are 3 more, however, that I added later and that defaulted to a blank class, because I had disabled the automatic class update per some guides I found. For those I only needed to set the class. They show as LVM in the Disks view, even though the pool for the nvme class is RBD (PVE).
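For reference, the unset/set per OSD was done roughly like this (the OSD IDs are placeholders; osd_class_update_on_start is the option those guides have you disable so the class is not overwritten on restart):

ceph config set osd osd_class_update_on_start false
ceph osd crush rm-device-class osd.12 osd.13
ceph osd crush set-device-class nvme osd.12 osd.13
ceph osd tree   # the class column should now show nvme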
The usage column is just an indication of use that our code tries to provide. Depending on the setup it may only be able to detect 'LVM'. It has no impact on the OSDs themselves.
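If you want to verify the OSDs themselves regardless of that label, Ceph's own view is authoritative, for example:

ceph osd df tree        # per-OSD device class, usage and utilisation
ceph-volume lvm list    # run on the node to see which disk backs which OSD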
Hi,
I have the same issue.
A newly created OSD on /dev/sdc is also LVM-based, yet it is named "Ceph osd.xx (Bluestore)",
but /dev/nvme0n1 is only named "LVM".
So it is clearly possible to fix this bug?