[SOLVED] Ceph - failing to create multiple OSDs per drive because the requested extent is too large

Larion

New Member
Mar 6, 2021
3-node cluster with Ceph v15.2.8
3x Optane
3x Micron 9300
When I create 2 or more OSDs per Micron using "ceph-volume lvm batch", I get this error:
stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required.
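
For reference, the batch call is roughly this (the device path is just an example for one of the Microns):

ceph-volume lvm batch --osds-per-device 2 /dev/nvme1n1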

No issues on the Optanes; the error depends on the hardware used.
This error has been confirmed upstream in the Ceph tracker:
https://tracker.ceph.com/issues/47758
and a proposed fix was tested and merged into the GitHub master branch:
https://github.com/ceph/ceph/pull/38687/files


When will this fix be included in the Proxmox Ceph packages?
Are the Ceph packages in the Proxmox Ceph repo different from the "official" Ceph packages?


P.S.: I know that the Proxmox GUI does not support creating more than one OSD per drive, and I know the Proxmox team believes there are not enough performance benefits to putting more than one OSD on a drive. But in my testing I got about 20% higher random write performance (rados bench, 4k, 8 threads and above) with 2 OSDs per drive on the Optanes. (Sequential read/write was unaffected because I'm maxing out my network.)
This is the case with other types of NVMe SSDs too, because the OSD daemon limits the random performance.
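
For anyone who wants to reproduce the numbers, the benchmark commands were roughly along these lines (pool name and runtime are placeholders):

rados bench -p testpool 60 write -b 4096 -t 8 --no-cleanup   # 4 KiB objects, 8 concurrent ops, keep objects for read tests
rados bench -p testpool 60 rand -t 8                          # random reads of the objects left behind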
 
Besides waiting until this patch becomes available in the PVE Ceph repo, have you considered splitting up the Microns into multiple namespaces? These will show up as separate disks (nvme0n1, nvme0n2, ...).
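
A rough sketch of how that could look with nvme-cli, assuming the drive supports namespace management (sizes, namespace IDs and the controller ID below are placeholders; check "nvme id-ctrl" and the drive documentation first):

# delete the existing namespace (this destroys all data on it)
nvme delete-ns /dev/nvme0 -n 1
# create two namespaces of half the usable capacity each (sizes are in LBAs, placeholder values)
nvme create-ns /dev/nvme0 --nsze=1562500000 --ncap=1562500000 --flbas=0
nvme create-ns /dev/nvme0 --nsze=1562500000 --ncap=1562500000 --flbas=0
# attach both namespaces to the controller (controller ID 0 is a placeholder)
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
# rescan so that nvme0n1 and nvme0n2 show up
nvme ns-rescan /dev/nvme0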
 
Thanks for the tip with the namespaces.

Are the Ceph packages in the Proxmox Ceph repo different from the "official" Ceph packages?