3-node cluster with Ceph:
3x Optane
3x Micron 9300
When creating 2 OSDs per Micron drive using "lvm batch", I get the following error (tested with Ceph v15.2.8 and v15.2.9):
stderr: Volume group "ceph-38ac9489-b865-40cd-bae3-8f80cdaa556a" has insufficient free space (381544 extents): 381545 required.
It works on the Optanes, so the error depends on the hardware used.
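For reference, this is roughly the kind of command that fails for me (the device path is just an example, substitute your own NVMe device):
ceph-volume lvm batch --osds-per-device 2 /dev/nvme1n1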
The error has been confirmed upstream by Ceph:
https://tracker.ceph.com/issues/47758
A fix has been tested and merged into the GitHub master branch:
https://github.com/ceph/ceph/pull/38687/files
When will this fix land in the Proxmox Ceph packages/repo?
Are there differences between the Ceph packages in the Proxmox repo (I'm currently using the no-subscription repo) and the official GitHub sources?
P.S.: I know that the Proxmox GUI only supports creating 1 OSD per drive.
I also know the Proxmox team believes that multiple OSDs per drive don't bring enough of a performance benefit.
But my testing showed about 20% higher random write performance with 2 OSDs per drive on the Optanes (rados bench, 4k block size, 8 threads and above; a sample invocation is below).
That's why I want to do it this way.
Performance is limited by the OSD daemon, so other NVMe SSDs reach the same speed.
There is no benefit for sequential reads/writes because I'm maxing out my network.
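For anyone who wants to reproduce the benchmark, this is roughly the rados bench invocation I mean (pool name and duration are placeholders, adjust to your setup):
rados bench -p testpool 60 write -b 4096 -t 8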