Hi
We'll be evaluating Proxmox & Ceph over the coming weeks and want to make sure we have a good starting point for benchmarking. We've been running a hyperconverged all-flash platform for about 7 years, but it's not based on Ceph. We're reading heaps trying to understand the best deployment model.
The tuning guide for all-flash deployments on the ceph.com site states that running a single OSD per physical NVMe device cannot take full advantage of the available performance. We will be running 100% NVMe devices for storage (2TB drives), so this is important to us. That article was posted over 2 years ago, so I'm wondering whether it's still valid given the improvements to Ceph since then.
The article recommends running 4 OSDs per device. If that's still the best configuration, I assume we'll have to set it up manually, as I haven't seen any way to define an OSD through the GUI that doesn't consume the entire disk. Also, it looks like Ceph uses 2 partitions per OSD (metadata and storage). If we need to create 8 partitions to support 4 OSDs, is there a recommended size ratio between the metadata and storage partitions?
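For what it's worth, my working assumption is that we'd have to drop to the CLI and use ceph-volume's batch mode for this, something along these lines (the device path is just an example):

ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

Please correct me if that isn't the right tool on a Proxmox node, or if the GUI/pveceph can actually do this and I've just missed it.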
Any feedback on getting the most out of an all-NVMe platform would be appreciated.
Thanks
David