Hello community!
We have deployed our first small Proxmox cluster along with Ceph, and so far we've had a great experience with it. We're running a traditional VM workload: most VMs are idling, and most of the Ceph load comes from bursts of small files, with the exception of a few SQL servers that can generate some bigger transactions during peak business hours.
We're thinking about adding two more nodes to our three-node cluster. The HW is almost identical, with the exception of the NVMe drives, which are bigger.
On the current three-node cluster we have two OSDs per NVMe (3 NVMes per node, 18 OSDs total). The new NVMes would be 2-3 times the size of the current ones.
I'd like to hear your feedback and experience on whether we should continue setting up multiple OSDs per drive, and if so, how many OSDs per drive make sense, especially in terms of CPU and RAM overhead.
Since we have 2 OSDs per drive currently and the new drives will be ~2.5 times the size of the current NVMes, we were thinking about setting up 5 OSDs on each of the new drives. Our concern is that this could eat up most of the RAM and CPU, so loading the cluster with more VMs would leave the VMs and OSDs fighting over resources.
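To put that concern in numbers, here is our minimal back-of-envelope sketch. The 4 GiB figure is Ceph's default osd_memory_target, the threads-per-OSD figure is just a rough rule of thumb, and we're assuming the new nodes also carry 3 NVMes each like the existing ones:

```python
# Back-of-envelope overhead estimate for the proposed OSD layout.
# Assumptions (not measured on our cluster): Ceph's default
# osd_memory_target of 4 GiB per OSD, a rough 2 CPU threads per
# busy NVMe OSD, and 3 NVMes per new node like the existing ones.
OSD_MEM_GIB = 4          # Ceph default osd_memory_target (4 GiB)
THREADS_PER_OSD = 2      # rough rule of thumb for NVMe OSDs

layouts = {
    "existing node": 3 * 2,  # 3 NVMes x 2 OSDs each
    "new node":      3 * 5,  # 3 bigger NVMes x 5 OSDs each (proposed)
}

for label, osds in layouts.items():
    print(f"{label}: {osds} OSDs -> "
          f"~{osds * OSD_MEM_GIB} GiB RAM of 512 GiB, "
          f"~{osds * THREADS_PER_OSD} of 72 threads")
```

On paper that's only ~60 GiB of RAM and ~30 threads per new node, but osd_memory_target is a target rather than a hard limit, and OSDs can exceed it during recovery/backfill, which is the part we can't easily estimate.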
Current setup for reference (per node):
72 x Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz (2 sockets), 512 GB RAM, 3x Intel P4610 3.2TB NVMe
Thank you very much; I'm looking forward to reading your replies. Have a nice rest of the day!