Need advice on SSD setup for Ceph

brucexx

Renowned Member
Mar 19, 2015
I am planning to get 8 x PX05SMB160 SSD drives and spread them across 4 Ceph servers, two per server. They are decent 1.6TB SAS drives: 1900 MiB/s read, 850 MiB/s write, 270,000 IOPS read, 100,000 IOPS write, DWPD 10.

I am currently using 13K SAS spinners with 6 OSDs per server (3 servers, and I will be adding one more). We also have a bonded 10 Gbps network with at least 2 x 10 Gbps ports for the public and private Ceph networks. I want to create two pools, one with the remaining 13K spinners and one with the SSDs, but I am wondering whether 2 OSDs per server is enough to achieve a significant increase in performance. Does anybody have experience with a low number of OSDs per server?
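For the split itself I had device-class based CRUSH rules in mind, roughly like this (just a sketch; the pool names, PG counts, and rule names are placeholders I made up):

Code:
# Rules that pin placement to a device class (assumes Luminous or later with classes detected)
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd
# One pool per rule; PG counts here are placeholders, size them for your cluster
ceph osd pool create ssd-pool 128 128 replicated replicated_ssd
ceph osd pool create hdd-pool 512 512 replicated replicated_hdd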

Any advice appreciated.

Thx
 

What will the use case be for the SSD Pool?

You will get more performance out of the SSD pool than the SAS spinner pool even with a slightly lower OSD count. The number of OSDs per server really does not have that great an effect; what is important is that each OSD server still has plenty of CPU and RAM available for the OS plus the 2 SSD OSDs.
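If it helps, a quick way to sanity-check the per-OSD memory budget is osd_memory_target (sketch only; the 4 GiB below is just the usual BlueStore default, not a sizing recommendation for your boxes):

Code:
# Show the current per-OSD memory target (BlueStore defaults to roughly 4 GiB)
ceph config get osd osd_memory_target
# Example only: set it cluster-wide to 4 GiB if you have the RAM to spare
ceph config set osd osd_memory_target 4294967296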
 
The use is just to accommodate more systems that need more intensive disk operations. The majority of our systems (Linux) are almost idle, but we have some heavy users; I have kept those on drives local to the Proxmox hosts but want to move them to Ceph. We are currently utilizing only about 25% of the link speed for Ceph, so we have more network capacity to spare.
 

Then you should have no issues. When it comes to Ceph it is more about having enough nodes to split the replication across than the number of OSDs per node. As you said, you will have them across 4 servers, so with the standard replication of 3 you can lose a node and still have a spare node onto which the 3rd replica can be rebuilt.
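For example (the pool name below is just a placeholder), the usual settings for that layout are 3 replicas with a minimum of 2, and you can check how the OSDs and device classes spread across the hosts:

Code:
# Placeholder pool name; 3 replicas, keep serving I/O as long as 2 are intact
ceph osd pool set ssd-pool size 3
ceph osd pool set ssd-pool min_size 2
# Verify OSD and device-class distribution across the hosts
ceph osd df tree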
 
