Proxmox Ceph - 10K SAS vs Entry-level SSD

vRod

Renowned Member
Jan 11, 2012
Hi all,

I am currently running Ceph with the following configuration:

- 2x DL360 Gen9, 1x DL380 Gen9
- 1x Xeon E5-2690 v3
- 128GB DDR4 ECC (2133 MHz)
- 5x Intel S4510 SSDs as OSDs
- 4x 10Gbps uplink with LACP

So, 3 nodes in total right now, with 15 OSDs.

I have been handed an "old" SAN with 24x 600GB 10K SAS HDDs. I have also been handed an additional DL380 Gen9 with a Xeon E5-2630 v3.

If I acquired 8 additional 600GB SAS drives and added them to the new host, resulting in a total of 32 OSDs, possibly with a single NVMe for WAL in each host - how would that run compared to my current setup with 3 nodes and 15 SSD OSDs? The capacity I had with the SSDs was sufficient at first, but that is no longer the case.
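In case it matters for the comparison, I would expect to create the HDD OSDs roughly like this - just a sketch, assuming pveceph still exposes a separate WAL/DB device via --wal_dev/--db_dev (device names are placeholders):

# per HDD, pointing the WAL at the shared NVMe
pveceph osd create /dev/sdX --wal_dev /dev/nvme0n1
# or with the RocksDB (and implicitly the WAL) on the NVMe
pveceph osd create /dev/sdX --db_dev /dev/nvme0n1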

A second side question: I won't have a free front slot for the OS boot drive. What are people's opinions on using the internal USB 3.0 ports with a single Intel S3700 SSD as the boot drive?

Thank you in advance for all your advice!
 
I have been handed an "old" SAN with 24x 600GB 10K SAS HDDs. I have also been handed an additional DL380 Gen9 with a Xeon E5-2630 v3.
Don't do it. You will likely run into latency issues, and probably storage congestion as well, since all writes will have to go through the SAN's disk controller.

If I acquired 8 additional 600GB SAS drives and added them to the new host, resulting in a total of 32 OSDs, possibly with a single NVMe for WAL in each host - how would that run compared to my current setup with 3 nodes and 15 SSD OSDs? The capacity I had with the SSDs was sufficient at first, but that is no longer the case.
Depends on your workload. You will need to test it.
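As a rough capacity check: 32x 600GB is about 19.2 TB raw, i.e. roughly 6.4 TB usable with the default 3x replication, before leaving headroom for rebalancing. For performance, a quick way to compare the two OSD classes is a rados bench run against a test pool on each - a sketch, where the pool name "testpool" is just a placeholder:

rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup   # 60s write test, keep the objects
rados bench -p testpool 60 seq -t 16                        # sequential reads of those objects
rados bench -p testpool 60 rand -t 16                       # random reads
rados -p testpool cleanup                                   # remove the benchmark objects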

A second side question: I won't have a free front slot for the OS boot drive. What are people's opinions on using the internal USB 3.0 ports with a single Intel S3700 SSD as the boot drive?
I've never tried it, but consider that the Ceph MON will live on the OS partition as well. Same as above: latency will be key.
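If you want to sanity-check the USB-attached SSD before committing, a small sync-write test with fio on the mounted OS filesystem gives a feel for the latency the MON would see - a sketch, the file path is just an example:

fio --name=mon-latency --filename=/var/lib/ceph/fio-test --size=1G --bs=4k --rw=randwrite --fsync=1 --runtime=60 --time_based
# delete /var/lib/ceph/fio-test afterwards and look at the reported fsync completion latencies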
 
Running just the plain OS on the SSD via USB should be no issue (disable logging and everything else that isn't required), but as Alwin said, if you also need to store the MON data on that USB disk you may run into problems.
However, if you can place your MON(s) on a separate server, there is no issue. There is also a company that offers network OS boot for Ceph, so no local OS storage is required in each server.
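If you go that route, moving a monitor off the USB-booted node is straightforward in Proxmox - a sketch, assuming the pveceph mon commands of current PVE releases; <nodename> is a placeholder:

pveceph mon create              # run on the node that should carry the monitor
pveceph mon destroy <nodename>  # run to remove the monitor from the USB-booted node, if it has one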
 
