Ceph Planning

adamb

Well-Known Member
Mar 1, 2012
We need some cheap bulk data storage with snapshot functionality that can scale easily. The data residing on this storage will basically be written and forgotten about. It would be read minimally.

I was leaning towards a basic three-node Ceph cluster with either erasure coding or 2 copies. We would also have an identical Ceph cluster in our other data center, which we would replicate to. We have a 40Gb dark fiber connection between the two data centers.
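As a back-of-envelope check, here is a sketch comparing usable capacity under 2x replication versus erasure coding for the node counts and disk sizes described in this thread. The 2+1 EC profile is an assumption: with a per-host failure domain, a three-node cluster cannot host wider profiles like 4+2.

```python
# Back-of-envelope usable capacity for the proposed 3-node cluster.
# Assumes 8x 4 TB OSDs per node, as described in the post above.
nodes = 3
osds_per_node = 8
tb_per_osd = 4

raw_tb = nodes * osds_per_node * tb_per_osd  # 96 TB raw

# Replicated pool with size=2: usable = raw / size
replicated_usable_tb = raw_tb / 2  # 48 TB

# Erasure-coded pool with k=2, m=1 (assumed profile; the widest a
# 3-node cluster supports with crush-failure-domain=host):
# usable = raw * k / (k + m)
k, m = 2, 1
ec_usable_tb = raw_tb * k / (k + m)  # 64 TB

print(raw_tb, replicated_usable_tb, ec_usable_tb)
```

Either way, remember Ceph's near-full and full ratios mean you would plan to fill only a fraction of that usable figure.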

I am leaning towards the following in each node.

CPU: E5-1620
RAM: 64GB
8x 4TB spinning rust
2x Intel DC S3710 for journals (thinking 400GB models)
2x Intel DC S3610 for OS
Redundant 10Gb network

Do you guys think the above hardware would be solid?

We already have a separate cluster with proxmox front ends, so the above nodes would be specifically for ceph and this bulk data.
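For what it's worth, the two pool layouts under consideration could be sketched roughly like this. Pool names and PG counts are placeholders, and the 2+1 profile is an assumption driven by the three-node limit with a per-host failure domain:

```shell
# Replicated pool with 2 copies. Note size=2 trades safety for
# capacity; many operators prefer size=3 with min_size=2.
ceph osd pool create bulk-replicated 256 256 replicated
ceph osd pool set bulk-replicated size 2

# Erasure-coded alternative: a 2+1 profile, the widest a 3-node
# cluster can host when the failure domain is the host.
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
ceph osd pool create bulk-ec 256 256 erasure ec-2-1
```

The EC pool gives more usable space per raw TB, but recovery and small-write overhead are higher, which matters less for write-once archive data.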
 

adamb

Well-Known Member
Mar 1, 2012
For Bluestore Ceph storage, SATA SSDs as journal/WAL/DB are not fast enough. It is best to use NVMe SSDs. Keep in mind that Filestore is being phased out.
Yea, but I am not really looking for high performance. I just need bulk storage with OK performance that can scale, has snapshot capabilities, and is redundant.
 

sg90

Member
Sep 21, 2018
I'm almost wondering if using SSDs for journals is overkill for my scenario as well.
If you're just looking for archive storage and not high performance, forget the SSDs and just colocate the WAL/DB on the same hard disk as the OSD. You remove a single/double point of failure, since a single disk failure will only affect one OSD instead of multiple OSDs.

With your requirements, I doubt you would notice much difference in performance without a separate WAL/DB disk.
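For reference, colocation is also the simplest deployment path: with `ceph-volume`, the WAL and DB land on the data device unless you explicitly point them elsewhere. Device names below are placeholders:

```shell
# Bluestore OSD with WAL/DB colocated on the data disk (the default
# when --block.db / --block.wal are not given):
ceph-volume lvm create --bluestore --data /dev/sdb

# Versus splitting the DB out to a separate fast device:
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```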
 

adamb

Well-Known Member
Mar 1, 2012
If you're just looking for archive storage and not high performance, forget the SSDs and just colocate the WAL/DB on the same hard disk as the OSD. You remove a single/double point of failure, since a single disk failure will only affect one OSD instead of multiple OSDs.

With your requirements, I doubt you would notice much difference in performance without a separate WAL/DB disk.
That is the same conclusion I have come to as well. Looks like it's time to start testing. Thanks for the input!
 
Jul 31, 2018
You don't need the SSDs. I have a lab setup with spinning disks and not even an E5, just a 4-core E5620 with 32GB of memory, and it works fine. Slow, but fine, and it's using Bluestore.
 
