Is it worth making Ceph OSDs from large-capacity disks?

emptness

Member
Aug 19, 2022
Hi!
I have looked for information on the forums about this question.
Some people believe that using large-capacity disks for Ceph OSDs is highly undesirable, for example 20 TB each.
Can anyone share their experience on this?
We are building a cluster of powerful servers (Intel Xeon 5320, 512 GB RAM) with a 100 Gbit network. Is the disk size critical for us or not?
 
I assume you mean 20TB HDDs?

In that case you won't get acceptable performance for almost anything, especially VM/CT workloads, by having just a few of those.
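For a rough sense of why, here is a back-of-the-envelope sketch. The per-disk figures (random IOPS and sequential throughput for a 7200 rpm HDD) are assumptions for illustration, not measurements of your drives: random IOPS per disk stay roughly constant as capacity grows, so IOPS per TB collapse, and refilling a single failed 20 TB OSD takes a long time even when the network is fast.

```python
# Back-of-the-envelope numbers for large HDD OSDs.
# All per-disk figures below are assumptions, not measurements.

HDD_CAPACITY_TB = 20
HDD_RANDOM_IOPS = 150   # assumed random IOPS for a 7200 rpm HDD
HDD_SEQ_MBPS = 200      # assumed sustained sequential throughput

iops_per_tb = HDD_RANDOM_IOPS / HDD_CAPACITY_TB
print(f"random IOPS per TB: {iops_per_tb:.1f}")   # ~7.5 IOPS/TB

# Best-case time to backfill one full 20 TB OSD if recovery is limited
# by the surviving HDDs' sequential speed rather than the 100 Gbit network:
refill_hours = (HDD_CAPACITY_TB * 1e6) / HDD_SEQ_MBPS / 3600
print(f"best-case refill of one OSD: {refill_hours:.0f} h")  # ~28 h
```

In practice recovery competes with client I/O, so real backfill times are longer. That is the main reason many smaller OSDs, or SSD/NVMe OSDs, are usually preferred for VM/CT workloads.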

What kind of setup are you planning?
Which disks are you currently looking at?
How many disks per node?
How many nodes?
 
Our configuration is:
2x Intel Xeon Gold 6336Y,
512 GB RAM,
dual 100 Gbit/s NIC,
2x SSD 480 GB (OS),
4x HDD 20 TB (pool1),
3x Samsung SSD 7.68 TB SAS3 12 Gbit/s, TLC, 2100/2000 MB/s, 400k/90k IOPS, 1 DWPD, MZILT7T6HALA (pool2),
2x Micron NVMe SSD 6.4 TB, TLC, 6200/3500 MB/s (cache for pool1).
4 servers.
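For orientation, a quick sketch of raw versus usable capacity for that layout. The replication factor (size=3) is an assumption based on Ceph's common default, not something stated above, and it assumes pool1 and pool2 are kept on separate device classes:

```python
# Rough usable-capacity estimate for the 4-node layout described above.
# Replication size=3 is an assumption (a common Ceph default), not a given.

NODES = 4
REPLICATION = 3

pools = {
    "pool1 (HDD)": {"per_node": 4, "tb_each": 20},
    "pool2 (SSD)": {"per_node": 3, "tb_each": 7.68},
}

for name, p in pools.items():
    raw_tb = NODES * p["per_node"] * p["tb_each"]
    usable_tb = raw_tb / REPLICATION
    print(f"{name}: raw {raw_tb:.1f} TB, usable ~{usable_tb:.1f} TB")
# pool1 (HDD): raw 320.0 TB, usable ~106.7 TB
# pool2 (SSD): raw 92.2 TB, usable ~30.7 TB
```

Note that if the NVMe drives serve as cache or DB/WAL devices for pool1, they do not add usable capacity to it. Also, with only 4 nodes and size=3, losing a node leaves little headroom for re-replication, so the OSDs should be kept well below full.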
 
