Is it worth building Ceph OSDs on large-capacity disks?

emptness

Member
Aug 19, 2022
Hi!
I have been reading forum threads on this question.
Some people say that using large-capacity disks for Ceph OSDs is highly undesirable, for example 20 TB drives.
Can anyone share their experience with this?
We are building a cluster of powerful servers (Intel Xeon 5320, 512 GB RAM) with a 100 Gbit/s network. Is this a critical issue for us or not?
 
I assume you mean 20TB HDDs?

In that case you won't get acceptable performance for almost anything, especially VM/CT workloads, by having just a few of those.
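To put rough numbers on that, here is a back-of-the-envelope sketch. The per-disk figure of ~150 random IOPS for a 7,200 rpm HDD and the size=3 replication are assumptions, not measurements of your hardware:

# Rough estimate of random-write IOPS for an HDD-backed Ceph pool.
# Assumptions (not measured): ~150 random IOPS per 7200 rpm HDD,
# 3x replication, no SSD/NVMe DB/WAL offload.

HDD_IOPS = 150          # assumed per-disk random IOPS
HDDS_PER_NODE = 4
NODES = 4
REPLICATION = 3         # size=3 pool: each client write hits 3 OSDs

raw_iops = HDD_IOPS * HDDS_PER_NODE * NODES
client_write_iops = raw_iops / REPLICATION

print(f"Raw backend IOPS:         ~{raw_iops}")
print(f"Usable client write IOPS: ~{client_write_iops:.0f}")
# -> roughly 2400 backend / ~800 client write IOPS for the whole cluster,
#    which a handful of busy VMs can easily saturate.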

What kind of setup are you planning?
Which disks are you currently looking at?
How many disks per node?
How many nodes?
 
Our configuration is:
2x Intel Xeon Gold 6336Y,
512 GB RAM,
dual-port 100 Gbit/s NIC,
2x 480 GB SSD for the OS,
4x 20 TB HDD for pool1,
3x Samsung 7.68 TB SAS3 12 Gbit/s SSD (TLC, 2100/2000 MB/s, 400k/90k IOPS, 1 DWPD, MZILT7T6HALA) for pool2,
2x Micron 6.4 TB NVMe SSD (TLC, 6200/3500 MB/s) as cache for pool1.
4 servers.
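For reference, a quick usable-capacity sketch for that layout. It assumes 20 TB HDDs, size=3 replication on both pools, the NVMe drives used as DB/WAL rather than as their own data pool, and the common advice of keeping pools below ~80% full; all of these are assumptions about the intended setup:

# Quick usable-capacity sketch for the 4-node layout described above.
NODES = 4
REPLICATION = 3         # assumed size=3 on both pools
FULL_RATIO = 0.8        # headroom for rebalancing/recovery

hdd_raw = NODES * 4 * 20.0      # 4x 20 TB HDD per node
ssd_raw = NODES * 3 * 7.68      # 3x 7.68 TB SAS SSD per node

for name, raw in (("pool1 (HDD)", hdd_raw), ("pool2 (SSD)", ssd_raw)):
    usable = raw / REPLICATION * FULL_RATIO
    print(f"{name}: {raw:.1f} TB raw -> ~{usable:.1f} TB usable")
# pool1 (HDD): 320.0 TB raw -> ~85.3 TB usable
# pool2 (SSD): 92.2 TB raw  -> ~24.6 TB usable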