Given the disks in my servers, how should I reasonably set up Ceph?

zhousp666

New Member
Jan 6, 2024
I have 5 servers, each with 4 × 3.2 TB NVMe drives and 24 × 8 TB SATA drives. This is my first time setting up a Ceph cluster. What is the best way to lay out the OSDs? For instance, how should I set up the DB and WAL devices, and how should the pools be configured?
These are all brand-new servers, with Proxmox VE 8.1 freshly installed.

I tried using individual SATA drives as VM storage within Proxmox VE, but read/write speeds are only around 100 MB/s. Is there a way to improve this? Since multiple virtual machines will later be run for employees, I'm concerned this will hurt performance.


[Attachment: Snipaste_2024-04-08_11-29-53.png]

[Attachment: Snipaste_2024-04-08_15-14-02.png]
 
The screenshot only shows 2 NVMe.

If you really have 4, use them as DB/WAL devices for the HDDs plus an additional OSD each. If possible, use the NVMe controller to create 2 namespaces on each NVMe; otherwise use LVM. Make the DB/WAL volume 70 G for each HDD, 6 of these on each NVMe, and use the rest for an NVMe OSD.
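A minimal sketch of the LVM variant of that layout, assuming /dev/nvme0n1 is one of the 3.2 TB NVMes and /dev/sda through /dev/sdf are six of the 8 TB HDDs (all device and volume-group names here are hypothetical; on Proxmox VE you could also let `pveceph osd create --db_dev ...` carve the DB volumes for you):

```shell
# Carve six 70G DB/WAL logical volumes out of one NVMe with LVM,
# leaving the remainder for a pure NVMe OSD.
pvcreate /dev/nvme0n1
vgcreate ceph-nvme0 /dev/nvme0n1
for i in 0 1 2 3 4 5; do
    lvcreate -L 70G -n db-$i ceph-nvme0
done
lvcreate -l 100%FREE -n osd-block ceph-nvme0   # rest of the NVMe

# Create the HDD OSDs with their RocksDB/WAL on the NVMe LVs:
i=0
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    ceph-volume lvm create --data "$dev" --block.db ceph-nvme0/db-$i
    i=$((i + 1))
done

# The leftover LV becomes an all-flash OSD:
ceph-volume lvm create --data ceph-nvme0/osd-block
```

Repeat per NVMe/HDD group. Putting the DB (which also holds the WAL by default) on flash is what lifts small-write and metadata performance for the HDD OSDs.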
 
Can I split one NVMe into two partitions for DB/WAL, with each SATA OSD using a partition on that NVMe for its DB/WAL?
The remaining 3 NVMe drives would then be used separately.

If allocated this way, what proportion of the 3.2 TB NVMe should go to the DB/WAL partitions?
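As a rough arithmetic check, using the 70 G-per-HDD figure from the reply above (note this says nothing about the blast radius: if that single NVMe dies, every HDD OSD behind it fails with it):

```shell
# Hypothetical sizing check: 24 HDD OSDs sharing one ~3200 G NVMe for DB/WAL.
HDDS=24
DB_G=70        # per-HDD DB/WAL size suggested in the reply above
NVME_G=3200

echo "total DB/WAL needed: $(( HDDS * DB_G )) G"        # 1680 G
echo "per-HDD share if split evenly: $(( NVME_G / HDDS )) G"   # 133 G
```

So 24 × 70 G = 1680 G fits in one 3.2 TB NVMe (roughly half of it), but spreading the DB/WAL volumes across all 4 NVMes, as suggested above, limits how many OSDs a single NVMe failure takes down.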