I have a Proxmox cluster (9 nodes, Dell R730s) with a 10GbE network dedicated to the Ceph backend and another 10GbE network for internal traffic.
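If it helps, in ceph.conf terms that corresponds to the usual public/cluster network split, something like the following (the subnets are placeholders, not my real addressing):

# /etc/pve/ceph.conf (relevant part; example subnets only)
[global]
    public_network  = 10.10.10.0/24   # VM / internal traffic
    cluster_network = 10.10.20.0/24   # dedicated Ceph backend (replication, recovery)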
I have a combination of machines with 3.5-inch bays and 2.5-inch bays. Each machine also has an NVMe drive (2TB Samsung 980 Pro), and I use a 4TB Samsung SSD as the boot drive.
I had initially read that Samsung Pro drives are reasonable for a Ceph cluster. This is more than home use, but less than a fully enterprise deployment (it's for my research lab, so if something goes down I am annoyed, but the world doesn't end).
I initially created 3 pools: the first was 1TB of NVMe on each of the 9 machines, the second was all standard SSDs (Samsung drives, mostly 1 or 2TB), and the third, on the 3.5" machines, was enterprise SATA drives (6TB).
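The pools are separated by device class with per-class CRUSH rules; the rule and pool names below are just illustrative, not necessarily exactly what I used:

# one replicated rule per device class
ceph osd crush rule create-replicated rule-nvme default host nvme
ceph osd crush rule create-replicated rule-ssd  default host ssd
ceph osd crush rule create-replicated rule-hdd  default host hdd

# point each pool at its rule
ceph osd pool set pool-nvme crush_rule rule-nvme
ceph osd pool set pool-ssd  crush_rule rule-ssd
ceph osd pool set pool-hdd  crush_rule rule-hdd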
For the non-NVMe drives, I put the WAL and DB on a separate NVMe partition I had set up. After more reading, it became clear that I really need enterprise SSDs with supercapacitors (power-loss protection), so I am now in the process of adding them in.
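Those OSDs were created roughly like this (device names are just examples, and the NVMe target was a partition/LV I had carved out for DB/WAL):

# data on the SATA device, RocksDB + WAL on the shared NVMe
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
# (equivalently with ceph-volume: ceph-volume lvm create --data /dev/sdb --block.db <nvme partition/LV>)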
Where I am getting a bit confused is how best to integrate/deploy them. Before I started adding the new enterprise drives (Samsung PM953 960GB), I ran
ceph tell osd.* bench
and the IOPS for each device were underwhelming, although apparently not shocking: the slow spinning disks were around 30 IOPS, the SATA SSDs were about 100 IOPS, and surprisingly the NVMe drives were only 200 - 300 IOPS. Since I was using the NVMe as the DB/WAL device, I am a bit confused why the IOPS are still so slow.
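For reference, this is the exact benchmark I was running. As far as I can tell, with no arguments it writes 1 GiB in 4 MiB blocks, so the numbers it reports are 4 MiB write IOPS; the second form is a smaller run with 4 KiB writes (osd.3 is just an example id):

# default: 1 GiB total, 4 MiB block size
ceph tell osd.* bench
# 100 MiB total, 4 KiB writes, if a small-block IOPS figure is more meaningful
ceph tell osd.3 bench 104857600 4096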
Now, I just installed the first enterprise SSD, and when I ran ceph tell osd.<newdevice> bench it was still only reporting about 500 IOPS on that device. I also did a test on a 1TB SATA hard drive (spinning disk) I had stuck in a machine: when I ran the benchmark after creating an OSD on it, I got ~20 IOPS. I then deleted the OSD, recreated it with the WAL on the enterprise Samsung drive, and got 21 IOPS. I deleted the OSD again, put both the DB and the WAL on the enterprise drive, re-benchmarked, and still got 20 IOPS.
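The sequence on that spinning disk looked roughly like this (OSD id and device names are examples; the recreated OSD doesn't necessarily keep the same id):

# plain HDD OSD
pveceph osd create /dev/sdc
ceph tell osd.12 bench                      # ~20 IOPS

# recreate with the WAL on the enterprise SSD
ceph osd out 12 && systemctl stop ceph-osd@12
pveceph osd destroy 12 --cleanup
pveceph osd create /dev/sdc --wal_dev /dev/sdd
ceph tell osd.12 bench                      # ~21 IOPS

# recreate again with the DB on the enterprise SSD (the WAL follows the DB device)
ceph osd out 12 && systemctl stop ceph-osd@12
pveceph osd destroy 12 --cleanup
pveceph osd create /dev/sdc --db_dev /dev/sdd
ceph tell osd.12 bench                      # ~20 IOPS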
So I am a bit at a loss. All I can conclude is that I am either not running the right benchmarks or I have configured something horribly wrong...
My end state would be the following:
1). Integrate 6 or 12TB enterprise SATA drives on some nodes:
--> Where/how should I configure these in terms of WAL and/or DB? Should at least the WAL be on an enterprise NVMe drive?
2). I have at least twenty 1TB Samsung non-enterprise SSDs. Maybe I should just use these in a ZFS pool or something and not bother with Ceph?
3). I have 9 x 2TB Samsung 980 NVMe drives (not enterprise). If I put the WAL on an enterprise drive, would these still be useful in the mix?


