Ceph and local cache on SSD

zeuxprox

Renowned Member
Dec 10, 2014
Hello,

We are planning to create two clusters:
  • Cluster 1: Compute Nodes (Proxmox 5.2);
  • Cluster 2: Storage Nodes (latest version of Ceph) with initially 32 TB of storage on HDDs and at least 4 TB on SSDs for mixed workloads.
Questions are marked inline below.
Cluster 1


3 x Compute Node
  • CPU: 2 x Intel Xeon Gold 5120
  • RAM: 192 GB DDR4 ECC Registered
  • NIC 10Gb: 4 x 10 Gb SFP+ (2 Cards)
  • NIC 1 Gb: 4 x 1 Gb RJ45 (1 Card)
  • Storage (Boot disks): 2 x 64 GB SATA DOM in RAID 1 via ZFS
  • Cache SSD: 1 x 1 TB (any advice about model and type?)


Cluster 2

3 x Ceph Monitors (MON):
  • CPU: 1 x Intel Xeon Silver 4110 OR 1 x Intel Xeon E5-1650V4 - which is better for Ceph MON?
  • RAM: 64 GB DDR4 ECC Registered
  • NIC 10Gb: 2 x 10 Gb SFP+ (2 Cards)
  • NIC 1 Gb: 4 x 1 Gb RJ45 (1 Card)
  • Storage (Boot disks): 2 x 32 GB SATA DOM in RAID 1 via ZFS
3 x Ceph OSD node:
  • CPU: 1 x Intel Xeon Silver 4110 OR 1 x Intel Xeon E5-1650V4 - which is better for Ceph OSD node?
  • RAM: 96 GB DDR4 ECC Registered
  • Controller HBA: 2 x AOC-S3008L-L8E (Supermicro) - is it good?
  • Storage (Boot disks): 2 x 128 GB SATA DOM in RAID 1 via ZFS
  • HDD 32 TB total: 4 x HGST 8 TB He10 4Kn Format (HUH721008AL4200) - any advice about HDDs?
  • SSD 4 TB total for mixed workload: any advice about number, model, type and size of SSD disks ?
  • NIC 10Gb: 4 x 10 Gb SFP+ (2 Cards)
  • NIC 1 Gb: 4 x 1 Gb RJ45 (1 Card)

Now I would like, if possible, to cache reads and writes on the local SSD in each Proxmox host of Cluster 1, and have the system automatically commit the writes to the Ceph cluster. In other words, I would like something like VirtuCache for VMware (http://virtunetsystems.com/host-side-caching-software-virtucache/virtucache-technology/). To guarantee consistency, can I replicate the writes held in the local SSD cache to a different Proxmox server?
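Proxmox has no built-in VirtuCache equivalent, but a generic Linux block-layer cache such as bcache can sit between a krbd-mapped image and the VM. A minimal sketch, assuming the local SSD is /dev/sdb and the mapped RBD image is /dev/rbd0 (both device names, and the choice of bcache itself, are assumptions):

    # make the SSD a cache device and the RBD image a backing device
    make-bcache -C /dev/sdb
    make-bcache -B /dev/rbd0
    # register both, then attach the cache set (UUID from bcache-super-show /dev/sdb)
    echo /dev/sdb  > /sys/fs/bcache/register
    echo /dev/rbd0 > /sys/fs/bcache/register
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    # writethrough only accelerates reads, but never holds data Ceph does not have;
    # writeback is faster, but unflushed writes are lost if the host dies
    echo writethrough > /sys/block/bcache0/bcache/cache_mode

As far as I know, bcache cannot replicate its dirty cache to a second Proxmox server, so with this kind of approach the consistent option is writethrough mode, where Ceph always holds every acknowledged write.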

Thank you very much
 
Hard to even address the question. Let's try it from the other direction:

1. How many virtual resources (VMs, CTs) will you need to host? How many resources do they require, and how much overprovisioning?
2. What is your desired fault tolerance at each failure domain? What is the minimum required performance for each virtual disk device?

Any "advice" you'll get before defining your workload will be wrong.
 
Hi,
Initially we will have about 150-180 VMs/CTs, some of which will be very write-intensive (databases). Cluster 2 (Ceph) will use replica 3, so I can lose two nodes without losing data. As soon as possible I will add at least one more node.
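For reference, this is how replica 3 would be set on a Ceph pool; the pool name vm-pool is an assumption:

    ceph osd pool set vm-pool size 3
    ceph osd pool set vm-pool min_size 2   # below 2 live copies, I/O pauses (data stays intact)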

Thank you
 
Sorry for warming up this old thread, but it is appropriate for my question.

I have the same problem: I wanted to use SSDs to speed up the poor random read and write performance of the HDDs.
I've done some experimenting with Ceph (command sketches for both attempts follow the list).
1. Using an NVMe drive as the DB/WAL device. The actual measured 4K random read/write performance was only a few hundred IOPS, where a VMware vSAN cluster would deliver tens of thousands of IOPS.
2. Deploying cache tiering according to the official Ceph documentation (maybe I'm not doing it right). The actual results were even worse, at just over 100 IOPS.
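For context, a minimal sketch of the commands each experiment boils down to; the device paths (/dev/sdc, /dev/nvme0n1p1) and pool names (rbd-data, rbd-cache) are assumptions:

    # 1. Create an OSD with its RocksDB/WAL offloaded to the NVMe device
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1

    # 2. Put an SSD-backed cache pool in front of the HDD-backed pool
    ceph osd tier add rbd-data rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd-data rbd-cache
    ceph osd pool set rbd-cache hit_set_type bloom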


(base) [root@localhost fio-cdm]# ./fio-cdm
tests: 5, size: 1.0GiB, target: /root/fio-cdm 2.0GiB/35.0GiB
|Name        | Read(MB/s) | Write(MB/s)|
|------------|------------|------------|
|SEQ1M Q8 T1 |     101.03 |      14.06 |
|SEQ1M Q1 T1 |     110.98 |      11.40 |
|RND4K Q32T16|       3.27 |       0.47 |
|. IOPS      |     797.90 |     115.13 |
|. latency us|  605658.39 |  2807209.47|
|RND4K Q1 T1 |       0.83 |       0.08 |
|. IOPS      |     203.36 |      20.27 |
|. latency us|    4909.84 |    49308.57|
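For anyone who wants to cross-check a single row with plain fio instead of the fio-cdm wrapper, a rough equivalent of the RND4K Q1 T1 read test (the target file name is an assumption):

    fio --name=rnd4k-q1t1 --filename=/root/fio-test.bin --size=1G \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --runtime=30 --time_based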
 
