All-NVMe Cluster with Ceph

tjk

Member
May 3, 2021
I am working with a prospect to build out a new Proxmox cluster and looking at using Ceph with all-NVMe drives.

Is there guidance/best practices on this sort of setup, in terms of how much RAM to plan for Ceph overhead per node, Ceph setup with all NVMe, Ceph setup for erasure coding (EC) vs. replicas, etc.?

Edit: Thinking of the following config per node:

10x 3.84 TB NVMe drives, 2x SSD for OS/boot, 2x 10G for internet-facing traffic, 2x 10G for a dedicated storage network.
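As a rough sketch of the RAM question for this config, assuming Ceph's default `osd_memory_target` of 4 GiB per BlueStore OSD and one OSD per drive (the MON/MGR and OS allowances below are hypothetical placeholders, not measured values):

```python
# Back-of-envelope RAM reservation for one hyperconverged Ceph node.
# Assumes Ceph's default osd_memory_target (4 GiB per OSD daemon);
# actual usage can spike above the target during recovery/backfill.

osds_per_node = 10        # 10x 3.84 TB NVMe, one OSD per drive
osd_memory_target = 4     # GiB, Ceph's default for BlueStore OSDs
mon_mgr_overhead = 5      # GiB, hypothetical allowance for MON/MGR daemons
os_overhead = 4           # GiB, hypothetical allowance for the OS itself

ceph_ram = osds_per_node * osd_memory_target + mon_mgr_overhead
total_reserved = ceph_ram + os_overhead

print(f"RAM reserved for Ceph: {ceph_ram} GiB")          # 45 GiB
print(f"Total reserved before VMs: {total_reserved} GiB")  # 49 GiB
```

Whatever is left after that reservation is what you can safely commit to VM memory on the node.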
 
Sure. The use case is heavy I/O workloads: each VM will have 16-32 GB RAM and 4 to 8 cores assigned, typically 80% read / 20% write. We're planning on at least 5 nodes, growing to 20-30 nodes.

No spinning drives, all 3.84 TB NVMe drives.

What other info will help?

Edit - Workloads are pretty critical and HA is critical. I build these out all day long using HA NFS; this would be our first Ceph cluster. Distributed storage scares me, to be honest, since in my experience when it fails, it fails miserably. This would be our first production Ceph deployment, and since it is all NVMe I feel better about it, as I've seen Ceph perform pretty badly with spinning disks.
 
That still doesn't really describe anything; the more work you put into planning, the more you can hope to end up with a solution that will serve the purpose.

1. How many VMs are you deploying?
2. What is the nature of "heavy I/O"? That might mean different things to me than to you. Application, public traffic type, and private traffic in IOPS would probably suffice.
3. Corollary to the above: 10 Gbit for Ceph traffic may be insufficient.
4. Do you intend to run hyperconverged (i.e., use the same nodes for both compute and storage)?
5. What does "HA is critical" mean? What is the consequence, in dollars, per minute of downtime? That should help determine equipment type and scale.
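To put point 3 in numbers, here is a minimal sketch assuming size=3 replication and the 500 MB/s aggregate client write figure as a purely hypothetical load (Ceph's primary OSD forwards each write to the other replicas over the cluster network):

```python
# Back-of-envelope check on whether a 10 Gbit link is enough for Ceph
# replication traffic. Assumes size=3 replication; the client write
# rate is a hypothetical example, not a measured workload.

link_gbit = 10
link_bytes_per_s = link_gbit * 1e9 / 8   # 10 Gbit/s = 1.25 GB/s

replica_count = 3
client_write_mb_s = 500                  # hypothetical aggregate VM writes per node

# Each client write is received by the primary OSD, which forwards it
# to (replica_count - 1) secondaries over the cluster network.
cluster_write_mb_s = client_write_mb_s * (replica_count - 1)

utilization = cluster_write_mb_s * 1e6 / link_bytes_per_s
print(f"Cluster-network write traffic: {cluster_write_mb_s} MB/s")  # 1000 MB/s
print(f"Utilization of one 10G link: {utilization:.0%}")            # 80%
```

At 80% utilization from replication alone, before recovery or backfill traffic, a single 10G link is already marginal, which is why 25G is commonly recommended for all-NVMe Ceph clusters.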

I've seen Ceph perform pretty bad with spinning disks.
Without a use case, "pretty bad" means basically nothing. Spinning disks are a perfectly valid solution for a scale-out filesystem for large asset types.
 
