PVE-Ceph, Adding multiple disks to an existing pool

I just want to be sure when you say "Data and storage networks are separated of course." we are referring to ceph front and ceph back are separate, on dedicated 25 GbE pairs, right?

What device class do you have on your existing pools? If a different device class is assigned to the new OSDs, there shouldn't be nearly as much data movement to chew on.
 
There's nothing "wrong" with your configuration. If it works for you, it works.

My only comment is that you have very few OSDs. Since I assume your guest count is small, this is probably OK, but more OSDs = more performance and resilience. Going from 2 OSDs/node to 4 is a great start.
I know the principle of horizontal scaling ;-)
Actually, I have 8 bays in every node. Historically, two of them held a pair of hard disks with an accompanying SSD for WAL/DB, and six bays held SSDs.
We are swapping the SATA SSDs in the other bays for more and larger SAS SSDs, ending up with 4 x SAS SSD per node and leaving some bays free for growth.
We could grow to more nodes, but those are equipped with slightly newer CPUs, which makes live migration a difficult exercise.
 
I just want to be sure when you say "Data and storage networks are separated of course." we are referring to ceph front and ceph back are separate, on dedicated 25 GbE pairs, right?
We are talking about Proxmox with Ceph (as this is a Proxmox forum), not a Ceph-only storage cluster. Sorry if this wasn't clear from the beginning.
So there are two bridges (each 2x 25 GbE LACP): one for the VM traffic (to the outside or between the VMs) and one for storage.
The storage bridge therefore carries ingress and egress (reads/writes from the VMs through the node running them) as well as the Ceph traffic between the nodes.
The interfaces do not look stressed.
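For reference, each bridge is just an LACP bond with the bridge on top. Per node it looks roughly like this in /etc/network/interfaces (interface names and addresses here are illustrative, not our exact config):

auto bond1
iface bond1 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
# storage bridge on top of the LACP bond
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0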
 
PVE-Ceph versus regular Ceph is not a relevant distinction for this discussion.

One of the most important best practices you might want to consider: your separation of the "storage" network from your production VM network is not sufficient. The storage network should be divided further into two networks of its own: a "public" network for mon, mgr, and RBD clients (the disks mounted in your VMs), and a "cluster" network dedicated to OSD replication, recovery, and rebalancing. Ideally each of those networks gets its own dedicated pair of physical links, so you would be running 4x 25 GbE in this example.
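To make that concrete, the split comes down to two entries in /etc/pve/ceph.conf; the subnets below are made-up examples, not taken from your setup:

[global]
        # "public" network: mon, mgr and RBD client traffic
        public_network = 10.10.10.0/24
        # "cluster" network: OSD replication, recovery and rebalancing
        cluster_network = 10.10.20.0/24

Bear in mind that changing these on a running cluster is not a trivial live operation; the daemons have to be restarted to pick the new addresses up.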

While we are in the neighborhood of best practices, many people will also create yet another dedicated link for an additional corosync ring.
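If you go that route, the extra ring is just an additional link per node in /etc/pve/corosync.conf, roughly like this (node names and addresses are illustrative):

totem {
  # existing totem options stay as they are
  interface {
    linknumber: 1
  }
}
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.30.11
    ring1_addr: 10.10.40.11   # the new dedicated corosync link
  }
  # ... one entry per node
}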

You may not always see high utilization on your "storage network", but the moment you are in a degraded or misplaced state, your RBD performance will be reduced while the system works to satisfy your CRUSH rule. Whatever controversy there may have been earlier about the particulars, all of that I/O comes at some cost.

In the Ceph world, hard disk, SSD, and SAS SSD are not necessarily device classes. As far as I know, you are limited to 3 choices (literally hdd, ssd, nvme) in the PVE GUI. While I believe you can create custom device classes, the nature of this discussion so far leads me to doubt that it has been done in your setup.
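You can check that yourself from any node; osd.12 and sas-ssd below are placeholders:

ceph osd crush class ls                          # list the device classes currently in use
ceph osd crush tree --show-shadow                # see which OSDs sit under which class
ceph osd crush rm-device-class osd.12            # an existing class must be removed before it can be changed
ceph osd crush set-device-class sas-ssd osd.12   # assign a (custom) class to an OSD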

So I'm not convinced you are getting what we are asking with respect to the pool-rule-class assignment relationship. Each pool has a CRUSH rule, and each CRUSH rule will OPTIONALLY specify the device class permitted to be used.

What we want to know is whether your CRUSH rules carry device class restrictions, because that determines how data will be distributed onto the new class as the cluster expands, or perhaps not redistributed at all.
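Purely as an illustration of that relationship (rule and pool names are placeholders, not a suggestion for your cluster):

ceph osd crush rule create-replicated sas-ssd-only default host sas-ssd   # replicated rule limited to the sas-ssd class, host failure domain
ceph osd pool set vm-pool crush_rule sas-ssd-only                         # point an existing pool at that rule

A pool whose rule carries no class restriction will happily spread data across every OSD it can reach, which is exactly why we are asking.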

Just run these commands and post the results.

cat /etc/pve/ceph.conf
ceph osd df tree
ceph osd crush rule dump
 