Second CEPH pool for SSD

Senin

Hi!

I'm currently running a 3-node PVE cluster with a Ceph pool backed by HDDs (80 TB total).
Now I've added 2 SSDs to each node and want to create a second, separate pool (10 TB total).
I read https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_device_classes but some things are not clear to me.

As I understand it, I need two CRUSH rules: one for the hdd class and one for the ssd class.
Then I have to assign these rules to the Ceph pools.
But what should I do with the existing pool?

The documentation says:
"If the pool already contains objects, these must be moved accordingly. Depending on your setup, this may introduce a big performance impact on your cluster. As an alternative, you can create a new pool and move disks separately."

So I'm not sure how I can do this in a production environment.
Any help?
 
Thank you for the quick answer.

Are there any possible performance issues?

Anyway, I think I should do this over the weekend.
 
Are there any possible performance issues?
There will be a rebalance. If your cluster is already running at its performance limit, this might push it past the point where you see performance issues. But if you are at that point, then it is high time to upgrade the cluster or reduce the load, as it should be able to handle such situations. Otherwise, a recovery situation due to some failure might cause the same issues, and those are harder to plan for ;)

You can also change the rule of a pool in the web UI when you edit the pool.
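On the CLI, the whole thing boils down to a few commands, roughly like this (the rule and pool names are just examples, adjust to your setup):

Code:
# one rule per device class: <name> <root> <failure domain> <class>
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# point the existing pool at the hdd rule; this triggers the rebalance
ceph osd pool set <your-existing-pool> crush_rule replicated_hdd

# create the new SSD-backed pool with the ssd rule
pveceph pool create <your-ssd-pool> --crush_rule replicated_ssd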
 
OK, I finally got a window to assign the new rule to the existing pool.
The rebalance took about 6 days, but I didn't notice any performance issues.
The new rule and the new pool for the ssd class were added without any problem.

So the process is pretty easy.
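In case it helps anyone, you can keep an eye on the rebalance with something like:

Code:
# overall health and recovery/rebalance progress
ceph -s

# capacity and usage per OSD, grouped by device class
ceph osd df tree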
 
I have a similar situation, but instead of adding SSD disks to the existing servers, I am thinking of adding new Ceph storage nodes with dedicated NVMe drives, maybe 5 nodes.

I wonder if the process would be the same?
 
I have a similar situation, but instead of adding SSD disks to the existing servers, I am thinking of adding new Ceph storage nodes with dedicated NVMe drives, maybe 5 nodes.

I wonder if the process would be the same?
If they are part of the same cluster, then yeah, assign different device classes. Make sure to have enough nodes for the new device class. 5 nodes does sound reasonable ;)
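The new OSDs should get the nvme class assigned automatically, but if not, you can set it by hand. A rough sketch (osd.12 is just an example ID):

Code:
# verify the detected device classes
ceph osd tree

# reassign the class manually if needed
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class nvme osd.12

# then a dedicated rule and pool, same pattern as for ssd
ceph osd crush rule create-replicated replicated_nvme default host nvme
pveceph pool create <your-nvme-pool> --crush_rule replicated_nvme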