Adding new Ceph node to existing cluster

fcukinyahoo

New Member
Nov 29, 2012
I have Ceph and Proxmox running without any problems. Now I'm considering future scenarios, such as running out of space on the current Ceph cluster.

1. Creating a new node with drives and adding it to the existing cluster makes sense. However, how do we add the new drives in the new node to the already existing, in-use pool? The pool has been defined and is in use, and I don't see an option to edit it in the Proxmox GUI. I could go via the CLI, but I couldn't find any article on how to do it the Proxmox-friendly way.

2. Assuming I successfully added the new node to the existing cluster, and the new drives to the existing pool, how does the rebalance work? Don't I have to update pool settings such as "size", "pg_num", etc.?

To summarize, I am trying to find out how to grow over time. Adding a new Ceph node seems clear, but how the new drives are added to an already working cluster and pool is the unknown. Can someone here clarify it or point us to a good source? I have been reading the Ceph documentation, but I am not sure whether it applies to our Proxmox-specific case, even though the underlying technology does not change on the Ceph side.

Thanks in advance.
 
> I have Ceph and Proxmox running without any problems. Now I'm considering future scenarios, such as running out of space on the current Ceph cluster.
>
> 1. Creating a new node with drives and adding it to the existing cluster makes sense. However, how do we add the new drives in the new node to the already existing, in-use pool? The pool has been defined and is in use, and I don't see an option to edit it in the Proxmox GUI. I could go via the CLI, but I couldn't find any article on how to do it the Proxmox-friendly way.
Hi,
with a default CRUSH map, the use of nodes/OSDs is pool-independent. If you add an additional node with OSDs to your Ceph cluster, the data of all pools will be rebalanced across the whole cluster.

And Proxmox does nothing special here; the official Ceph docs apply.
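For reference, the steps on the new node look roughly like this (a sketch; the device names are examples, and on older Proxmox releases the subcommand is `pveceph createosd` instead of `pveceph osd create`):

```shell
# On the new node, after it has joined the Proxmox cluster:
# install the Ceph packages
pveceph install

# create an OSD on each empty disk intended for Ceph
# (check your actual device names with lsblk first)
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc

# verify the new OSDs show up under the new host in the CRUSH map
ceph osd tree
```

As soon as the OSDs are up and in, Ceph starts moving data onto them; no per-pool configuration is needed for that.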
> 2. Assuming I successfully added the new node to the existing cluster, and the new drives to the existing pool, how does the rebalance work?
That part is done by Ceph automatically.
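You can watch the rebalance from any cluster node, for example:

```shell
# one-shot cluster status: health, PG states, recovery/backfill progress
ceph -s

# follow status updates continuously while data is moving
ceph -w

# per-OSD utilization; usage should even out across OSDs over time
ceph osd df
```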
> Don't I have to update pool settings such as "size", "pg_num", etc.?
pg_num and pgp_num are the only values that may need adjusting here (see PGCalc on the Ceph website). But changing these values produces massive I/O (depending on how full your cluster is), and you can only increase them, never decrease!
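A sketch of such an adjustment, assuming a pool named "rbd" and a target of 256 PGs (take your real target from PGCalc; it should be a power of two):

```shell
# check the current values for the pool
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num

# raise them; this triggers heavy data movement, so do it
# during a maintenance window and watch progress with "ceph -s"
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
```

The pool's "size" (replica count) is independent of capacity growth and normally stays as it is when you add nodes.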

Udo
 
