Expanding a Ceph cluster?

fips

Renowned Member
May 5, 2014
Hi,

I am currently running a 4-node Ceph cluster with a 3/2 pool; 3 nodes are monitors with OSDs and 1 holds just OSDs.
Now I want to add another node and make both non-monitor nodes monitors as well, but stay with the 3/2 pool to keep the ability to survive 2 failing nodes.

Does the 5th node need to have the same amount of OSD disk space as the other nodes (e.g. 2 TB) from the beginning?
Or, if I add just one 480 GB OSD, will Ceph write it completely full?

My plan was to add disks bit by bit so as not to burn my budget in one week ;-)
 
Now I want to add another node and make both non-monitor nodes monitors as well, but stay with the 3/2 pool to keep the ability to survive 2 failing nodes.
You don't need to add more MONs; it will just result in more resource overhead for little to no gain.
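As a side note, you can check how many MONs you currently have and whether they are in quorum with the standard Ceph CLI (a quick sketch; run it on any cluster node):

# list the current monitors and which of them form the quorum
ceph mon stat

# overall cluster health, including MON-related warnings
ceph -s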

Does the 5th node need to have the same amount of OSD disk space as the other nodes (e.g. 2 TB) from the beginning?
It is always advisable to have an even distribution of OSDs and their sizes across all nodes.

Or, if I add just one 480 GB OSD, will Ceph write it completely full?
No, but your usable cluster capacity may not grow by the full 480 GB. Writes are distributed according to the CRUSH weights of the nodes and OSDs, and Ceph will fill an OSD only up to the full ratio (95% by default); once an OSD hits that limit, the pools involving it stop accepting writes and effectively become read-only.
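If you want to see how full each OSD is and where the thresholds sit, something like the following should work (standard Ceph CLI; the 95% default full ratio can differ depending on your configuration):

# per-OSD utilization together with the CRUSH weights
ceph osd df tree

# show the configured nearfull/backfillfull/full ratios
ceph osd dump | grep ratio

# optionally lower the nearfull threshold to get warnings earlier, e.g.:
ceph osd set-nearfull-ratio 0.80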

My plan was to add disks bit by bit so as not to burn my budget in one week ;-)
It is easier to add another server with the same capacity as the others, so that you have enough spare space in the event of a disaster.
 
You don't need to add more MONs; it will just result in more resource overhead for little to no gain.

But what if one of the MONs dies? Do I just make the 4th node a monitor then?

Note: All my cluster nodes have just a small single boot disk. I wanted to increase redundancy by adding a 5th node, even if I don't need the additional performance or disk space yet.
 
But what if one of the MONs dies? Do I just make the 4th node a monitor then?
Two MONs would need to die before you lose quorum; with only one MON down, you can still access the data.
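You can verify this yourself: MONs need a strict majority, so with 3 MONs, 2 must be alive (one failure tolerated), and with 5 MONs, 3 must be alive (two failures tolerated). A quick check with the standard CLI:

# show which monitors are currently part of the quorum
ceph quorum_status --format json-pretty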

Note: All my cluster nodes have just a small single boot disk. I wanted to increase redundancy by adding a 5th node, even if I don't need the additional performance or disk space yet.
Redundancy for what?
 
Your pool has size 3 / min_size 2. This means Ceph will always try to keep 3 copies and will still serve data as long as at least two copies are left. So adding another node will not produce more copies, it just distributes them across more nodes. But the more nodes you add, the more nodes can fail while Ceph is still able to return to a healthy state.
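To inspect or set the replication parameters mentioned above, something along these lines should work ('mypool' is just a placeholder for your pool name):

# show the current replica counts of a pool
ceph osd pool get mypool size
ceph osd pool get mypool min_size

# keep 3 copies, keep serving I/O as long as 2 copies are available
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2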
 
