Scenario
I have a Proxmox VE 8.3.1 cluster with 12 nodes, using Ceph as distributed storage. The cluster consists of 96 OSDs, distributed across 9 servers with SSDs and 3 with HDDs. Initially my setup had only two servers with HDDs, and now I need to add a third HDD node so the hdd-pool can be brought in line with the rest of the cluster (3 replicas).
However, I'm not sure about the impact of this change; my google-fu wasn't strong enough to make me feel confident.
(Note: I have production VMs running.)
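For context, this is the rough sequence I'm planning on the new node once it has joined the cluster (the device paths below are placeholders, and I haven't run any of this yet):
Bash:
# install Ceph on the new node and create OSDs on its HDDs
pveceph install
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
# watch the backfill/rebalance before touching the pool itself
ceph -s
ceph osd df tree class hdd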
Questions and Help Request:
- What is the impact of this change?
- What would you recommend for my scenario?
Pool:
Bash:
Pool # | Name     | Size/min | # of PGs | Optimal # of PGs | Autoscaler Mode | CRUSH rule (ID)          | Used
3      | ssd-pool | 3/2      | 1024     | N/A              | On              | ssd-replicated-rule (1)  | 20.78 TiB (36.21%)
4      | hdd-pool | 2/2      | 512      | 512              | On              | hdd-replicated-rule (1)  | 114.83 TiB (48.47%)
6      | .mgr     | 3/2      | 1        | N/A              | On              | replicated_rule (0)      | 444.39 MiB (0.00%)
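(The pool table above is a summary of the GUI view; I assume the same information can be pulled with the commands below, and I can post that output instead if it's more useful.)
Bash:
pveceph pool ls
ceph osd pool ls detail
ceph osd pool autoscale-status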
The OSD tree and CRUSH map are in the attachments.
Configuration:
Bash:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = X.X.X.X/24
fsid = 52d10d07-2f32-41e7-b8cf-7d7282af69a2
mon_allow_pool_delete = true
mon_host = X.X.X.X X.X.X.X X.X.X.X X.X.X.X X.X.X.X X.X.X.X
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = X.X.X.X/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring
[mon.pve114]
public_addr = X.X.X.X
[mon.pve115]
public_addr = X.X.X.X
[mon.pve117]
public_addr = X.X.X.X
[mon.pve118]
public_addr = X.X.X.X
[mon.pve119]
public_addr = X.X.X.X
[mon.pve142]
public_addr = X.X.X.X
The configuration database and the Used (%) per device class (HDD and SSD) are shown in the attached screenshots.
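If the screenshots are hard to read, I can paste text output as well; I assume the CLI equivalents would be something like:
Bash:
# configuration database
ceph config dump
# utilization per device class
ceph df detail
ceph osd df tree class hdd
ceph osd df tree class ssd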
References:
https://docs.ceph.com/en/reef/rados/operations/pools/#setting-the-number-of-rados-object-replicas
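Based on that page, I assume the actual pool change, once the third HDD node is in and the cluster is back to HEALTH_OK, would just be the following (please correct me if there is more to it):
Bash:
ceph osd pool set hdd-pool size 3
ceph osd pool set hdd-pool min_size 2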
Any help from the community would be greatly appreciated!
I can provide logs or additional command outputs if needed.
Thanks in advance for your support!