[ask] best practice to optimize max available storage in ceph

n0bie

Member
Dec 28, 2021
I am trying the latest Ceph version under Proxmox 7.1, running in VirtualBox. We use 3 nodes.

root@ceph1:~# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 44 GiB 41 GiB 3.0 GiB 3.0 GiB 6.75
ssd 30 GiB 28 GiB 1.8 GiB 1.8 GiB 6.07
TOTAL 74 GiB 69 GiB 4.8 GiB 4.8 GiB 6.47

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 8.4 KiB 5 25 KiB 0 21 GiB
ceph-pool 2 32 1.3 GiB 373 4.1 GiB 5.99 21 GiB
cephfs_data 3 114 0 B 0 0 B 0 8.8 GiB
cephfs_metadata 4 32 1.2 MiB 23 3.7 MiB 0.01 8.8 GiB

cephfs2_data 6 32 0 B 0 0 B 0 21 GiB
cephfs2_metadata 7 32 52 KiB 22 244 KiB 0 21 GiB
cephfs3_data 8 32 0 B 0 0 B 0 21 GiB
cephfs3_metadata 9 32 75 KiB 22 312 KiB 0 21 GiB
ceph-pool-ssd 16 32 0 B 0 0 B 0 8.8 GiB

There are 3 OSDs (SSD), one each under node1, node2 and node3, each 10 GB.

Yes, the total is 3 x 10 GB = 30 GB,
but when we check ceph-pool-ssd, MAX AVAIL is only 8.8 GiB.

We use the default size/min_size of 3/2 (it seems the data is replicated 3 times across 3 different nodes).
PG autoscale mode: on


So with 3 OSDs totaling 30 GB, the available space is only around 8.8 GiB (cephfs is empty, and cephfs_metadata uses only 3.7 MiB).
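The numbers above roughly add up once replication is taken into account. The following is a simplified sketch of the arithmetic, not Ceph's actual algorithm (which works per-OSD and per-CRUSH-rule): with size=3, every object is stored on 3 OSDs, so usable space is about one third of the raw space, after subtracting what is already used and applying the full ratio (0.95 by default). The function name and the 0.95 default here are illustrative assumptions.

```python
# Simplified estimate of MAX AVAIL for a replicated Ceph pool.
# NOT Ceph's exact computation -- Ceph evaluates each OSD separately --
# but it shows why 30 GiB of raw SSD space yields roughly 8.8 GiB usable.

def max_avail_gib(raw_total_gib, raw_used_gib, size, full_ratio=0.95):
    """Rough MAX AVAIL: usable raw space divided by the replica count."""
    usable_raw = raw_total_gib * full_ratio - raw_used_gib
    return usable_raw / size

# 3 SSD OSDs x 10 GiB = 30 GiB raw, 1.8 GiB already used (see `ceph df`),
# replicated pool with size=3:
estimate = max_avail_gib(30, 1.8, 3)
print(round(estimate, 1))  # 8.9 -- close to the 8.8 GiB reported by `ceph df`
```

So the ~8.8 GiB figure is expected behavior for a 3/2 replicated pool on 30 GiB of raw SSD storage, not a misconfiguration.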


Any idea what the best practice is for setting size/min_size and the PG autoscale mode? Normally we use the default values. Even with 10 nodes, is it okay to still use the defaults?
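For reference, these are the standard commands to inspect the current replication and autoscaler settings (run on any node with a MON). The commented-out lines show how size could be lowered to reclaim space, but this is generally discouraged: with size=2/min_size=1, losing a single node can block I/O or lose data.

```shell
# Inspect the current replication settings of the pool in question.
ceph osd pool get ceph-pool-ssd size
ceph osd pool get ceph-pool-ssd min_size

# Show what the PG autoscaler is doing for each pool.
ceph osd pool autoscale-status

# Lowering replication would raise MAX AVAIL (~13 GiB at size=2),
# but is NOT recommended for data you care about:
# ceph osd pool set ceph-pool-ssd size 2
# ceph osd pool set ceph-pool-ssd min_size 1
```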
