Regarding Pool Size

Sean Png

Hi,

May I know whether the pool size is allocated automatically by the cluster? Can we utilize the maximum available space in the same pool, or will it remain the same as it is?

Looking forward to your reply.

Thank you.
 
Sorry, but I do not see the word 'pool' anywhere on my cluster. Can you please post a screenshot of what you mean?

If you mean the total storage resources, then this is the sum of all storage resources of your cluster. Even for a single system, it aggregates all storages; e.g. if you have ZFS and multiple storage types on top of ZFS, you will get the aggregated used and free space of all of them, yielding ridiculously high numbers.
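If you want to see exactly which storage entries are summed into that figure, the Proxmox VE CLI should list them (a quick sketch, assuming you run this as root on any cluster node; the maxdisk and disk fields are the totals and usage being aggregated):

pvesh get /cluster/resources --type storage    # every storage entry the Datacenter summary adds up
pvesh get /nodes/<node-name>/storage           # the same storages as seen from a single node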
 
Our Ceph cluster has about 76 TB of storage in total, and we have only 1 Ceph pool created on top of it. However, that Ceph pool shows only 59 TB total space, which sometimes leads to a "pool nearfull" warning. We have about 80 OSDs and use 2048 as the pg_num for the created pool. May I know if there is any way to make that single pool use the whole available Ceph size? I.e. if the total available Ceph space is 76 TB, the pool's total space should also be 76 TB.
 
Your available space from the OSDs is different from the space you can actually use. For example, if you have 8 disks of 2 TB each, it does not mean you can use the entire 16 TB. Since the usable capacity of a Ceph pool depends on the number of copies to be maintained, it is calculated from the pool's replication size: with a pool size of 3 you keep 1 original plus 2 copies, i.e. your total capacity is theoretically reduced to about one third, so out of 16 TB you can use roughly 5.3 TB.
So check your pool size and post it here.
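For reference, these are standard Ceph CLI commands for checking the replication settings and the per-pool capacity figures (replace <pool-name> with the name of your pool):

ceph osd pool get <pool-name> size       # replication factor (number of copies kept)
ceph osd pool get <pool-name> min_size   # minimum copies needed to keep serving I/O
ceph df                                  # raw capacity vs. per-pool STORED / USED / MAX AVAIL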
 
Please see attached. When checking the space from Ceph, it shows a total OSD space of 90.16 TiB, with 59.86 TiB used. Also, in Datacenter -> Summary, storage shows 33.17 TiB total and 24.47 TiB used. But when checking the pool size, it shows 59.71 TiB total. Please advise how the pool total is calculated, and why, compared with both the total OSD space and the actual available space, it shows an entirely different figure.
 

Attachments

  • screenshot-px-sg1-n1.readyspace.com_8006-2020.06.22-15_31_13.png
  • screenshot-px-sg1-n9.readyspace.com_8006-2020.06.22-15_34_05.png
  • screenshot-px-sg1-n9.readyspace.com_8006-2020.06.22-15_32_13.png
Hi,

Please have a look at the output of the following commands:

ceph df
===========================================================
RAW STORAGE:
    CLASS    SIZE        AVAIL      USED       RAW USED    %RAW USED
    ssd      100 TiB     39 TiB     60 TiB     60 TiB      60.50
    TOTAL    100 TiB     39 TiB     60 TiB     60 TiB      60.50

POOLS:
    POOL      ID    STORED    OBJECTS    USED      %USED    MAX AVAIL
    rbd-vm    1     20 TiB    5.72M      60 TiB    73.18    7.4 TiB
==============================================================


ceph -s
==================================================================
cluster:
    id:     a50e7cbf-25aa-4116-ba42-94b71230b53b
    health: HEALTH_WARN
            1 daemons have recently crashed

services:
    mon: 3 daemons, quorum px-sg1-n1,px-sg1-n2,px-sg1-n3 (age 100m)
    mgr: px-sg1-n1(active, since 6h), standbys: px-sg1-n2, px-sg1-n3
    osd: 89 osds: 89 up (since 6h), 89 in (since 6h)

data:
    pools:   1 pools, 2048 pgs
    objects: 5.72M objects, 21 TiB
    usage:   60 TiB used, 39 TiB / 100 TiB avail
    pgs:     2048 active+clean

io:
    client: 297 MiB/s rd, 10 MiB/s wr, 3.74k op/s rd, 992 op/s wr
====================================================================

ceph osd dump
==================================================================
epoch 80530
fsid a50e7cbf-25aa-4116-ba42-94b71230b53b
created 2019-03-19 17:07:45.059870
modified 2020-06-28 19:08:08.424931
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 638
full_ratio 0.95
backfillfull_ratio 0.93
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 1 'rbd-vm' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode warn last_change 75270 lfor 0/0/55247 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~a3,a5~2e,d4~3,d9~18,f2~4,fc~2,ff~2,103~20,124~7,12c~5,132~d,140~13,154~4,159~7]
max_osd 89
osd.0 up in weight 0.900024 up_from 55209 up_thru 80506 down_at 55177 last_clean_interval [54918,55176) v1:10.10.88.1:6824/525574 v1:10.10.88.1:6826/525574 exists,up 57b56727-1c13-4873-a041-09c67917f17a
osd.1 up in weight 1 up_from 55202 up_thru 80501 down_at 55177 last_clean_interval [54919,55176) v1:10.10.88.1:6839/525320 v1:10.10.88.1:6841/525320 exists,up 1598cf21-dc61-41f2-9de7-c5624d199e48
osd.2 up in weight 1 up_from 55186 up_thru 80153 down_at 55177 last_clean_interval [54918,55176) v1:10.10.88.1:6803/525204 v1:10.10.88.1:6805/525204 exists,up 6d3d7cc5-6b7b-4ba9-9d57-879590276cf1
osd.3 up in weight 0.950012 up_from 77578 up_thru 80507 down_at 77575 last_clean_interval [77400,77577) [v2:10.10.88.2:6832/372500,v1:10.10.88.2:6833/372500] [v2:10.10.88.2:6816/9372500,v1:10.10.88.2:6817/9372500] exists,up 4fb81786-6e41-4203-a842-19d64b5943b7
osd.4 up in weight 1 up_from 77579 up_thru 80306 down_at 77573 last_clean_interval [75245,77578) [v2:10.10.88.2:6840/372595,v1:10.10.88.2:6841/372595] [v2:10.10.88.2:6828/3372595,v1:10.10.88.2:6829/3372595] exists,up ff153fdb-4046-43bd-85e6-1a09a7291b0d
osd.5 up in weight 0.92424 up_from 77581 up_thru 80156 down_at 77577 last_clean_interval [77400,77580) [v2:10.10.88.2:6808/372351,v1:10.10.88.2:6809/372351] [v2:10.10.88.2:6810/5372351,v1:10.10.88.2:6811/5372351] exists,up 5e7409df-a5be-47c8-87aa-5df44fd525d1
osd.6 up in weight 1 up_from 79527 up_thru 80516 down_at 0 last_clean_interval [0,0) [v2:10.10.88.3:6800/28174,v1:10.10.88.3:6801/28174] [v2:10.10.88.3:6802/28174,v1:10.10.88.3:6803/28174] exists,up 9c671f0e-5cec-4045-9263-4e184537a94d
osd.7 up in weight 1 up_from 79533 up_thru 80292 down_at 0 last_clean_interval [0,0) [v2:10.10.88.3:6808/28771,v1:10.10.88.3:6809/28771] [v2:10.10.88.3:6810/28771,v1:10.10.88.3:6811/28771] exists,up 81b008d7-ce24-4908-bb9b-e339efe0e252
osd.8 up in weight 1 up_from 79539 up_thru 80136 down_at 0 last_clean_interval [0,0) [v2:10.10.88.3:6816/29420,v1:10.10.88.3:6817/29420] [v2:10.10.88.3:6818/29420,v1:10
=========================================================================================================================
 
Hi,

Please see attached. When checking the space from Ceph, it shows a total OSD space of 90.16 TiB, with 59.86 TiB used.
This is the used and total size of your placement group map.

Also, in Datacenter -> Summary, storage shows 33.17 TiB total and 24.47 TiB used.
This is the sum of all storages for the whole cluster, and the data usage from the Ceph pool is 20 TiB, as you can see in the ceph df output (the STORED field is the one without replication). So I suppose this is correct as well.

But when checking the pool size, it shows 59.71 TiB total.
This is the amount of raw usage of the placement groups. I agree that calling it "Total" is a bit misleading, but note that it is shown below "Used".
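To make the relation between those numbers concrete, here is a rough sketch using the figures from the ceph df output above and the pool's replication size of 3 from the ceph osd dump:

# STORED is the client data before replication:      20 TiB
# USED is roughly STORED x replication size:         20 TiB x 3 = 60 TiB
# MAX AVAIL is projected from the fullest OSDs and the full_ratio (0.95 here),
# which is why it ends up lower than the naive AVAIL / 3 = 39 TiB / 3 = 13 TiB
ceph df        # per-pool STORED / USED / MAX AVAIL
ceph osd df    # per-OSD utilization; imbalance here is what pushes MAX AVAIL down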
 
