But after assigning the new rule to the pool it reduced the space: before assigning the rule it was showing 13G, after assigning the rule it is showing 6.6G.

Yes, of course. Run ceph df, it should show you that each class has its own total now, as data will only be distributed onto the respective class. Please use CODE tags for posting command output, it will keep the formatting. You can find them under the three dots (...) in the edit window.
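As an aside, a class-restricted pool is typically set up along these lines; the rule name, pool name, root and failure domain below are placeholders, not taken from this thread. Once the pool's CRUSH rule targets a single device class, its free space is calculated only from the OSDs of that class, which is why the reported space drops after the rule is assigned.

# List the device classes known to the cluster
ceph osd crush class ls

# Create a replicated CRUSH rule limited to one class
# ("ssd" class, "default" root and "host" failure domain are assumptions)
ceph osd crush rule create-replicated rule_ssd default host ssd

# Point an existing pool at that rule; "mypool" is a placeholder name
ceph osd pool set mypool crush_rule rule_ssd

# The pool's MAX AVAIL in ceph df now reflects only the ssd capacity
ceph df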
[root@test-1 my-cluster]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    72 GiB     63 GiB      9.1 GiB         12.68

It seems you are not on Ceph Nautilus. What is your output of ceph versions?

[root@test-1 my-cluster]# ceph versions
{
    "mon": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 3
    },
    "mgr": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 1
    },
    "osd": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 9
    },
    "mds": {},
    "overall": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 13
    }
}
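On Mimic, ceph df only prints the GLOBAL totals; the per-class breakdown shown further below was added in Nautilus. A few commands that should still let you see how capacity maps to classes and which rule a pool actually uses on Mimic (the pool name is a placeholder):

# Per-OSD utilisation grouped by the CRUSH tree
ceph osd df tree

# Show the per-class shadow hierarchies that class-restricted rules select from
ceph osd crush tree --show-shadow

# Confirm which rule a pool uses, and which device class that rule targets
ceph osd pool get mypool crush_rule
ceph osd crush rule dump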
[admin@kvm5d ~]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    nvme      17 TiB     16 TiB     1.1 TiB      1.1 TiB          6.33
    ssd       81 TiB     42 TiB     39 TiB        39 TiB         48.65
    TOTAL     99 TiB     58 TiB     40 TiB        41 TiB         41.15

POOLS:
    POOL                      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    rbd_ssd                    0     10 TiB        3.78M     31 TiB      49.14        11 TiB
    cephfs_data                2     18 GiB        4.59k     54 GiB       0.16        11 TiB
    cephfs_metadata            3     5.4 MiB          60     17 MiB          0        11 TiB
    ec_nvme                   16     12 KiB            1     80 KiB          0       9.2 TiB
    rbd_nvme                  17     289 GiB      73.94k     866 GiB      5.21       5.1 TiB
    ec_compr_nvme             19     154 GiB      41.32k     258 GiB      1.61       9.2 TiB
    ec_ssd                    20     0 B               0     0 B             0        22 TiB
    ec_compr_ssd              21     4.9 TiB       1.74M     7.9 TiB     19.48        22 TiB
    device_health_metrics     22     4.0 MiB          20     4.0 MiB         0       5.1 TiB
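In this Nautilus output, RAW STORAGE is split per device class, and each pool's MAX AVAIL follows the class its CRUSH rule targets plus the pool's data protection overhead (replication vs. erasure coding), which is why pools on the same class can still report different values. A plausible way to check that for two of the pools above (pool names taken from the output, the profile name is whatever the previous command returns):

# Which rule, and therefore which device class, does each pool use?
ceph osd pool get rbd_nvme crush_rule
ceph osd pool get ec_nvme crush_rule

# Replication factor of the replicated pool
ceph osd pool get rbd_nvme size

# Erasure-code profile (k/m) of the EC pool
ceph osd pool get ec_nvme erasure_code_profile
ceph osd erasure-code-profile get <profile-name-from-previous-command>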