Hi Team,
Today I noticed an abnormality in our Ceph statistics in Proxmox: when I click on Ceph it shows 60% usage, but when I go to Ceph > Pools it shows 49.27%.
Previously I had 3 pools. I moved all the content from one of them into another and then removed the emptied pool, so I'm left with 2 pools. During the move I noticed the usage statistic increased, and the two views no longer show the same value.
Below is the output of ceph df:
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    138T     57080G     84385G       59.65
POOLS:
    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
    container     3      2146M      0.02      8910G         766
    vmstorage     4      28123G     75.94     8910G         7208221
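One thing I tried to sanity-check myself: if vmstorage is a replicated pool with size 3 (an assumption on my part, I haven't verified it yet), the numbers roughly line up, since RAW USED counts every replica while the pool USED column is the data before replication:

28123 GB (vmstorage USED) x 3 replicas = 84369 GB, close to the 84385 GB RAW USED
84385 GB / (84385 GB + 57080 GB avail) = 59.65 %RAW USED, which the main Ceph page rounds to 60%
28123 GB / 57080 GB (AVAIL) = 49.27%, which is exactly the figure the Ceph > Pools view shows

So maybe the Pools view is dividing the pool's pre-replication usage by the remaining raw space? I'm not sure if that is the intended calculation or whether these two views are even supposed to agree.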
Below is the output of ceph -s:
    cluster dd9e901d-bd2d-4b66-a7fb-d5f9a8bdc9bb
     health HEALTH_OK
     monmap e32: 6 mons at {1=10.10.10.4:6789/0,5=10.10.10.8:6789/0,6=10.10.10.11:6789/0,cloudhost03=10.10.10.3:6789/0,cloudhost05=10.10.10.5:6789/0,cloudstorage02=10.10.10.10:6789/0}
            election epoch 1214, quorum 0,1,2,3,4,5 cloudhost03,1,cloudhost05,5,cloudstorage02,6
     osdmap e65043: 37 osds: 37 up, 37 in
            flags sortbitwise,require_jewel_osds
      pgmap v93412903: 1536 pgs, 2 pools, 28125 GB data, 7040 kobjects
            84385 GB used, 57080 GB / 138 TB avail
                1536 active+clean
  client io 7134 kB/s rd, 11472 kB/s wr, 125 op/s rd, 968 op/s wr
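If it helps, I can also post the output of the following (assuming vmstorage is a plain replicated pool; I haven't ruled out erasure coding or checked the actual replication size yet):

ceph df detail
ceph osd pool get vmstorage size
ceph osd dump | grep 'replicated size'
rados df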
Can anyone help with this, or at least suggest what I can do to reconcile these numbers?