Ceph statistics show incorrect value

Discussion in 'Proxmox VE: Installation and configuration' started by senyapsudah, Aug 13, 2018.

  1. senyapsudah

    senyapsudah New Member

    Joined:
    Oct 22, 2013
    Messages:
    7
    Likes Received:
    0
    Hi Team,

    Today I noticed an abnormality in our Ceph statistics in Proxmox. When I click on Ceph, it shows a usage of 60%, but when I go to Ceph > Pools it shows me 49.27%.

    Previously I had 3 pools. I moved all the content from one of the pools to another pool and removed it, so I am left with 2 pools. During this moving process I noticed the statistics increased, and the two values no longer match.

    Below is the output of ceph df:

    GLOBAL:
        SIZE     AVAIL      RAW USED     %RAW USED
        138T     57080G     84385G       59.65
    POOLS:
        NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
        container     3      2146M      0.02      8910G         766
        vmstorage     4      28123G     75.94     8910G         7208221


    Below is the output of ceph -s:

    cluster dd9e901d-bd2d-4b66-a7fb-d5f9a8bdc9bb
    health HEALTH_OK
    monmap e32: 6 mons at {1=10.10.10.4:6789/0,5=10.10.10.8:6789/0,6=10.10.10.11:6789/0,cloudhost03=10.10.10.3:6789/0,cloudhost05=10.10.10.5:6789/0,cloudstorage02=10.10.10.10:6789/0}
    election epoch 1214, quorum 0,1,2,3,4,5 cloudhost03,1,cloudhost05,5,cloudstorage02,6
    osdmap e65043: 37 osds: 37 up, 37 in
    flags sortbitwise,require_jewel_osds
    pgmap v93412903: 1536 pgs, 2 pools, 28125 GB data, 7040 kobjects
    84385 GB used, 57080 GB / 138 TB avail
    1536 active+clean
    client io 7134 kB/s rd, 11472 kB/s wr, 125 op/s rd, 968 op/s wr

    Can anyone help with this, or at least tell me what I can do to correct it?
     
  2. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,621
    Likes Received:
    140
    The pool percentage gives you an indication of how much data you can still put into the pool, while the global percentage tells you how much space on the whole cluster has been used. The difference is that the global value takes replication into account. And to make it more complicated, the pool value can also diverge if you have different crush rules (eg. device classes).

    https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842
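    To make that concrete, here is a minimal sketch in Python using the numbers from the ceph df output above. The pool formula (USED / (USED + MAX AVAIL)) is my reading of how this Ceph version reports it, inferred from the posted values rather than from any official reference:

        # Figures from the ceph df output above, in GB.
        raw_size       = 138 * 1024   # GLOBAL SIZE (138T, rounded by ceph)
        raw_used       = 84385        # GLOBAL RAW USED
        container_used = 2.146        # container USED (2146M)
        pool_used      = 28123        # vmstorage USED
        max_avail      = 8910         # vmstorage MAX AVAIL
        replicas       = 3            # pool size

        # Global %RAW USED counts every replica of every object.
        print(raw_used / raw_size * 100)  # ~59.7; 59.65 reported (SIZE is rounded)

        # Raw usage is roughly the stored data times the replica count.
        print((pool_used + container_used) * replicas)  # ~84375 GB, close to 84385

        # Pool %USED compares stored data with what still fits in the pool.
        print(pool_used / (pool_used + max_avail) * 100)  # 75.94, matches

    So the ~60% global figure and the pool figure measure different things, and with 3 replicas the raw usage will always be about three times the pool data.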
     
  3. senyapsudah

    senyapsudah New Member

    Joined:
    Oct 22, 2013
    Messages:
    7
    Likes Received:
    0
    Hi Alwin,

    Thanks for your reply. A quick check: is there any benefit to increasing the placement group count? Currently I use 1024 PGs for that pool, with replication of 3 and min 2. We are planning to add more disks within the next few months, so would it be a good decision to increase the placement group count to 2048?

    Looking forward to your recommendation.
     
  4. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,621
    Likes Received:
    140
    The calculator gives you the possibility to use a target of 200 PGs per OSD when the cluster is expected to grow (up to double its size) in the foreseeable future.
    https://ceph.com/pgcalc/
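    As a rough sketch of the arithmetic behind the calculator (the commonly cited formula, target PGs per OSD x OSD count / replica count, rounded to a power of two; pgcalc's actual rounding rules are slightly more involved):

        import math

        osds = 37              # from the ceph -s output above
        replicas = 3           # pool size
        target_per_osd = 200   # growth (up to double) expected

        raw = osds * target_per_osd / replicas      # ~2466.7
        pg_num = 2 ** round(math.log2(raw))         # nearest power of two
        print(pg_num)                               # 2048

        # With the default target of 100 PGs per OSD the same formula
        # yields 1024, which matches the pool's current pg_num.
        print(2 ** round(math.log2(osds * 100 / replicas)))  # 1024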
     