Proxmox Ceph pgs Warnings

Jun 12, 2020
I have built a new Proxmox cluster and added Ceph to it:

Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class ssd
device 9 osd.9 class ssd
device 10 osd.10 class ssd
device 11 osd.11 class ssd
device 12 osd.12 class ssd
device 13 osd.13 class ssd
device 14 osd.14 class ssd
device 15 osd.15 class ssd
device 16 osd.16 class ssd
device 17 osd.17 class ssd
device 18 osd.18 class ssd
device 19 osd.19 class ssd
device 20 osd.20 class ssd
device 21 osd.21 class ssd
device 22 osd.22 class ssd
device 23 osd.23 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host vmp01 {
    id -3           # do not change unnecessarily
    id -9 class hdd     # do not change unnecessarily
    id -2 class ssd     # do not change unnecessarily
    # weight 6.982
    alg straw2
    hash 0  # rjenkins1
    item osd.0 weight 0.873
    item osd.1 weight 0.873
    item osd.2 weight 0.873
    item osd.3 weight 0.873
    item osd.4 weight 0.873
    item osd.5 weight 0.873
    item osd.6 weight 0.873
    item osd.7 weight 0.873
}
host vmp02 {
    id -5           # do not change unnecessarily
    id -10 class hdd    # do not change unnecessarily
    id -4 class ssd     # do not change unnecessarily
    # weight 6.982
    alg straw2
    hash 0  # rjenkins1
    item osd.8 weight 0.873
    item osd.9 weight 0.873
    item osd.10 weight 0.873
    item osd.11 weight 0.873
    item osd.12 weight 0.873
    item osd.13 weight 0.873
    item osd.14 weight 0.873
    item osd.15 weight 0.873
}
host vmp03 {
    id -7           # do not change unnecessarily
    id -11 class hdd    # do not change unnecessarily
    id -6 class ssd     # do not change unnecessarily
    # weight 6.982
    alg straw2
    hash 0  # rjenkins1
    item osd.16 weight 0.873
    item osd.17 weight 0.873
    item osd.18 weight 0.873
    item osd.19 weight 0.873
    item osd.20 weight 0.873
    item osd.21 weight 0.873
    item osd.22 weight 0.873
    item osd.23 weight 0.873
}
root default {
    id -1           # do not change unnecessarily
    id -12 class hdd    # do not change unnecessarily
    id -8 class ssd     # do not change unnecessarily
    # weight 20.947
    alg straw2
    hash 0  # rjenkins1
    item b06x-vmp01 weight 6.982
    item b06x-vmp02 weight 6.982
    item b06x-vmp03 weight 6.982
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
# end crush map
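For anyone wanting to reproduce such a dump, something like the following should work (standard Ceph tooling; the file names are arbitrary):

Code:
# dump the compiled CRUSH map and decompile it to readable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
cat crushmap.txt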


Ceph itself works fine, but I am confused about the PG warnings:


(Attachment 32629: screenshot of the PG warning)



I have created 4 Ceph pools, each with size/min_size 6/3:

I had to change them to 256 PGs (the warnings still show up).
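For reference, bumping the PG count of a pool is done with something like this (a sketch, not a verbatim log; the pool name is taken from the autoscale output below):

Code:
# raise the PG count of one pool (repeat per pool as needed)
ceph osd pool set vm01-ceph pg_num 256
# on recent releases pgp_num is adjusted automatically to follow pg_num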

ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)

Update: I have now created 3 pools, each 3/2, and the status switched to OK. Does this mean 6/3 is not working?
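To see exactly what the earlier warning referred to and to verify the replica settings per pool, something like this helps (pool name again taken from the autoscale output; a sketch under those assumptions):

Code:
# show the detailed health message behind a warning
ceph health detail
# verify the replica settings of a pool
ceph osd pool get vm01-ceph size
ceph osd pool get vm01-ceph min_size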


Code:
root@bvmp01:~# ceph status
  cluster:
    id:     748e7823-8c16-4af6-a257-caeba9c6977d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b06x-vmp01,b06x-vmp02,b06x-vmp03 (age 3h)
    mgr: b06x-vmp02(active, since 3h), standbys: b06x-vmp03, b06x-vmp01
    osd: 24 osds: 24 up (since 3h), 24 in (since 3h)

  data:
    pools:   4 pools, 97 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 21 TiB / 21 TiB avail
    pgs:     97 active+clean

root@vmp01:~# ceph osd pool autoscale-status
POOL                     SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
device_health_metrics      0                 3.0        21449G  0.0000                                  1.0       1              on         scale-up
vm01-ceph                  0                 3.0        21449G  0.0000                                  1.0      32              on         scale-up
vm02-ceph                  0                 3.0        21449G  0.0000                                  1.0      32              on         scale-up
vm03-ceph                  0                 3.0        21449G  0.0000                                  1.0      32              on         scale-up
 
Hello,

How many nodes are in the cluster? If fewer than 6, that would explain it: with size=6 Ceph tries to store 6 replicas of each object, one per host (your CRUSH rule uses the host as failure domain), so on a 3-node cluster those PGs can never be placed completely. In general I would recommend a single Ceph pool set to 3/2; the main exception is when you have two sets of OSDs with very different characteristics, e.g. HDDs vs SSDs, in which case separate pools with device-class rules make sense (see the sketch below).
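A rough sketch of the relevant commands (pool and rule names are placeholders, adapt them to your setup):

Code:
# set an existing pool to 3 replicas with a minimum of 2
ceph osd pool set vm01-ceph size 3
ceph osd pool set vm01-ceph min_size 2

# optional: separate HDD and SSD OSDs via device-class rules
# (rule names here are made up), then assign a rule per pool
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set vm01-ceph crush_rule replicated_ssd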
 
