[SOLVED] Ceph monitoring issue after upgrade to latest 7.4 (before upgrading to 8)

ilia987

Well-Known Member
Sep 9, 2019
I am trying to upgrade to Proxmox 8.

After finishing the update of all nodes to 7.4-16 (and rebooting each node after the install), and after upgrading Ceph from Pacific to Quincy, I noticed that the Ceph Performance tab shows no traffic. I usually see around 300-6000 MB/s with 1000+ IOPS.
[attachment: ceph not trafic.png]
The systems are stable and the Ceph storage/FS is accessible; all servers had a full reboot after the upgrade to the latest 7.4-16.

I had another post here (https://forum.proxmox.com/threads/i...e-to-latest-7-4-before-upgrading-to-8.132798/) that resolved itself without my doing anything.
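
Since this started right after the Pacific to Quincy jump, it is worth ruling out a mixed-version cluster first. A minimal check with the standard Ceph CLI (nothing Proxmox-specific assumed):

Code:
# After a clean upgrade every daemon type should report a single
# 17.2.x (quincy) entry; any leftover 16.x (pacific) daemon still
# needs a restart to pick up the new binaries.
ceph versions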
 
The issue is not related to the monitor. It is something in Ceph itself:

ceph status

Code:
cluster:
    id:     8ebca482-f985-4e74-9ff8-35e03a1af15e
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum pve-srv2,pve-srv3,pve-srv4 (age 21h)
    mgr: pve-srv2(active, since 2d), standbys: pve-srv4, pve-srv3
    mds: 2/2 daemons up, 1 standby
    osd: 32 osds: 32 up (since 33h), 32 in (since 33h)
 
  data:
    volumes: 2/2 healthy
    pools:   6 pools, 705 pgs
    objects: 15.67M objects, 35 TiB
    usage:   106 TiB used, 76 TiB / 182 TiB avail
    pgs:     705 active+clean
 
  io:
    client:   0 B/s rd, 0 B/s wr, 0 op/s rd, 0 op/s wr
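
As far as I understand it, the io: line here is derived from PG statistics aggregated by the active mgr, so a flat 0 B/s on a cluster holding 35 TiB of live data points at the stats pipeline rather than at real idleness. A low-impact first test (daemon name taken from the services section above; the actual fix turned out to be the monitors, see below) is failing over to a standby mgr:

Code:
# Show which mgr is currently active:
ceph mgr stat

# Force a failover; one of the standbys (pve-srv3 / pve-srv4) takes over:
ceph mgr fail pve-srv2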


ceph osd perf

Code:
osd  commit_latency(ms)  apply_latency(ms)
 32                   0                  0
 31                   0                  0
 30                   0                  0
 12                   0                  0
 11                   0                  0
 10                   0                  0
  9                   0                  0
  8                   0                  0
  7                   0                  0
  6                   1                  1
  5                   0                  0
  4                   0                  0
  3                   0                  0
  2                   0                  0
  1                   0                  0
  0                   0                  0
 29                   0                  0
 13                   0                  0
 14                   0                  0
 15                   0                  0
 16                   0                  0
 17                   0                  0
 18                   1                  1
 19                   0                  0
 20                   0                  0
 21                   0                  0
 22                   0                  0
 23                   0                  0
 24                   0                  0
 25                   0                  0
 26                   0                  0
 27                   1                  1
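
Zero commit/apply latency across all 32 OSDs is itself suspicious; OSDs under real load almost never report exactly 0 ms, which again suggests stale reporting rather than an idle cluster. One way to bypass the aggregated view is to ask a daemon directly over its admin socket (run on the node that hosts the OSD; osd.0 is just an example):

Code:
# Dump raw perf counters straight from osd.0's admin socket;
# moving op counters here confirm the OSD is doing work even
# though the cluster-wide view shows nothing.
ceph daemon osd.0 perf dump | head -n 40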

ceph osd pool stats
Code:
client io 2.1 MiB/s rd, 3.2 MiB/s wr, 136 op/s rd, 210 op/s wr

pool cephfs-data_data id 8
  nothing is going on

pool cephfs-data_metadata id 9
  client io 1023 B/s wr, 0 op/s rd, 0 op/s wr

pool .mgr id 14
  nothing is going on

pool cephfs-shared_data id 18
  nothing is going on

pool cephfs-shared_metadata id 19
  client io 426 B/s rd, 36 KiB/s wr, 0 op/s rd, 8 op/s wr
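
Note the contradiction with the ceph status output above: the per-pool view reports a few MiB/s of client io and a few hundred ops, while the cluster-wide io: line shows 0 B/s. The low-level counters are clearly alive; it is the aggregate that is not being refreshed. A quick way to compare the two views (plain shell, nothing Proxmox-specific):

Code:
# If the pool totals keep moving while the status io: line stays at
# zero, stale aggregation on the mon/mgr side is confirmed.
ceph osd pool stats | head -n 1
ceph -s | grep -A 1 'io:'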



FIXED (had to restart all monitors again).
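
For completeness, a sketch of what "restart all monitors" looks like on a Proxmox node. This assumes the standard systemd units of a PVE-managed Ceph install (mon id = hostname); restart one node at a time and let it rejoin quorum before touching the next:

Code:
# On pve-srv2, then pve-srv3, then pve-srv4:
systemctl restart ceph-mon@$(hostname).service

# Confirm all three mons are back in quorum before moving on:
ceph mon stat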
 