ceph dashboard oddities

Mar 15, 2022
Hi all,

I have finished a rolling eviction, upgrade, and rejoin of my nodes to take them from 7.4 to 8.1, but I have an issue with the Ceph dashboard, as follows. Any ideas what could be causing it?

(screenshot attachment: 1711473566837.png)
root@wlsc-pxmh01:~# ceph -s
  cluster:
    id:     ab50ab95-b39b-432a-b24f-0858830fb975
    health: HEALTH_OK

  services:
    mon: 7 daemons, quorum wlsc-pxmh10,wlsc-pxmh04,wlsc-pxmh01,wlsc-pxmh03,wlsc-pxmh05,wlsc-pxmh06,wlsc-pxmh02 (age 7d)
    mgr: wlsc-pxmh01(active, since 7d), standbys: wlsc-pxmh10, wlsc-pxmh04, wlsc-pxmh02, wlsc-pxmh06, wlsc-pxmh05, wlsc-pxmh03
    mds: 2/2 daemons up, 5 standby
    osd: 72 osds: 72 up (since 7d), 72 in (since 7d)

  data:
    volumes: 2/2 healthy
    pools:   23 pools, 2001 pgs
    objects: 11.28M objects, 43 TiB
    usage:   57 TiB used, 194 TiB / 252 TiB avail
    pgs:     2001 active+clean

  io:
    client: 8.3 KiB/s rd, 36 MiB/s wr, 3 op/s rd, 487 op/s wr


It's only shown once in the output, and we have no Varnish etc. in the path; this comes directly from the hosts' web UIs for the whole cluster.
No, I meant that the browser caches this view. And (I forget the details) there is also some kind of state broadcast of those services between the nodes. I believe this is handled by the pvestatd daemon; restarting that daemon on the nodes might help clear it up.
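As a sketch of the suggestion above: pvestatd runs as a systemd service on each Proxmox VE node, so restarting it cluster-wide is just a loop over the nodes. The node names below are taken from the `ceph -s` output earlier in the thread; the dry-run `echo` is my addition so nothing is restarted by accident.

```shell
# Sketch (not from the thread): restart pvestatd on every node to clear
# potentially stale broadcast state. Node names are the ones visible in
# the ceph -s output above; adjust for your cluster.
# This is a dry run -- it only prints the commands. Pipe the output to
# `sh` (or remove the echo) to actually run them over SSH.
NODES="wlsc-pxmh01 wlsc-pxmh02 wlsc-pxmh03 wlsc-pxmh04 wlsc-pxmh05 wlsc-pxmh06 wlsc-pxmh10"
for node in $NODES; do
  echo "ssh root@$node systemctl restart pvestatd"
done
```

Restarting pvestatd does not interrupt running guests; it only refreshes the status daemon that feeds the web UI.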
 
