Hi, yesterday I upgraded a 3-node cluster with Ceph from 5.4 to 6, following the guide: corosync -> proxmox -> ceph.
The space used on the pool looks strange at the node level, but on the Ceph dashboard it is correct.
I have:
3 nodes, each with 3x 3.8 TB SSDs. Total raw size: 31.44 TB; a single pool with a 3/2 replication rule, for a total net size of about 10.3 TB.
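For reference, this is the arithmetic I expect (a rough sketch only; the dashboard figures may differ slightly because of Ceph overhead and TB/TiB rounding):

```python
# Expected usable capacity for a 3/2 replicated pool on 9 OSDs.
# Assumption: the 31.44 TB raw total is simply divided by the replica count.
osds = 9
raw_total_tb = 31.44            # raw size reported by Ceph for all 9 SSDs
replica_count = 3               # size=3 in the 3/2 rule

usable_tb = raw_total_tb / replica_count   # ~10.48 TB, close to the ~10.3 TB shown

print(f"raw: {raw_total_tb:.2f} TB, usable with size=3: {usable_tb:.2f} TB")
```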
On the Ceph dashboard I see the following:


but in the node-level dashboard I see this:

and checking the pool mounted on PVE also looks really strange:

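If it helps, this is roughly how I am comparing the numbers on the CLI (a sketch that parses `ceph df --format json`; the exact JSON field names are assumptions based on my Nautilus output and may differ on other releases):

```python
import json
import subprocess

# Dump cluster and pool usage as JSON (the same data the Ceph dashboard shows).
out = subprocess.run(["ceph", "df", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
df = json.loads(out)

TIB = 1024 ** 4
for pool in df.get("pools", []):
    stats = pool.get("stats", {})
    # Field names as seen on Nautilus; fall back to bytes_used if "stored" is absent.
    stored = stats.get("stored", stats.get("bytes_used", 0))
    avail = stats.get("max_avail", 0)
    print(f'{pool.get("name")}: stored={stored / TIB:.2f} TiB, '
          f'max_avail={avail / TIB:.2f} TiB')
```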