Wrong disk usage after upgrade to Ceph Nautilus

abzsol

Well-Known Member
Hi, yesterday I upgraded a 3-node cluster with Ceph from Proxmox VE 5.4 to 6, following the guide: corosync -> Proxmox -> Ceph.

The space used on the pool looks strange at the node level, but on the Ceph dashboard it looks correct.

My setup:
3 nodes, each with 3x 3.8 TB SSDs. Total raw size: 31.44 TB, one single pool with a 3/2 replication rule, for a net size of about 10.3 TB.
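
Just as a sanity check on the sizing (assuming the 3x replication is the only overhead), the numbers roughly add up:

Code:
# 9 OSDs x ~3.8 TB SSD gives the 31.44 TB raw that Ceph reports
# with a size=3 pool the usable space is roughly raw / 3
echo "scale=2; 31.44 / 3" | bc   # ~10.48 TB, close to the ~10.3 TB net size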

On the Ceph dashboard I see the following:
[attachment: 1572522580393.png]
[attachment: 1572522605016.png]

but in the Proxmox dashboard (node level) I see this:

[attachment: 1572522643176.png]

and checking the pool mounted on the PVE node also looks really strange:
[attachment: 1572522695080.png]
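
In case it helps, these are the commands I can run to cross-check the usage from the CLI (the mount path below is just a placeholder for my storage ID):

Code:
# what Ceph itself reports, cluster-wide and per pool
ceph df
ceph osd df tree

# what the Proxmox storage layer reports for the same pool
pvesm status

# if the pool is mounted as a filesystem on the node, what df sees there
df -h /mnt/pve/<storage-id>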
 
If I run the following command, it gives this output:

Code:
root@pve01:~# ceph-volume simple scan
 stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
 stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
Running command: /sbin/cryptsetup status /dev/sdc1
-->  RuntimeError: --force was not used and OSD metadata file exists: /etc/ceph/osd/2-3dbfe812-2f54-4387-bff8-74d54e49ab0a.json
root@pve01:~#
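
From the error it looks like a metadata file for OSD 2 is already present in /etc/ceph/osd/ from an earlier scan. I guess re-running the scan with --force would simply overwrite that json, and the non-LVM OSDs would then need to be activated again, but I am not sure if that is the right/safe thing to do here:

Code:
# overwrite the per-OSD json left behind by a previous scan
ceph-volume simple scan --force

# re-activate the scanned (ceph-disk style) OSDs afterwards
ceph-volume simple activate --all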