Ceph storage use grows rapidly despite fixed space allocation

Sujith Arangan

Well-Known Member
Jan 15, 2018
I have configured a 3-node Proxmox cluster with an equal number of OSDs and the same storage capacity on each node. The storage use keeps increasing even though the allocated space has not changed; the used percentage has grown from 99 percent to 100 percent. Why is this happening?
 
With 99% OSD usage you have had, or soon will have, downtime because of mon_osd_full_ratio.
Are you sure you're not looking at the LVM view of the OSDs?
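As a quick sanity check, the thresholds the monitors enforce can be compared against the actual per-OSD fill level; a minimal sketch (the usual defaults are nearfull 0.85, backfillfull 0.90, full 0.95, but your cluster may differ):

[CODE]
# Thresholds currently enforced by the monitors
ceph osd dump | grep ratio

# Actual fill level per OSD
ceph osd df
[/CODE]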
 
With 99% OSD usage you have had, or soon will have, downtime because of mon_osd_full_ratio.
Are you sure you're not looking at the LVM view of the OSDs?
No, I was just looking at the summary. But still, how can the dedicated storage keep consuming more disk space?
 
The storage use keeps increasing even though the allocated space has not changed
Space allocated where?

Can you please show the output of the following commands within [CODE][/CODE] tags for better readability?

  • ceph -s
  • ceph df
  • pveceph pool ls --noborder (make sure the terminal is large enough, or pipe the output into a file as sketched just after this list, since anything that doesn't fit will be cut off)
  • ceph osd df tree
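For the pveceph command, a minimal sketch of redirecting the output into a file so nothing gets truncated (the file name is just an example):

[CODE]
pveceph pool ls --noborder > /root/pool-list.txt
[/CODE]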
 
Space allocated where?

Can you please show the output of the following commands within [CODE][/CODE] tags for better readability?

  • ceph -s
  • ceph df
  • pveceph pool ls --noborder (make sure the terminal is large enough or pipe it into a file, as any content that won't fit will be cut off)
  • ceph osd df tree
Hello, please find the output below. I removed some of the VMs a week ago, but after one week the storage has reached 100% again. I am wondering how the VM storage balloons like this.


root@node-1:~# ceph -s
  cluster:
    id:     f4bc1d34-85cc-4686-b923-52977765ec61
    health: HEALTH_ERR
            1 backfillfull osd(s)
            1 full osd(s)
            3 nearfull osd(s)
            4 pool(s) full

  services:
    mon: 3 daemons, quorum orca1,orca2,orca3 (age 4w)
    mgr: orca1(active, since 10w)
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 4w), 6 in (since 4w)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 193 pgs
    objects: 822.04k objects, 3.1 TiB
    usage:   9.2 TiB used, 1.2 TiB / 10 TiB avail
    pgs:     193 active+clean

root@node-1:~# ceph df
--- RAW STORAGE ---
CLASS    SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    10 TiB  1.2 TiB  9.2 TiB   9.2 TiB      88.24
TOTAL  10 TiB  1.2 TiB  9.2 TiB   9.2 TiB      88.24

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED   %USED  MAX AVAIL
.mgr                    1    1   27 MiB        7   82 MiB  100.00        0 B
ORCA-CEPH-FS_data       2   32   40 GiB   10.24k  120 GiB  100.00        0 B
ORCA-CEPH-FS_metadata   3   32  6.4 MiB       24   19 MiB  100.00        0 B
ORCA-CEPH-VM            4  128  3.0 TiB  811.77k  9.1 TiB  100.00        0 B

root@orca1:~# pveceph pool ls --noborder
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_CTYPE = "UTF-8",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Name                  Size Min Size PG Num min. PG Num Optimal PG Num PG Autosca
.mgr                     3        2      1           1              1 on
ORCA-CEPH-FS_data        3        2     32                         32 on
ORCA-CEPH-FS_metadata    3        2     32          16             16 on
ORCA-CEPH-VM             3        2    128                        128 on

root@node-1:~# ceph osd df tree
ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
-1         10.47958         -   10 TiB  9.2 TiB  9.2 TiB  94 KiB   23 GiB  1.2 TiB  88.24  1.00    -          root default
-3          3.49319         -  3.5 TiB  3.1 TiB  3.1 TiB  32 KiB  7.8 GiB  420 GiB  88.25  1.00    -              host orca1
 0    ssd   1.74660   1.00000  1.7 TiB  1.5 TiB  1.5 TiB  13 KiB  3.8 GiB  262 GiB  85.36  0.97   90      up          osd.0
 1    ssd   1.74660   1.00000  1.7 TiB  1.6 TiB  1.6 TiB  19 KiB  4.0 GiB  159 GiB  91.13  1.03  103      up          osd.1
-5          3.49319         -  3.5 TiB  3.1 TiB  3.1 TiB  30 KiB  7.5 GiB  421 GiB  88.24  1.00    -              host orca2
 2    ssd   1.74660   1.00000  1.7 TiB  1.5 TiB  1.5 TiB  11 KiB  3.8 GiB  217 GiB  87.86  1.00   92      up          osd.2
 3    ssd   1.74660   1.00000  1.7 TiB  1.5 TiB  1.5 TiB  19 KiB  3.7 GiB  204 GiB  88.62  1.00  101      up          osd.3
-7          3.49319         -  3.5 TiB  3.1 TiB  3.1 TiB  32 KiB  7.6 GiB  421 GiB  88.24  1.00    -              host orca3
 4    ssd   1.74660   1.00000  1.7 TiB  1.7 TiB  1.7 TiB  19 KiB  3.8 GiB   89 GiB  95.01  1.08  100      up          osd.4
 5    ssd   1.74660   1.00000  1.7 TiB  1.4 TiB  1.4 TiB  13 KiB  3.8 GiB  331 GiB  81.48  0.92   93      up          osd.5
                         TOTAL   10 TiB  9.2 TiB  9.2 TiB  97 KiB   23 GiB  1.2 TiB  88.24
MIN/MAX VAR: 0.92/1.08  STDDEV: 4.25
root@node-1:~#
 
Please edit your post and put each output within [CODE][/CODE] tags. Otherwise this is unreadable. Thanks
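Regarding how the VM storage balloons: with a 3/2 replicated pool, every TiB stored in ORCA-CEPH-VM consumes roughly three TiB of raw space, so the ~3.1 TiB of stored data matches the ~9.2 TiB raw usage in the ceph df output. On top of that, RBD images are thin-provisioned and only grow over time; space freed inside a guest is not returned to Ceph unless discard/TRIM reaches the image, and snapshots keep referencing data that has since been deleted. A hedged sketch for checking where the space in the VM pool actually sits (pool name taken from the output above, the image name is only an example):

[CODE]
# Provisioned vs. actually used space per RBD image
rbd du -p ORCA-CEPH-VM

# Snapshots pin old data even after it is deleted or overwritten in the guest
rbd ls -p ORCA-CEPH-VM
rbd snap ls ORCA-CEPH-VM/vm-100-disk-0   # example image name
[/CODE]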
 
