Ceph reports monitor low on available space

Ting

Member
Oct 19, 2021
Hi,

I have had a Ceph cluster running for a while now. Today one of my monitor nodes, "proxmox6", reported low available space (19% avail), as shown below. The node has plenty of free space, so I am not sure where or how I can assign more space to the Ceph mon.

root@proxmox6:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 1.9M 26G 1% /run
rpool/ROOT/pve-1 216G 76G 140G 35% /
tmpfs 126G 66M 126G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
local-vm-zfs0 48G 128K 48G 1% /local-vm-zfs0
rpool 140G 128K 140G 1% /rpool
rpool/ROOT 140G 128K 140G 1% /rpool/ROOT
rpool/data 140G 128K 140G 1% /rpool/data
/dev/fuse 128M 96K 128M 1% /etc/pve
tmpfs 126G 28K 126G 1% /var/lib/ceph/osd/ceph-9
tmpfs 126G 28K 126G 1% /var/lib/ceph/osd/ceph-8
//xxx.xxx.xxx.xxx/Template-VMs 27T 20T 6.4T 76% /mnt/pve/VM_Template
//xxx.xxx.xxx.xxx/Backup_VMs 27T 20T 6.4T 76% /mnt/pve/VM_Backup
tmpfs 26G 0 26G 0% /run/user/0


root@proxmox6:~# du -sh /var/lib/ceph/mon/
3.8M /var/lib/ceph/mon/


I also checked the other monitor nodes; when I ran du -sh there, the usage was about 4.7M.

I am not sure where or how to make an improvement here. Thanks for your thoughts.
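
For context, MON_DISK_LOW does not refer to the size of the mon store itself but to the percentage of free space on the filesystem hosting the monitor's data directory; the threshold is the mon_data_avail_warn option, which defaults to 30%. A quick way to check, assuming the default data path:

# which filesystem backs the mon data directory, and how full is it?
df -h /var/lib/ceph/mon

# current warning threshold (percent available; defaults to 30)
ceph config get mon mon_data_avail_warn

# optionally lower the threshold, e.g. to 20%, at the cost of a later warning
ceph config set mon mon_data_avail_warn 20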
 
Hi, thanks for your help, here is the output:

root@proxmox6:~# df -h /var/lib/ceph/mon/*
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/pve-1 216G 76G 140G 35% /
 
My Ceph status returned to normal after a while, but the following is the email notification from that time. Is there a way to display old warnings in detail? I do not know how to do that, if it is possible at all (some commands that might help are sketched after the quoted status below).

HEALTH_WARN

--- New ---
[WARN] MON_DISK_LOW: mon proxmox6 is low on available space
mon.proxmox6 has 30% avail


=== Full health status ===
[WARN] MON_DISK_LOW: mon proxmox6 is low on available space
mon.proxmox6 has 30% avail
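
For reference, past health events are kept in the cluster log, so something like the following should surface old warnings; the log path assumes a default Ceph install:

# show the most recent cluster log entries (the count is adjustable)
ceph log last 50

# or search the cluster log on a monitor node directly
grep MON_DISK_LOW /var/log/ceph/ceph.log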
 
I found that this works:

apt-get autoremove

It removes all the unused dependencies and frees up space on the root filesystem, which is where /var/lib/ceph/mon lives.
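
Along the same lines, a few other commands typically free space on a Proxmox root filesystem; a sketch, and the journal size target is only an example:

# drop cached .deb package files
apt-get clean

# shrink the systemd journal to roughly 200 MB
journalctl --vacuum-size=200M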