CEPH: HEALTH_WARN mon 1 is low on available space

Hi,

I'm not sure what to do about this - can anyone help?

The problem seems to be with /dev/mapper/pve-root, but I'm not sure what that is.

Code:
proxmox-ve: 5.0-20 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-4
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve17~bpo90
openvswitch-switch: 2.7.0-2
ceph: 12.1.2-pve1
 
Hi,

the monitor is installed on the root partition (pve-root) by default.
This warning means you are running low on space on the root partition.

See the output of df -h.
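A minimal sketch of that check, plus a du step to see which directories on the root filesystem are actually taking up the space (the /var path is just a common starting point, adjust it to your layout):

Code:
# free space on the root filesystem, where the mon data lives by default
df -h /

# biggest directories on that filesystem, e.g. under /var (logs, mon store)
du -xh --max-depth=2 /var | sort -h | tail -20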
 
Any suggestions as to what can be done? This is a vanilla 4-node Proxmox install with Ceph, currently running 3 VMs. It's not really in use yet, so I was very surprised to see this crop up.
This is the output from df ...

Code:
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G   42M  3.1G   2% /run
/dev/mapper/pve-root   28G   20G  6.4G  76% /
tmpfs                  16G   63M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/sda2             253M  288K  252M   1% /boot/efi
/dev/fuse              30M   36K   30M   1% /etc/pve
/dev/sdb1              94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-1
tmpfs                 3.2G     0  3.2G   0% /run/user/0


This is from du ...
 
I have the same problem:

Code:
mons 1,2 are low on available space
mon.1 has 21% avail
mon.2 has 25% avail

but there is enough space:
Code:
# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                      63G     0   63G   0% /dev
tmpfs                     13G  1.3G   12G  11% /run
/dev/mapper/pve-root      28G   13G   14G  49% /
tmpfs                     63G   63M   63G   1% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
tmpfs                     63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/pve-data      55G   52M   55G   1% /var/lib/vz
/dev/fuse                 30M  132K   30M   1% /etc/pve
tmpfs                     63G   48K   63G   1% /var/lib/ceph/osd/ceph-0
tmpfs                     63G   48K   63G   1% /var/lib/ceph/osd/ceph-4
tmpfs                     63G   48K   63G   1% /var/lib/ceph/osd/ceph-6
tmpfs                     63G   48K   63G   1% /var/lib/ceph/osd/ceph-9
tmpfs                     63G   48K   63G   1% /var/lib/ceph/osd/ceph-12
tmpfs                     13G     0   13G   0% /run/user/0
 
Could you paste the 'df' output from the two hosts where mon.1 and mon.2 are running?
 
HEALTH_WARN mons 1,2 are low on available space
MON_DISK_LOW mons 1,2 are low on available space
Check the inode usage on pve-root; it may be close to full.
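A minimal sketch of that check (the replies below run exactly this command):

Code:
# inode usage on the root filesystem; an IUse% close to 100% means the
# filesystem can run out of inodes even while df -h still shows free space
df -i /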
 
Code:
root@node001:~# ceph health detail
HEALTH_WARN mons 1,2 are low on available space
MON_DISK_LOW mons 1,2 are low on available space
    mon.1 has 20% avail
    mon.2 has 24% avail
root@node001:~# df
Filesystem                                                                          1K-blocks     Used  Available Use% Mounted on
udev                                                                                 65937592        0   65937592   0% /dev
tmpfs                                                                                13199088  1247412   11951676  10% /run
/dev/mapper/pve-root                                                                 28510348 12508740   14530328  47% /
tmpfs                                                                                65995432    51892   65943540   1% /dev/shm
tmpfs                                                                                    5120        0       5120   0% /run/lock
tmpfs                                                                                65995432        0   65995432   0% /sys/fs/cgroup
/dev/mapper/pve-data                                                                 57280788    53112   57211292   1% /var/lib/vz
tmpfs                                                                                65995432       28   65995404   1% /var/lib/ceph/osd/ceph-6
tmpfs                                                                                65995432       28   65995404   1% /var/lib/ceph/osd/ceph-12
tmpfs                                                                                65995432       28   65995404   1% /var/lib/ceph/osd/ceph-4
tmpfs                                                                                65995432       28   65995404   1% /var/lib/ceph/osd/ceph-9
tmpfs                                                                                65995432       28   65995404   1% /var/lib/ceph/osd/ceph-0
/dev/fuse                                                                               30720      132      30588   1% /etc/pve
10.10.10.1,10.10.10.1:6789,10.10.10.2,10.10.10.2:6789,10.10.10.3,10.10.10.3:6789:/ 7908769792 31207424 7877562368   1% /mnt/pve/cephfs1
tmpfs                                                                                13199084        0   13199084   0% /run/user/0

Code:
root@node002:~# ceph health detail
HEALTH_WARN mons 1,2 are low on available space
MON_DISK_LOW mons 1,2 are low on available space
    mon.1 has 20% avail
    mon.2 has 24% avail
root@node002:~# df
Filesystem                                                                          1K-blocks     Used  Available Use% Mounted on
udev                                                                                 65872156        0   65872156   0% /dev
tmpfs                                                                                13179388  1313044   11866344  10% /run
/dev/mapper/pve-root                                                                 33220400 25094456    6733336  79% /
tmpfs                                                                                65896920    48772   65848148   1% /dev/shm
tmpfs                                                                                    5120        0       5120   0% /run/lock
tmpfs                                                                                65896920        0   65896920   0% /sys/fs/cgroup
/dev/mapper/pve-data                                                                 15737816   220164   15501268   2% /var/lib/vz
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-10
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-1
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-16
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-7
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-3
tmpfs                                                                                65896920       28   65896892   1% /var/lib/ceph/osd/ceph-13
/dev/fuse                                                                               30720      132      30588   1% /etc/pve
10.10.10.1,10.10.10.1:6789,10.10.10.2,10.10.10.2:6789,10.10.10.3,10.10.10.3:6789:/ 7908773888 31211520 7877562368   1% /mnt/pve/cephfs1
tmpfs                                                                                13179384        0   13179384   0% /run/user/0

Inode usage is 9% and 10%:
Code:
root@node002:~# df -i /
Filesystem            Inodes  IUsed   IFree IUse% Mounted on
/dev/mapper/pve-root 2113536 174903 1938633    9% /


root@node001:~# df -i /
Filesystem            Inodes  IUsed   IFree IUse% Mounted on
/dev/mapper/pve-root 1818624 170132 1648492   10% /
 
MON_DISK_LOW mons 1,2 are low on available space
mon.1 has 20% avail
mon.2 has 24% avail
The logs should give more insight into this. The MON DB may grow to twice its size during compaction, and it could be that this is included in the calculation.
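For reference, MON_DISK_LOW is raised when free space on the filesystem holding the monitor's data drops below mon_data_avail_warn, which defaults to 30%. A minimal sketch of how you could dig further, assuming the default layout under /var/lib/ceph/mon/ceph-<id> (the mon ID "1" below is a placeholder for your own monitor name):

Code:
# size of the monitor's on-disk store
du -sh /var/lib/ceph/mon/ceph-*/store.db

# recent monitor log entries (the unit name depends on your mon ID)
journalctl -u ceph-mon@1 --since "1 hour ago"

# trigger a manual compaction; this temporarily needs roughly the store's
# size in additional free space while it runs
ceph tell mon.1 compact

On recent Ceph releases the threshold itself can also be lowered with ceph config set mon mon_data_avail_warn 25, but freeing space on pve-root is usually the better fix.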
 
