Hi to all,
I just updated my 4-node Ceph cluster to the latest Proxmox 6.2, but after that my PVE dashboard started showing warnings from the Ceph MONs about available space. Checking with df -h, I found that my root partition was around 75% full on a 136GB 15k SAS disk. I thought that was the problem, so I purged a lot of old kernels (this setup has been running Proxmox since version 3, so there were plenty of them), but the alert was still there. On top of that, one of my four nodes reported that there was no more space available, so I couldn't do anything on it (migration, start, stop, apt or anything else). So I went deeper with the kernel cleanup and kept only the most recent ones; the node started working again, but the warning remained. Searching around, I found with df -i that my root partition was still at 90% inode usage, and that was the real problem: before cleaning all the kernels, df -i showed 100%. This is my current situation:
Code:
root@nodo1:~# df -h
Filesystem                                           Size Used Avail Use% Mounted on
udev                                                  32G    0   32G   0% /dev
tmpfs                                                6.3G  23M  6.3G   1% /run
/dev/mapper/pve-root                                  34G 3.8G   28G  12% /
tmpfs                                                 32G  63M   32G   1% /dev/shm
tmpfs                                                5.0M    0  5.0M   0% /run/lock
tmpfs                                                 32G    0   32G   0% /sys/fs/cgroup
/dev/sde1                                             93M 5.4M   87M   6% /var/lib/ceph/osd/ceph-3
/dev/sdd1                                             93M 5.4M   87M   6% /var/lib/ceph/osd/ceph-2
/dev/sdb1                                             93M 5.4M   87M   6% /var/lib/ceph/osd/ceph-0
/dev/sdc1                                             93M 5.4M   87M   6% /var/lib/ceph/osd/ceph-1
/dev/fuse                                             30M  44K   30M   1% /etc/pve
192.168.25.202:/mnt/ANEKUP_POOL/Proxmox_Backup       7.9T 3.6T  4.3T  46% /mnt/pve/anekup
//192.168.25.100/TS_SYNCRO                            50G  41G  8.6G  83% /mnt/pve/ts_syncro
tmpfs
Code:
root@nodo1:~# df -i
Filesystem                                        Inodes   IUsed     IFree IUse% Mounted on
udev                                             8233697     710   8232987    1% /dev
tmpfs                                            8239841    2491   8237350    1% /run
/dev/mapper/pve-root                             2228224 1965400    262824   89% /
tmpfs                                            8239841     131   8239710    1% /dev/shm
tmpfs                                            8239841      31   8239810    1% /run/lock
tmpfs                                            8239841      18   8239823    1% /sys/fs/cgroup
/dev/sde1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-3
/dev/sdd1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-2
/dev/sdb1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-0
/dev/sdc1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-1
/dev/fuse                                          10000      95      9905    1% /etc/pve
192.168.25.202:/mnt/ANEKUP_POOL/Proxmox_Backup 609286335     312 609286023    1% /mnt/pve/anekup
//192.168.25.100/TS_SYNCRO                             0       0         0     - /mnt/pve/ts_syncro
tmpfs                                            8239841      11   8239830    1% /run/user/0
As you can see, with df -h I have plenty of free space, but with df -i the inodes are still around 89%. I could grow the LVM partition by 5 or 6 GB, but maybe there is something I can clean up to fix this instead. I don't know whether I should trim the logs or hunt down a lot of little files lying around.
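In case it helps, this is the kind of thing I was thinking of running to see where the inodes are actually going (just a rough sketch, the paths below are only examples):
Code:
# count inodes per top-level directory, staying on the root filesystem only (-x)
du --inodes -x -d1 / 2>/dev/null | sort -n | tail -n 15

# count files under some likely suspects, e.g. /var and /usr/src (paths are only examples)
find /var /usr/src -xdev -type f | cut -d/ -f1-3 | sort | uniq -c | sort -rn | head -n 15
Is that a reasonable way to track down the inode hogs, or is there a better approach?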
many thanks
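PS: in case it's relevant, this is roughly how I checked for leftover kernel and header packages before purging them (old header packages contain thousands of small files, so they can eat a lot of inodes); the version in the purge command is only a placeholder, not the exact package I removed:
Code:
# list kernel/header packages that are still installed
dpkg -l | grep -Ei 'pve-kernel|pve-headers|linux-headers'

# note the running kernel so it does not get removed
uname -r

# purge an old kernel/header pair (the version here is only an example)
apt purge pve-kernel-4.15.18-12-pve pve-headers-4.15.18-12-pve
apt autoremove --purge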