Help freeing up inodes on my servers

silvered.dragon

Hi to all,
I just updated my 4-node Ceph cluster to the latest Proxmox 6.2, but after that my PVE dashboard started showing warnings from Ceph's MONs about available space. Checking with df -h, I found that my root partition was at around 75% on a 136GB 15k SAS disk. I thought that was the issue, so I purged a lot of old kernels (this particular setup has been running Proxmox since version 3, so there were plenty of them), but the alert was still there. Furthermore, one of my four nodes was reporting that there was no more available space, so I wasn't able to do anything (migration, start, stop, apt or anything else). At that point I went deeper with the kernel cleanup and kept only the latest ones; the server started working again, but the warning remained. Searching around, I found with df -i that my root partition was still at 90% inode usage, and that was the real problem (before cleaning all the kernels, df -i was at 100%). This is my situation now:

Code:
root@nodo1:~# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
udev                                             32G     0   32G   0% /dev
tmpfs                                           6.3G   23M  6.3G   1% /run
/dev/mapper/pve-root                             34G  3.8G   28G  12% /
tmpfs                                            32G   63M   32G   1% /dev/shm
tmpfs                                           5.0M     0  5.0M   0% /run/lock
tmpfs                                            32G     0   32G   0% /sys/fs/cgroup
/dev/sde1                                        93M  5.4M   87M   6% /var/lib/ceph/osd/ceph-3
/dev/sdd1                                        93M  5.4M   87M   6% /var/lib/ceph/osd/ceph-2
/dev/sdb1                                        93M  5.4M   87M   6% /var/lib/ceph/osd/ceph-0
/dev/sdc1                                        93M  5.4M   87M   6% /var/lib/ceph/osd/ceph-1
/dev/fuse                                        30M   44K   30M   1% /etc/pve
192.168.25.202:/mnt/ANEKUP_POOL/Proxmox_Backup  7.9T  3.6T  4.3T  46% /mnt/pve/anekup
//192.168.25.100/TS_SYNCRO                       50G   41G  8.6G  83% /mnt/pve/ts_syncro
tmpfs


Code:
root@nodo1:~# df -i
Filesystem                                        Inodes   IUsed     IFree IUse% Mounted on
udev                                             8233697     710   8232987    1% /dev
tmpfs                                            8239841    2491   8237350    1% /run
/dev/mapper/pve-root                             2228224 1965400    262824   89% /
tmpfs                                            8239841     131   8239710    1% /dev/shm
tmpfs                                            8239841      31   8239810    1% /run/lock
tmpfs                                            8239841      18   8239823    1% /sys/fs/cgroup
/dev/sde1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-3
/dev/sdd1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-2
/dev/sdb1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-0
/dev/sdc1                                          12672      19     12653    1% /var/lib/ceph/osd/ceph-1
/dev/fuse                                          10000      95      9905    1% /etc/pve
192.168.25.202:/mnt/ANEKUP_POOL/Proxmox_Backup 609286335     312 609286023    1% /mnt/pve/anekup
//192.168.25.100/TS_SYNCRO                             0       0         0     - /mnt/pve/ts_syncro
tmpfs                                            8239841      11   8239830    1% /run/user/0

As you can see, with df -h I have plenty of space available, but with df -i I'm still at around 89% inode usage. I could grow the LVM partition by 5 or 6 GB, but maybe there is something I can clean up to fix this properly: I don't know whether I should reduce the amount of logs, or whether there are a lot of small files lying around somewhere.
many thanks
 
Try something along the lines of du --inodes -xS | sort -n to go down the rabbit hole.
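For example, something along these lines should list the directories holding the most inodes on the root filesystem (just a sketch; the starting point and the number of lines shown are arbitrary):

Bash:
# show the 20 directories with the highest inode counts, staying on the root filesystem (-x)
cd / && du --inodes -xS 2>/dev/null | sort -n | tail -n 20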
 
Try something along the lines of du --inodes -xS | sort -n to go down the rabbit hole.


Thank you for the reply!
This is the result in /, showing only the most relevant lines:

Code:
root@nodo1:/var/lib/samba/private# du --inodes -xS | sort
--
--
--
203     ./lib/firmware
204     ./usr/share/lintian/overrides
218     ./sbin
225     ./usr/share/man/man7
234     ./usr/share/i18n/charmaps
236     ./usr/share/terminfo/w
241     ./usr/lib/python2.7/encodings
247     ./lib/firmware/radeon
247     ./usr/share/man/man5
256     ./usr/lib/x86_64-linux-gnu/gconv
257     ./usr/share/terminfo/t
261     ./etc/ssl/certs
268     ./boot/grub/x86_64-efi
268     ./usr/lib/grub/x86_64-efi
268     ./usr/sbin
269     ./usr/lib/grub/i386-efi
279     ./boot/grub/i386-pc
292     ./usr/share/terminfo/d
293     ./usr/lib/grub/i386-pc
306     ./lib/systemd/system
323     ./usr/share/terminfo/a
330     ./lib/firmware/amdgpu
340     ./usr/share/pve-manager/touch/resources/themes/images/default/pictos
357     ./usr/share/i18n/locales
375     ./usr/share/consolefonts
419     ./usr/lib/python2.7
433     ./usr/share/mime/application
500     ./usr/include/linux
590     ./usr/share/nmap/scripts
724     ./usr/share/man/man3
802     ./usr/share/bash-completion/completions
860     ./usr/share/man/man8
879     ./usr/bin
991     ./usr/share/man/man1
1048    ./usr/lib/x86_64-linux-gnu
4002    ./var/lib/dpkg/info
1889138 ./var/lib/samba/private/msg.sock

So /var/lib/samba/private/msg.sock is huge! Can I safely delete everything in this folder?
 
But I suppose a deletion alone may not be all of it.
I'm sorry, do you mean that all those files are not enough to fill my inodes?
Can I delete all the files with something like rm /var/lib/samba/private/*, or is there a specific samba command?
many thanks
 
I'm sorry, do you mean that all those files are not enough to fill my inodes?
As they are automatically created, they will show up again.
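If you want to clean the stale entries by hand in the meantime, a minimal sketch (assuming, as in the cron approach discussed further down, that each file under msg.sock is named after the PID of the process that created it) would be to remove only the entries whose process no longer exists:

Bash:
# remove msg.sock entries whose owning process is gone (no matching /proc/<pid> directory)
for f in /var/lib/samba/private/msg.sock/*; do
    [ -d "/proc/$(basename "$f")" ] || rm -vf "$f"
done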

This is an issue on Proxmox > 6.1 and there is a bug report about it here, with no progress so far:
https://bugzilla.proxmox.com/show_bug.cgi?id=2333
I think this must be fixed!
Yeah, it is messy, but this is an upstream Samba client issue. Proxmox VE uses the Samba packages from Debian. Best to raise the ulimit value.
https://wiki.debian.org/Limits
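A minimal sketch of what that could look like in /etc/security/limits.conf, assuming the relevant limit here is the open-files one (nofile); the values below are only placeholders:

Code:
# /etc/security/limits.conf -- example entries, values are assumptions
*    soft    nofile    65535
*    hard    nofile    65535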
 
Hi, thanks everyone for this topic! I just had the same problem today: not enough inodes.

The culprit was samba:

Code:
[...]

1274    ./usr/bin
2203    ./usr/share/man/man3
3577    ./var/lib/dpkg/info
1233206 ./var/lib/samba/private/msg.sock

Is it correct that the current workaround is to manually add a cron job to clean these directories?

Is this a correct crontab?
Bash:
# Because of a bug in samba we need to clean 2 dir very often
# Source https://bugzilla.proxmox.com/show_bug.cgi?id=2333#c10
# Cleanup old files every 6 hours
0 */6 * * * find /var/lib/samba/private/msg.sock -type s -mmin +600 -delete
0 */6 * * * find /var/run/samba/msg.lock -type f -mmin +600 -delete

# Cleanup recent files by checking for process every 5 minutes
*/5 * * * * for file in `find /var/lib/samba/private/msg.sock -type s`; do [ -d "/proc/$(basename "$file")" ] || rm -vf "$file"; done;
*/5 * * * * for file in `find /run/samba/msg.lock -type f`; do [ -d "/proc/$(basename "$file")" ] || rm -vf "$file"; done;
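For what it's worth, if these lines go into /etc/cron.d instead of the root crontab, a user field has to be added; a sketch with the same paths and intervals (the filename is hypothetical):

Bash:
# /etc/cron.d/samba-msgsock-cleanup (hypothetical filename) -- note the extra "root" user column
0 */6 * * * root find /var/lib/samba/private/msg.sock -type s -mmin +600 -delete
0 */6 * * * root find /var/run/samba/msg.lock -type f -mmin +600 -delete
*/5 * * * * root for file in $(find /var/lib/samba/private/msg.sock -type s); do [ -d "/proc/$(basename "$file")" ] || rm -vf "$file"; done
*/5 * * * * root for file in $(find /run/samba/msg.lock -type f); do [ -d "/proc/$(basename "$file")" ] || rm -vf "$file"; done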

Thanks
 
I haven't confirmed this, but I suspect that the root cause of this issue in my situation was a WD NAS drive that I use for storing backups being offline for weeks. I fixed the connection issue and deleted the files.
 
