EMLINK: Too many links

aaronstuder
Jul 11, 2020
unable to create chunk store 'backups' subdir "/backups/.chunks/fde6" - EMLINK: Too many links

Any suggestions on how to fix this?
 
Which filesystem is the datastore on?
Could you please also provide the output of:
Code:
mount
ls -1 /backups/.chunks |wc -l
Thanks!
 
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=479556k,nr_inodes=119889,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=100488k,mode=755)
/dev/vda1 on / type ext3 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13195)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=100484k,mode=700)

Code:
root@LooseGeneral-VM:~# ls -1 /backups/.chunks |wc -l
64998
 
Code:
root@LooseGeneral-VM:~# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  469M     0  469M   0% /dev
tmpfs          tmpfs      99M  3.0M   96M   4% /run
/dev/vda1      ext3      916G  2.1G  867G   1% /
tmpfs          tmpfs     491M     0  491M   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     491M     0  491M   0% /sys/fs/cgroup
tmpfs          tmpfs      99M     0   99M   0% /run/user/0

Seems like ext3. Maybe this is not supported?
 
Seems like ext3. Maybe this is not supported?
no - I'll send a patch for the docs to make this more explicit:
* each datastore stores the chunks in subdirectories (for performance reasons), which have 2 bytes (written as hexdigits) as names
* this yields 65536 directories (+ . and ..)
Some filesystems (ext3, or ext4 with certain options disabled; they are enabled in the default config) support fewer subdirectories per directory.

-> please recreate the datastore as ext4/xfs/zfs
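To make the arithmetic concrete, here is a small shell sketch. It assumes the subdirectory names run 0000 through ffff as described above, and that ext3 caps the links on a directory inode at roughly 32000 (every subdirectory adds one link to its parent via its ".." entry), which is why creation fails partway through with EMLINK:

```shell
# Each chunk subdirectory is named by the first two bytes of the chunk
# digest, written as four hex digits: 0000 .. ffff.
printf 'subdirectories needed: %d\n' $((16 ** 4))   # 65536

# ext3 allows roughly 32000 links on a directory inode; each subdirectory
# adds a link to its parent, so the chunk store cannot be fully created.
printf 'approximate ext3 per-directory limit: %d\n' 32000
```

This also matches the `ls -1 /backups/.chunks | wc -l` count of 64998 posted above: the datastore creation got most of the way to 65536 before hitting the limit.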
 
no - I'll send a patch for the docs to make this more explicit:
* each datastore stores the chunks in subdirectories (for performance reasons), which have 2 bytes (written as hexdigits) as names
* this yields 65536 directories (+ . and ..)
Some filesystems (ext3, or ext4 with certain options disabled; they are enabled in the default config) support fewer subdirectories per directory.

-> please recreate the datastore as ext4/xfs/zfs

Thank you!
 
To resolve this issue I ended up redoing the installation using the ISO file. However, ext3 was one of the options during the install. I suggest you remove that option, since it will not work :)
 
OK, I found one other issue, I think? I did the ISO installation, and when I went to log in via the web GUI it would not work. The command line worked fine. I updated the password via the command line and was then able to log in to the web GUI.
 
I did the installation again using a new password, and I didn't have the password issue this time. (This might be my fault.)

I selected ZFS during installation, but it seems to be LVM? Hm....
 
Hi, this also seems to be an issue on an NFS mount ... any idea how I can increase the limit there? The NFS share is mounted from a Synology ext4 volume.
 
Hi, this also seems to be an issue on an NFS mount ... any idea how I can increase the limit there? The NFS share is mounted from a Synology ext4 volume.
Maybe the ext4 filesystem on the Synology does not have the dir_nlink feature set.
If you have shell access, could you post the output of:
Code:
tune2fs -l <device>
(replace <device> with the block device the ext4 filesystem is on)
Thanks!
 
I hope it helps ...

Code:
admin@ApollonNAS:~$ sudo tune2fs -l /dev/mapper/vg1-volume_1
Password:
tune2fs 1.42.6 (21-Sep-2012)
Filesystem volume name: 1.41.12-3211
Last mounted on: /volume1
Filesystem UUID: f89e237d-662a-483d-9497-4e426109ea79
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: unsigned_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 182853632
Block count: 731382784
Reserved block count: 25600
Free blocks: 511364210
Free inodes: 182518139
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 849
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Jun 15 15:49:03 2013
Last mount time: Wed Jul 22 06:23:36 2020
Last write time: Wed Jul 22 06:23:36 2020
Mount count: 781
Maximum mount count: 27
Last checked: Sat Jun 15 15:49:03 2013
Check interval: 15552000 (6 months)
Next check after: Thu Dec 12 14:49:03 2013
Lifetime writes: 10 TB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: ebf0605d-6047-c658-32bd-e38b948bbbb8
Journal backup: inode blocks

It seems that "dir_index" is missing from the filesystem features, from what I read on the web.
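For what it's worth, dir_index can in principle be enabled after the fact with tune2fs followed by an e2fsck pass. The sketch below exercises this on a throwaway ext3 image file rather than a real device (the /tmp path and 64M size are arbitrary choices for the demo); whether enabling it on the Synology volume would change anything here is untested:

```shell
# Sketch: enable dir_index on an existing ext3 filesystem, demonstrated on
# a scratch image file so no real device is touched. On a NAS you would
# point tune2fs at the actual block device, with the filesystem unmounted.
truncate -s 64M /tmp/ext3-demo.img
mkfs.ext3 -F -q -O ^dir_index /tmp/ext3-demo.img   # start without dir_index
tune2fs -O dir_index /tmp/ext3-demo.img            # turn the feature on
e2fsck -fp /tmp/ext3-demo.img || true              # rebuild/verify afterwards
tune2fs -l /tmp/ext3-demo.img | grep '^Filesystem features'
```

Note that dir_index only speeds up lookups in large directories; the hard subdirectory-count limit itself is governed by dir_nlink, which the tune2fs output above shows is already set on this volume.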
 
