Unable to create new inotify object: Too many open files at /usr/share/perl5 ...

Rudi Swennen

Hello,

I was installing openvpn-as on 70 LXC containers. While installing on the 10th container I got this error:
Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.

I installed openvpn-as in the containers with this command:
for i in {100..170}
do
    pct exec $i -- dpkg -i openvpn-as.deb
done

cipher is the Proxmox node (the KVM host).

From that moment on, pct commands constantly gave these errors:

root@cipher:~/eduvirt# pct list
Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.
root@cipher:~/eduvirt# pveversion -v
proxmox-ve: 4.0-13 (running kernel: 4.2.0-1-pve)
pve-manager: 4.0-39 (running version: 4.0-39/ab3cc94a)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.1.3-1-pve: 4.1.3-7
pve-kernel-4.2.0-1-pve: 4.2.0-13
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-19
qemu-server: 4.0-25
pve-firmware: 1.1-7
libpve-common-perl: 4.0-24
libpve-access-control: 4.0-8
libpve-storage-perl: 4.0-23
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-8
pve-container: 0.9-23
pve-firewall: 2.0-11
pve-ha-manager: 1.0-7
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve1~jessie
root@cipher:~/eduvirt# pct list
Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.
root@cipher:~/eduvirt# pct start 101
Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.
root@cipher:~/eduvirt# pct start 101

Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.

Kind regards,

Rudi
 
Before that, did everything work fine, with all 70 containers running?

I tried to reproduce your error by stress-testing 11 containers with pct exec commands in an endless loop, but without success so far.

Is there anything strange in the logs?

Can you describe your setup more precisely, so that I can try to reproduce the error directly?
Maybe reboot the system and try that scenario again?
 
Curious, what's your `ulimit -n`?
 
Ah, the inotify part is the key bit of information: fs.inotify.max_user_instances defaults to 128, and apparently LXC installs inotify watches for each container's /var/run/utmp. Add that to the inotify instances already present on the host and used by programs inside the containers, and you can very quickly reach the limit of 128 when you fire up 70 containers...
You can try changing the limit... sysctl fs.inotify.max_user_instances=256 should allow a few more containers to start...

(There's a comment in the LXC code saying «that should probably be configurable», I agree ;-) )
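
A rough sketch of how to check current usage and raise the limit at runtime (the counting one-liner assumes a typical Linux /proc layout and is only a sketch, not a command from this thread; 256 is just the figure suggested above):

# count inotify instances currently in use
# (each anon_inode:inotify file descriptor is one instance; the limit is enforced per user)
find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l

# show the current per-user limit
sysctl fs.inotify.max_user_instances

# raise it for the running system (not persistent across reboots)
sysctl fs.inotify.max_user_instances=256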
 
I am also hitting this problem with LXC containers. I had just 19 of them and the pveproxy service died because of it:
Jan 12 06:25:06 hybrid1 pveproxy[10544]: start failed - Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.
Jan 12 06:25:06 hybrid1 pveproxy[10544]: start failed - Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 388.
Jan 12 06:25:06 hybrid1 systemd[1]: pveproxy.service: control process exited, code=exited status=255
Jan 12 06:25:06 hybrid1 systemd[1]: Failed to start PVE API Proxy Server.
Jan 12 06:25:06 hybrid1 systemd[1]: Unit pveproxy.service entered failed state.
 
We are now hitting this much more often on different hosts! Is the limit handled per host or per container?
At the moment the following works fine for us:

fs.inotify.max_user_instances = 512

But I don't think that's a good long-term solution, is it?
 
Hit this problem today on one PVE host running PVE 5.4-13.
On a newer PVE 6.0-9 host the limit is set to 65536.

Can anyone clarify this issue?
 
The mentioned commit has been included since 6.0; it was not selected for backporting yet. We may still do so, but that could take a bit of time. You can always just upgrade to PVE 6.0 to mitigate this and get the latest fixes and features.
Or you can of course also just set the sysctl values yourself.
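
For example, a persistent setting could look something like this (the file name under /etc/sysctl.d/ is an arbitrary choice; 65536 is the value observed on the newer PVE 6.0 host above):

# /etc/sysctl.d/90-inotify.conf
fs.inotify.max_user_instances = 65536

# apply all sysctl config files without rebooting
sysctl --system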
 
