Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1804.

FuriousRage

Renowned Member
Oct 17, 2014
When I check the syslog of my newly created PVE installation, I see this message getting spammed a lot. What can I do about it?
Code:
Dec 04 15:45:41 pve pveproxy[2236]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1804.
Dec 04 15:45:41 pve pveproxy[2236]: error writing access log
Dec 04 15:45:42 pve pveproxy[2236]: worker exit
Dec 04 15:45:42 pve pveproxy[1616]: worker 2236 finished
Dec 04 15:45:42 pve pveproxy[1616]: starting 1 worker(s)
Dec 04 15:45:42 pve pveproxy[1616]: worker 2254 started
 
Hi,
please post the output of
Code:
pveversion -v
df -h
df -ih
ls -lh
ls -lh /var/log/pveproxy
 
pveversion -v
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph: 16.2.6-pve2
ceph-fuse: 16.2.6-pve2
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

df -h
Code:
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G  160M  1.4G  11% /run
/dev/mapper/pve-root  6.8G  6.8G     0 100% /
tmpfs                 7.8G   43M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
VMs                   5.3T  128K  5.3T   1% /VMs
/dev/fuse             128M   16K  128M   1% /etc/pve
/dev/sdg1             220G  5.3G  203G   3% /mnt/pve/ISOs
tmpfs                 1.6G     0  1.6G   0% /run/user/0

df -ih

Code:
Filesystem           Inodes IUsed IFree IUse% Mounted on
udev                   2.0M   636  2.0M    1% /dev
tmpfs                  2.0M   963  2.0M    1% /run
/dev/mapper/pve-root   448K   59K  390K   13% /
tmpfs                  2.0M    92  2.0M    1% /dev/shm
tmpfs                  2.0M    16  2.0M    1% /run/lock
VMs                     11G     6   11G    1% /VMs
/dev/fuse              256K    33  256K    1% /etc/pve
/dev/sdg1               14M    21   14M    1% /mnt/pve/ISOs
tmpfs                  396K    18  396K    1% /run/user/0

ls -lh

Code:
drwxr-xr-x 2 root root 4.0K Dec 4 16:35 dump
drwxr-xr-x 2 root root 4.0K Dec 4 16:41 images
drwx------ 2 root root  16K Dec 4 16:35 lost+found
drwxr-xr-x 2 root root 4.0K Dec 4 16:35 private
drwxr-xr-x 2 root root 4.0K Dec 4 16:35 snippets
drwxr-xr-x 4 root root 4.0K Dec 4 16:35 template

ls -lh /var/log/pveproxy
Code:
total 256K
-rw-r----- 1 www-data www-data    0 Dec 6 20:16 access.log
-rw-r----- 1 www-data www-data 252K Dec 5 00:00 access.log.1
 
df -h
Code:
/dev/mapper/pve-root  6.8G  6.8G     0 100% /
Your root filesystem is full. Use e.g. du -hs * | sort -h starting at / and repeating for the directory with the most usage, to see what is eating up your space.
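For example, a depth-limited variant of the same idea that stays on the root filesystem (just a sketch, not from the original post):
Code:
# -x: stay on this filesystem, -d1: show one directory level, -h: human-readable sizes
du -hxd1 / | sort -h
# then repeat inside whichever directory turns out to be the biggest, e.g.:
du -hxd1 /var | sort -h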
 
Your root filesystem is full. Use e.g. du -hs * | sort -h starting at / and repeating for the directory with the most usage, to see what is eating up your space.
Starting from root:
Code:
2.9G    usr
3.8G    var

In /usr, lib/modules is the largest, with the kernel module directories at almost 400 MB each.

During the install I let the installer create the filesystem automatically, so it's bad that it created a very small root and left 30 GB for other use.
 
In /usr, lib/modules is the largest, with the kernel module directories at almost 400 MB each.
I'd not touch /usr if you're not absolutely certain about what you're doing.

During the install I let the installer create the filesystem automatically, so it's bad that it created a very small root and left 30 GB for other use.
Well, you're the admin and know best whether you require the local-lvm storage for VMs or not; those who do would complain if the decision went the other way.

If you really do not require the local thin LVM on /dev/pve/data (the local-lvm storage in PVE's storage config), you could remove it and resize the root LV to use its space, e.g. follow:
https://forum.proxmox.com/threads/need-to-delete-local-lvm-and-reuse-the-size.34087/#post-402227
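For reference, a condensed sketch of what the linked post describes (assuming the default layout with an ext4 root LV at pve/root, and that nothing you need is stored on local-lvm; lvremove is destructive, so double-check first):
Code:
# remove the local-lvm storage entry in PVE first (Datacenter -> Storage), then:
lvremove /dev/pve/data                # destroys the thin pool and everything on it
lvresize -l +100%FREE /dev/pve/root   # grow the root LV into the freed space
resize2fs /dev/mapper/pve-root        # grow the ext4 filesystem to match the LV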
 
I'd not touch /usr if you're not absolutely certain about what you're doing.


Well, you're the admin and know best whether you require the local-lvm storage for VMs or not; those who do would complain if the decision went the other way.

If you really do not require the local thin LVM on /dev/pve/data (the local-lvm storage in PVE's storage config), you could remove it and resize the root LV to use its space, e.g. follow:
https://forum.proxmox.com/threads/need-to-delete-local-lvm-and-reuse-the-size.34087/#post-402227
Getting stuck at

Code:
root@pve:/etc# lvremove /dev/pve/data
  /etc/lvm/archive/.lvm_pve_1742147_387539163: write error failed: No space left on device
root@pve:/etc#
 
You can try to free up some space relatively safely by:
  • cleaning the package cache of old updates: apt autoclean
  • if that's not enough, trying the more intrusive variant: apt-get clean
  • if that's still not enough, checking /var (especially /var/log) for data that is not required anymore; check with
    du -hxd2 /var | sort -h to see where the 3.8G are actually allocated
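Collected as a copy-pasteable sequence (exactly the steps above; nothing here touches guest data):
Code:
apt autoclean              # remove cached .deb files of obsolete package versions
apt-get clean              # remove the entire package cache
du -hxd2 /var | sort -h    # show where the space in /var is actually allocated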
 
You can try to free up some space relatively safely by:
  • cleaning the package cache of old updates: apt autoclean
  • if that's not enough, trying the more intrusive variant: apt-get clean
  • if that's still not enough, checking /var (especially /var/log) for data that is not required anymore; check with
    du -hxd2 /var | sort -h to see where the 3.8G are actually allocated
The apt-get clean cleared up enough room to run the lvremove I was stuck at. Thanks, I'm going to continue down the rest of that list and fix this.
 
The apt-get clean cleared up enough room to run the lvremove I was stuck at. Thanks, I'm going to continue down the rest of that list and fix this.
Unfortunately, I cannot continue past this point because I only get "no space left on device" messages.
 
Just went ahead and reinstalled Proxmox instead, because I still got stuck on "full drive" problems while trying to resize etc.
So now I am at:
Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G  924K  1.6G   1% /run
/dev/mapper/pve-root  6.8G  2.5G  4.0G  39% /
tmpfs                 7.8G   46M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   16K  128M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0
root@pve:~#
 
Hey, you're not alone; I've been having the same issues with pve-root maxing out. Something seems to get stuck on a recent upgrade. For me, the Ceph log and other logs were HUGE and taking up all the space, so I deleted the Ceph log and removed the OSD from that specific node. The update completed and everything looked OK, but when I came back a couple of hours later the root was filling up again.

Code:
Feb 14 09:27:18 node5 pveproxy[234018]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1809.
Feb 14 09:27:18 node5 pveproxy[234018]: error writing access log
Feb 14 09:27:19 node5 pveproxy[234018]: worker exit
Feb 14 09:27:19 node5 pveproxy[1146]: worker 234018 finished
Feb 14 09:27:19 node5 pveproxy[1146]: starting 1 worker(s)
Feb 14 09:27:19 node5 pveproxy[1146]: worker 234028 started
Feb 14 09:27:19 node5 pveproxy[234020]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1809.
Feb 14 09:27:19 node5 pveproxy[234020]: error writing access log
Feb 14 09:27:20 node5 pveproxy[234020]: worker exit
Feb 14 09:27:20 node5 pveproxy[1146]: worker 234020 finished
Feb 14 09:27:20 node5 pveproxy[1146]: starting 1 worker(s)
Feb 14 09:27:20 node5 pveproxy[1146]: worker 234032 started
Feb 14 09:27:20 node5 pveproxy[234028]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1809.
Feb 14 09:27:20 node5 pveproxy[234028]: error writing access log
Feb 14 09:27:21 node5 pveproxy[234028]: worker exit
Feb 14 09:27:21 node5 pveproxy[1146]: worker 234028 finished
Feb 14 09:27:21 node5 pveproxy[1146]: starting 1 worker(s)
Feb 14 09:27:21 node5 pveproxy[1146]: worker 234036 started
Feb 14 09:27:21 node5 pveproxy[234024]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1809.
Feb 14 09:27:21 node5 pveproxy[234024]: error writing access log
Feb 14 09:27:22 node5 pveproxy[234024]: worker exit

Workers are starting, but they are not able to do whatever they were doing because there is no space left... again.

So I did the recommended steps to free a little more space:
  • apt autoclean
  • apt-get clean
Then I noticed the freeze-ups stopped, the error went away, and pmxcfs started receiving logs again. I also got this:

Code:
Feb 14 09:32:12 node5 systemd[237175]: gpgconf: error running '/usr/lib/gnupg/scdaemon': probably not installed
This appeared whenever a new session was opened for the root user from another node (me looking at that node). Since I don't have smart cards, I'm not sure why this is needed...

But I went ahead and installed it with apt-get install scdaemon,

and that error no longer pops up in the log now.

In any event, doing the apt autoclean and apt-get clean freed up another 2.5 GB of space, so it seems to be working again, but pve-root is still scarily full.

Code:
root@node5:~# df -h
Filesystem                                                Size  Used Avail Use% Mounted on
udev                                                      7.8G     0  7.8G   0% /dev
tmpfs                                                     1.6G 1000K  1.6G   1% /run
/dev/mapper/pve-root                                       19G   17G  772M  96% /
tmpfs                                                     7.9G   66M  7.8G   1% /dev/shm
tmpfs                                                     5.0M     0  5.0M   0% /run/lock
/dev/sda2                                                 511M  324K  511M   1% /boot/efi
/dev/fuse                                                 128M   76K  128M   1% /etc/pve
10.0.1.1,10.0.1.2,10.0.1.5,10.0.1.6,10.0.1.7,10.0.90.0:/  297G   27G  271G   9% /mnt/pve/ISO_store1
tmpfs                                                     1.6G     0  1.6G   0% /run/user/0
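Since logs were the big consumers here, a hedged follow-up for trimming them (not from this thread; the 200M journal cap is just an example value):
Code:
du -hxd1 /var/log | sort -h     # see which log directories are the biggest
journalctl --vacuum-size=200M   # shrink the systemd journal to roughly 200 MB
# rotated logs (*.1, *.gz) under /var/log can usually be deleted once reviewed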
 
My pve-root is also full. It was sitting at 100% until I ran the clean command above.
du complains about some /proc files it cannot access, but other than that it seems normal.

Code:
root@fred:/# du -hs * | sort -h
du: cannot access 'proc/19345/task/19345/fd/4': No such file or directory
du: cannot access 'proc/19345/task/19345/fdinfo/4': No such file or directory
du: cannot access 'proc/19345/fd/3': No such file or directory
du: cannot access 'proc/19345/fdinfo/3': No such file or directory
0       bin
0       lib
0       lib32
0       lib64
0       libx32
0       proc
0       sbin
0       sys
512     fast
4.0K    home
4.0K    media
4.0K    opt
4.0K    srv
4.0K    zfs500
16K     lost+found
40K     tmp
1.9M    run
6.9M    etc
46M     dev
475M    boot
573M    root
3.9G    var
4.1G    usr
427G    mnt

All mounts are correct and mounted:



Code:
root@fred:/mnt/pve# du -hs * | sort -h
4.0K    data
4.0K    fastfred
4.0K    true500
4.0K    ZFS
4.0K    ZFS_backup
4.0K    ZFS_disks
59G     crucial
60G     data250
133G    backup
177G    ssd

but df is still showing 99%?

Code:
root@fred:/mnt/pve# df
Filesystem                          1K-blocks      Used  Available Use% Mounted on
udev                                 32922980         0   32922980   0% /dev
tmpfs                                 6591340      1940    6589400   1% /run
/dev/mapper/pve-root                 57225328  53214104    1071936  99% /
tmpfs                                32956680     46800   32909880   1% /dev/shm
tmpfs                                    5120         0       5120   0% /run/lock
/dev/sdi1                           239253280  61984732  165042288  28% /mnt/pve/data250
/dev/sdc1                           229647672 184623064   33286344  85% /mnt/pve/ssd
/dev/sda1                           245023328  61540060  170963984  27% /mnt/pve/crucial
fast                                 86106240       128   86106112   1% /fast
/dev/fuse                              131072        32     131040   1% /etc/pve
10.160.20.106:/mnt/trueraid/backup 1153739776 139367424 1014372352  13% /mnt/pve/backup
tmpfs                                 6591336         0    6591336   0% /run/user/0
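
A common reason for du and df disagreeing like this (an assumption on my part, not something confirmed in this thread) is either deleted files that are still held open by a process, or data hidden underneath a mount point. A quick way to check both:
Code:
lsof +L1                           # open files that were deleted but still occupy space
mkdir -p /mnt/rootonly             # bind-mount / to look underneath the mount points
mount --bind / /mnt/rootonly
du -hxd1 /mnt/rootonly | sort -h
umount /mnt/rootonly && rmdir /mnt/rootonly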

pveversion

Code:
root@fred:/var/log/pveproxy# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

What more can I do?
 
