[SOLVED] No space left on device

thera314

New Member
Jun 15, 2018
Hello,

I can't find a way to free up space on my Proxmox server.

My containers are working properly, but I can't run any updates:

apt-get update
Code:
Error writing to output file - write (28: No space left on device)
Error writing to file - write (28: No space left on device)

Syslog
Code:
Jun 14 00:57:44 climaxweb pveproxy[1749]: worker 12314 started
Jun 14 00:57:44 climaxweb pveproxy[1749]: worker 12315 started
Jun 14 00:57:44 climaxweb pveproxy[12312]: Warning: unable to close filehandle GEN4 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1572.
Jun 14 00:57:44 climaxweb pveproxy[12312]: error writing access log
Jun 14 00:57:44 climaxweb pveproxy[12312]: worker exit
Jun 14 00:57:44 climaxweb pveproxy[1749]: worker 12312 finished
Jun 14 00:57:44 climaxweb pveproxy[1749]: starting 1 worker(s)
Jun 14 00:57:44 climaxweb pveproxy[1749]: worker 12317 started
Jun 14 00:57:44 climaxweb pveproxy[12315]: Warning: unable to close filehandle GEN4 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1572.
Jun 14 00:57:44 climaxweb pveproxy[12315]: error writing access log

Here is the result of df -h:

Code:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  314M  2.9G  10% /run
/dev/md1        467G  455G     0 100% /
tmpfs            16G   34M   16G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/md0        282M   53M  210M  20% /boot
/dev/fuse        30M   20K   30M   1% /etc/pve
tmpfs           3.2G     0  3.2G   0% /run/user/1000

I do not understand why my /dev/md1 shows 100% usage (a simple cross-multiplication should give 97%, shouldn't it?).

I tried deleting some containers but, unfortunately, the available space stays at zero...

Do you have an idea or trick to resolve this weird problem?

Kindest regards,

Rémi
 
You could use 'du' or 'ncdu' to see which files and directories are the biggest and delete them.
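
For example (assuming the full filesystem is the root one), something like this keeps the scan on a single filesystem:

Code:
# interactive disk usage browser; -x stays on one filesystem
ncdu -x /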
 
Thanks for your help, Dominik.

Unfortunately, even when I delete a file (as was the case with a 16GB Debian container), the "Available space" stays at "0", and I do not understand why...
 
If disk space does not become available after deletion, then some process must still be accessing that file.
Try "lsof -n | grep FILENAME" to find out which process is accessing it, and restart that process.
If lsof is not available, install it with "apt install lsof".
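
One quick way to spot processes that still hold deleted files open:

Code:
# open files with a link count below 1, i.e. deleted but still held open
lsof -n +L1

# or grep the full listing for entries marked (deleted)
lsof -n | grep deleted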
 
Hi,

Take into account that on most Linux filesystems (ext*), around 2% of the total space is reserved for root only. So delete as many files as you can, and after that try to restart your CTs.
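
If the root filesystem is ext3/4, you can check (and temporarily lower) that reservation with tune2fs; /dev/md1 here is just taken from your df output:

Code:
# show the number of blocks reserved for root
tune2fs -l /dev/md1 | grep -i 'reserved block'

# temporarily drop the reservation to 1% to get some breathing room
tune2fs -m 1 /dev/md1
# (put it back afterwards, e.g. tune2fs -m 5 /dev/md1, once the cleanup is done)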
 
Hi,

Thanks gallew for your assistance. No luck, the problem keeps occurring even after a reboot.

Thanks guletz for this idea. I've deleted all the files that I can, but I still can't run any updates. I can create files manually, and I can wget files onto my "/dev/md1" partition, but no apt-get.

What I can't understand is that df -h gives:

Code:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  306M  2.9G  10% /run
/dev/md1        467G  455G     0 100% /
tmpfs            16G   34M   16G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/md0        282M   53M  210M  20% /boot
/dev/fuse        30M   20K   30M   1% /etc/pve
tmpfs           3.2G     0  3.2G   0% /run/user/1000

Why 100% usage for /dev/md1? 455G used out of a total of 467G should give 97%, with 12G of free space to run my updates.

I think that apt-get sees this "100%" value and refuses to do the updates, even though there are 12G of free space. I would like to find a way to work around this issue.
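
(From what I can find, df's Use% does not seem to be a simple Used/Size ratio, which might explain the 100%:)

Code:
# df's Use% is roughly Used / (Used + Avail), rounded up,
# and Avail excludes the blocks reserved for root:
#   455G / (455G + 0G) -> 100%
# The "missing" 467G - 455G = 12G would be the root-reserved area,
# which unprivileged processes cannot write into.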

Thanks for your kind assistance,

Rémi
 
Why 100% usage for /dev/md1? 455G used out of a total of 467G should give 97%, with 12G of free space to run my updates. ...

... like I said, do not count on free space below ~3%. Also keep in mind that a file with a size of 2K still uses 4K on disk (that is the minimum block size).
 
What filesystem type do you have on /dev/md1? Is it BTRFS by chance? If it is and you're using snapshots, they will continue to reference any files you delete from the live filesystem. So to clear space you would need to remove files from the live filesystem AND delete any snapshots that were made when the files existed.
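
If it does turn out to be BTRFS, something along these lines would show whether snapshots are pinning the space (the snapshot path is just a placeholder):

Code:
# list only the snapshots on the filesystem
btrfs subvolume list -s /

# delete a snapshot that still references the removed files
btrfs subvolume delete /path/to/snapshot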

If this is an ext3/4 filesystem, then your symptoms are a mystery. It's possible to run into these types of issues under LVM with snapshots but I don't see any indication that you're using LVM.
 
If you're deleting files but they're not freeing up space, it's either because you're using a CoW filesystem (e.g. ZFS) or you're deleting them from another (mounted) filesystem.

to ignore other filesystems use:

du -hsx /*

then drill down into the directories that appear to be oversized and repeat, e.g.

du -hsx /var/*

until you find what killed your free space. The most obvious places to start looking are /var and /home (/root if you're logging in that way; if you do, stop that).
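
If you prefer a ranked list instead of drilling down by hand, something like this works too (depth and count are arbitrary):

Code:
# largest first-level directories on the root filesystem, biggest last
du -hx --max-depth=1 / 2>/dev/null | sort -h | tail -15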

A suggestion to the Proxmox devs: make /var and /home their own partitions in the default install.
 
... like I said, do not count on free space below ~3%. Also keep in mind that a file with a size of 2K still uses 4K on disk (that is the minimum block size).

Thanks guletz. Maybe that's it... I will try to back up and then delete another (huge) container to see if the magic happens. :)
 
If this is an ext3/4 filesystem, then your symptoms are a mystery. It's possible to run into these types of issues under LVM with snapshots but I don't see any indication that you're using LVM.

My /dev/md1 is an ext3/4 filesystem. I'm not using LVM. Thanks!
 
If you're deleting files but they're not freeing up space, it's either because you're using a CoW filesystem (e.g. ZFS) or you're deleting them from another (mounted) filesystem.

to ignore other filesystems use: ...

Thanks for your help. My containers (1 raw & 2 qcow2) are taking up all the space in /var/lib/vz/images/ ... The problem is that I need to shrink them, and it seems to be a difficult task.
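
From what I can gather, re-exporting a qcow2 can reclaim space the guest no longer uses, roughly like this (file names are placeholders, the VM has to be stopped, and there must be enough free space somewhere for the copy):

Code:
# rewrite the image, dropping unallocated/zeroed clusters
# (only helps for space the guest has actually freed or zeroed)
qemu-img convert -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-compact.qcow2

# check the new on-disk size before swapping it in
qemu-img info vm-100-disk-1-compact.qcow2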

Kindest regards,

Rémi
 
Thanks guletz for your previous reply: I backed up, shrank, then restored/reinstalled my Windows Server 2K16. With 80GB left, it's now working perfectly :)

Thread is solved.

Kindest regards,

Rémi
 
Good to know. You can optimise your vHDD (ZFS volblocksize) if you know how big most of your files are (many small files -> 8-16K, bigger files -> 32-64K, and so on).
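
For anyone on ZFS: volblocksize can only be set when the zvol is created, so a rough example would be (pool name and size are made up):

Code:
# 32G zvol with a 16K block size, for workloads dominated by small-ish files
zfs create -V 32G -o volblocksize=16K rpool/data/vm-101-disk-0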
 
