/var/tmp filled with pveupload files

lk777 · Member · Oct 27, 2021
I got the following message during an unsuccessful upload of some ISO files:

Code:
pveproxy[44053]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm

The PVE node summary indicated that my root (/) filesystem was full.

After running du -hsx I found that /var/tmp was filled with pveupload-* files. A restart didn't clean them up, so I deleted them manually. After that I was able to upload an ISO file, and /var/tmp stayed empty. Does this folder keep only failed uploads?
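In case it helps anyone else, this is roughly the cleanup I did; the age filter is just a precaution of mine so a still-running upload is left alone:

Bash:
# list leftover upload temp files with their sizes
ls -lh /var/tmp/pveupload-*
# delete only files older than a day, to avoid touching an upload in progress
find /var/tmp -maxdepth 1 -name 'pveupload-*' -mtime +1 -delete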

And in the /root folder I found debian-11-genericcloud-amd64-20211011-792.raw. I had imported the Debian cloud image into the ZFS /pool storage. Why was this raw file created under /root? It doesn't seem right. I checked the ZFS storage and all VM disks are in the right place.


UPDATE:

To answer my own question about debian-11-genericcloud-amd64-20211011-792.raw: I believe I downloaded it to /root with wget before importing it. So /root is another folder that has to be monitored for storage usage.
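For reference, a cleaner workflow that avoids leaving a stray copy behind is to download to a scratch directory, import, then delete. This is just a sketch; the VM ID (100), the storage name (pool), and the image URL are placeholders for your own setup:

Bash:
# download to a scratch location instead of /root
cd /var/tmp
wget -O debian-11-genericcloud-amd64.raw <debian-cloud-image-url>
# import the raw image as an unused disk of VM 100 on the "pool" storage
qm importdisk 100 debian-11-genericcloud-amd64.raw pool
# remove the source file once the import succeeds
rm debian-11-genericcloud-amd64.raw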
 
hi,

Does this folder keep only failed uploads?
yes, that should be the case. normally they're cleaned up when you cancel the upload, but apparently not when it's aborted abruptly or fails for some other reason.

This is another folder that has to be monitored for the storage usage.
/root is part of your root filesystem, so it's already covered by what you're monitoring (/dev/mapper/pve-root in case of the default LVM-thin installation)
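if you want to double-check which filesystem a given directory lives on, df will tell you directly:

Bash:
# shows the filesystem containing /root (the same device as /)
df -h /root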
 
I had a different symptom, but this post helped me resolve the issue. I figured I'd reply in case it helps someone else.

When attempting to do an apt update, I ran into a string of "Error writing to file..." responses:

Bash:
root@pve2:/# apt update
Get:1 http://ftp.debian.org/debian bullseye InRelease [116 kB]
Err:1 http://ftp.debian.org/debian bullseye InRelease                                   
  Error writing to file - write (28: No space left on device) [IP: 151.101.190.132 80]
Get:2 http://security.debian.org/debian-security bullseye-security InRelease [48.4 kB]   
Err:2 http://security.debian.org/debian-security bullseye-security InRelease                                                 
  Error writing to file - write (28: No space left on device) [IP: 151.101.190.132 80]
Get:3 http://ftp.debian.org/debian bullseye-updates InRelease [44.1 kB]                                                       
Err:3 http://ftp.debian.org/debian bullseye-updates InRelease                                     
  Error writing to file - write (28: No space left on device) [IP: 151.101.190.132 80]
Get:4 http://download.proxmox.com/debian/pve bullseye InRelease [2,661 B]
Err:4 http://download.proxmox.com/debian/pve bullseye InRelease
  Error writing to file - write (28: No space left on device) [IP: 144.217.225.162 80]


I ran a number of ls, df & du commands with a lot of different switches based on other similar posts here in the forums and on Reddit.

A solution that worked for many others was running df -ih to check inode usage: if the inodes are maxed out, you've found the issue. In my case, though, I was only using 8% of the available inodes; my problem was effectively the opposite, plenty of inodes free but no blocks left. Note the /dev/mapper/pve-root line below.
Bash:
root@pve2:/var/tmp# df -ih
Filesystem                           Inodes IUsed IFree IUse% Mounted on
udev                                    16M   668   16M    1% /dev
tmpfs                                   16M  1.1K   16M    1% /run
/dev/mapper/pve-root                   789K   56K  733K    8% /
tmpfs                                   16M    94   16M    1% /dev/shm
tmpfs                                   16M    13   16M    1% /run/lock
nvme2                                  3.7G     6  3.7G    1% /nvme2
ssd-mirror2                            7.1G     6  7.1G    1% /ssd-mirror2
hdd-rw-cache2                           15G     6   15G    1% /hdd-rw-cache2
/dev/fuse                              256K    51  256K    1% /etc/pve
192.168.31.31:/volume2/vm_ct_backups      0     0     0     - /mnt/pve/vm_ct_backups
192.168.31.31:/volume2/vm_ct_nfs          0     0     0     - /mnt/pve/vm_ct_nfs
tmpfs                                  3.2M    18  3.2M    1% /run/user/0
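
Side note: a quick way to tell which resource is actually exhausted is to compare block and inode usage for the same filesystem side by side:

Bash:
# byte usage vs. inode usage for the root filesystem
df -h / && df -ih /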


I then ran the du -sh command and it showed:

Code:
root@pve2:/# du -sh * .[!.]*  # to show hidden files that might be chewing up space on /
0       bin
91M     boot
48M     dev
4.8M    etc
512     hdd-rw-cache2
4.0K    home
0       lib
0       lib32
0       lib64
0       libx32
16K     lost+found
4.0K    media

Not all the directories were shown in that first pass. The output below shows the progression of du variants that led to the discovery that the /var directory was 9 GB in size. This stackoverflow post was a great help.

Bash:
root@pve2:/# du -sh * | sort -h
^C
root@pve2:/# du -cksh | sort -rn
du: cannot access './proc/11854/task/11854/fd/3': No such file or directory
du: cannot access './proc/11854/task/11854/fdinfo/3': No such file or directory
du: cannot access './proc/11854/fd/4': No such file or directory
du: cannot access './proc/11854/fdinfo/4': No such file or directory
1.3T    total
1.3T    .
root@pve2:/# du -sh $(ls -A) | sort -h
du: cannot access 'proc/12058/task/12058/fd/4': No such file or directory
du: cannot access 'proc/12058/task/12058/fdinfo/4': No such file or directory
du: cannot access 'proc/12058/fd/3': No such file or directory
du: cannot access 'proc/12058/fdinfo/3': No such file or directory
0       bin
0       lib
0       lib32
0       lib64
0       libx32
0       proc
0       sbin
0       sys
512     hdd-rw-cache2
512     nvme2
512     ssd-mirror2
4.0K    home
4.0K    media
4.0K    opt
4.0K    srv
16K     lost+found
48K     tmp
52K     root
1.4M    run
4.8M    etc
48M     dev
91M     boot
2.7G    usr
9.0G    var
1.3T    mnt
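
In hindsight, a single du with -x (stay on one filesystem, which skips /proc, /sys, and the large /mnt mounts) and --max-depth=1 gets there faster; this is the variant I'd reach for next time:

Bash:
# per-directory usage on the root filesystem only, sorted smallest to largest
du -xh --max-depth=1 / 2>/dev/null | sort -h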



I dug into /var and ran "root@pve2:/var# ls -alh * .[!.]*", which gave me the output below (an excerpt from the complete response):

Bash:
tmp:
total 8.4G
drwxrwxrwt  5 root     root     4.0K Apr 26 15:40 .
drwxr-xr-x 11 root     root     4.0K Nov 22 02:29 ..
-rw-r--r--  1 root     root       16 Mar 14 10:33 pve-reserved-ports
-rw-------  1 www-data www-data 8.4G Mar 13 23:21 pveupload-56323cca40176a004420d5cffcceadd2
-rw-------  1 www-data www-data    0 Mar 13 23:40 pveupload-b002021d40ba051b486dc19e2ff6588c
drwx------  3 root     root     4.0K Apr 26 15:40 systemd-private-02f50a9957314462a49cebafb26efad6-chrony.service-f6CHBg
drwx------  3 root     root     4.0K Apr 26 15:40 systemd-private-02f50a9957314462a49cebafb26efad6-corosync.service-GOnRlj
drwx------  3 root     root     4.0K Apr 26 15:40 systemd-private-02f50a9957314462a49cebafb26efad6-systemd-logind.service-bOJBsi

The resolution that worked for me was using scp to transfer the failed upload files off the node (in case I wanted to inspect them) and then deleting them with "rm -rf pveupload-*".
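Concretely, it was something along these lines; the destination host and path are placeholders:

Bash:
# copy the stuck upload files off the node first, in case they're worth inspecting
scp /var/tmp/pveupload-* user@otherhost:/some/backup/path/
# then remove them from the node
cd /var/tmp && rm -rf pveupload-*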

To verify, I ran df -kh; the /dev/mapper/pve-root filesystem is now at 30% usage, and apt update works as expected:

Bash:
root@pve2:/var/tmp# df -kh
Filesystem                            Size  Used Avail Use% Mounted on
udev                                   63G     0   63G   0% /dev
tmpfs                                  13G  9.4M   13G   1% /run
/dev/mapper/pve-root                   13G  3.4G  8.1G  30% /
tmpfs                                  63G   48M   63G   1% /dev/shm
tmpfs                                 5.0M     0  5.0M   0% /run/lock
nvme2                                 1.9T  128K  1.9T   1% /nvme2
ssd-mirror2                           3.6T  128K  3.6T   1% /ssd-mirror2
hdd-rw-cache2                         7.2T  128K  7.2T   1% /hdd-rw-cache2
/dev/fuse                             128M   28K  128M   1% /etc/pve
192.168.31.31:/volume2/vm_ct_backups   44T   29T   16T  65% /mnt/pve/vm_ct_backups
192.168.31.31:/volume2/vm_ct_nfs       44T   29T   16T  65% /mnt/pve/vm_ct_nfs
tmpfs                                  13G     0   13G   0% /run/user/0
 
