Backup Job Error - No space left on device

HomerJ.S.

Aug 7, 2024
Hi Community,

I recently set up a PBS. My PVE has been running for a few months and now it's time for reliable backups, so I brought my old Ubuntu server back to life. Here are the specs:

Intel(R) Celeron(R) CPU G3900
4 GB RAM
System SSD 120 GB
2× 8 TB HDD
(an additional 3 TB drive installed after the errors came up)

The whole system is old, but there have been no errors so far. The SMART values of the hard drives pass.

Here is a list of my VMs and containers:
VM - OpenMediaVault - 4 TB
CT - Openhab - 60 GB
CT - Debian with Mosquitto - 8 GB
CT - Debian with paperless - 13 GB
CT - Debian with NGX - 6 GB
CT - Debian with Nextcloud - 1 TB
CT - Debian with PhotoPrism - 1 TB
CT - Debian with Immich - 100 GB
VM - Debian with TVHeadend - 1 TB

I was able to back up all of these VMs and CTs to an HDD on the PVE server.

I installed PBS, connected the 8 TB drives to my PVE and started the backup jobs. Most of the backups run without errors, but two (Nextcloud and PhotoPrism) fail with an error.

Here is the log of one of the failing backup jobs:

Code:
INFO: starting new backup job: vzdump 106 --notes-template '{{guestname}}' --fleecing 0 --node proxmox --mode snapshot --storage PBS-Backup-HD3-3TB --all 0
INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2025-01-05 12:06:22
INFO: status = running
INFO: CT Name: DebianPhotoPrism
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/mnt/daten') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: DebianPhotoPrism
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/mnt/daten') in backup
INFO: starting first sync /proc/4639/root/ to /var/lib/vz/tmp_backup/vzdumptmp140451_106/
ERROR: rsync: [receiver] write failed on "/var/lib/vz/tmp_backup/vzdumptmp140451_106/mnt/daten/photoprism/originals/Bilder/Gallerie/Urlaube/2024 \#303\#204gypten/DSC00851.jpg": No space left on device (28)
ERROR: rsync error: error in file IO (code 11) at receiver.c(380) [receiver=3.2.7]
ERROR: rsync: [sender] write error: Broken pipe (32)
ERROR: Backup of VM 106 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/4639/root//./ /proc/4639/root//./mnt/daten /var/lib/vz/tmp_backup/vzdumptmp140451_106/' failed: exit code 11
INFO: Failed at 2025-01-05 12:13:39
INFO: Backup job finished with errors
TASK ERROR: job errors

Error code 28 - no space left on device: the target HDD on the PBS has a size of 8 TB and is empty.
Error code 11 - my local disk on the PVE has 69 GB free, the PBS local disk 80 GB. I followed this thread, without success:
https://forum.proxmox.com/threads/lxc-backup-fails-with-rsync-exit-code-11.120667/

So I have enough space on the disks, the local backups on PVE work without a problem, and even the big 4 TB VM backup of the OMV worked! Is there a difference between a local PVE backup and a PBS backup, or between a PBS VM backup and a PBS CT backup?

Is the local SSD of my PVE too small? It has only 100 GB, with 69 GB free, but I don't remember setting this size. Should I try to resize it?
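Something like this should show the headroom on the staging path (the path is taken from the failing log above; the fallback to / is just so the check runs anywhere):

Code:
```shell
# Free space where vzdump stages the suspend-mode copy.
# Path taken from the failing backup log above; fall back to /
# so the check still runs on machines without that directory.
staging=/var/lib/vz/tmp_backup
[ -d "$staging" ] || staging=/
df -h "$staging"
```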


I'm grateful for any tips!

Thanks Alex
 
Hello

I installed a new PBS 4 days ago and I have the same problem. I was able to back up my LXC 103 once via PBS; the same night the job ran into the same error. The size of the system disk only changed by a few MB between the first and second backup. I also have several TB free on my PBS, and the LXC system disk is only 400 GB, so it should be anything but a space problem.

The other VMs and LXCs are currently not affected.

Code:
ERROR: rsync: [receiver] write failed on "/var/tmp/vzdumptmp1333869_103/data/compose/1/storage/cache/thumbnails/1/3/f/13da2967c8d9efb6a97_1280x1024_fit.jpg": No space left on device (28)
ERROR: rsync error: error in file IO (code 11) at receiver.c(380) [receiver=3.2.7]
ERROR: rsync: [sender] write error: Broken pipe (32)
ERROR: rsync error: error in file IO (code 11) at io.c(848) [sender=3.2.7]
ERROR: Backup of VM 103 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/1080/root//./ /var/tmp/vzdumptmp1333869_103' failed: exit code 11
INFO: Failed at 2025-01-14 17:50:09
INFO: Backup job finished with errors
TASK ERROR: job errors

Node version: Linux **-2 6.8.12-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-5 (2024-12-03T10:26Z) x86_64 (installed this weekend and up to date)
PBS: 3.3.0, also installed this weekend and up to date

Best regards Fire
 
Hi,
from the log it seems like you are using a suspend mode backup, which uses a temporary directory (tmpdir in the vzdump settings); that is /var/tmp/ by default.
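To check which directory is actually in effect and how much space it has, a rough sketch (assuming the default config location /etc/vzdump.conf; /var/tmp is the built-in fallback when nothing is set):

Code:
```shell
# Resolve the effective vzdump tmpdir: the tmpdir entry in
# /etc/vzdump.conf, or /var/tmp as the fallback when nothing is configured.
tmpdir=$(awk -F': *' '/^tmpdir:/ {print $2}' /etc/vzdump.conf 2>/dev/null)
tmpdir=${tmpdir:-/var/tmp}
echo "effective tmpdir: $tmpdir"

# Suspend mode rsyncs the full container filesystem here first,
# so the free space must exceed the container's used size, not the delta.
df -h "$tmpdir"
```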
 
Hi,
finally I could fix the problem. As Fiona mentioned, the backup job generates a temporary copy, and on my system the local SSD is too small, so it ran full and the backup job failed. At first I moved the temp directory to one of my data HDDs, so there was enough space for the temp files. As suggested in the post
https://forum.proxmox.com/threads/lxc-backup-fails-with-rsync-exit-code-11.120667/
I edited the

Code:
/etc/vzdump.conf

file and changed the line:

Code:
tmpdir:

entry to a local path on the HDD.

And finally I installed an old SSD in addition to the system disk, just for the backup cache, with this entry:

Code:
tmpdir: /mnt/pve/Daten3-12TB/BackupCacheDir/

Now the temp files are generated on the second SSD and the jobs run without an issue.


@fireboyff I hope that will help you.
@fiona Thank you for the hint!
 
Hi Guys,

I encountered the same problem. I'm trying to back up a container with ~400 GB of mostly static data, so the snapshots should be pretty small. My local disk has about 85 GB free for temporary files. How can I limit the size of the temp data? Adding another drive is not possible in my setup.
 
Hi,
I encountered the same problem. I'm trying to back up a container with ~400 GB of mostly static data, so the snapshots should be pretty small. My local disk has about 85 GB free for temporary files. How can I limit the size of the temp data? Adding another drive is not possible in my setup.
To avoid the need for the temporary directory, you can either use a storage that supports snapshots or do stop mode backups. If your backup task log shows
Code:
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
then you are not currently making use of snapshots for backup.
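For example, to make stop mode the default for all jobs (per-job settings override it), a minimal /etc/vzdump.conf entry would be:

Code:
```
# /etc/vzdump.conf
# Stop mode shuts the container down, backs it up directly and restarts it,
# so no temporary rsync copy (and no tmpdir space) is needed.
mode: stop
```

A one-off run can also request it directly, e.g. vzdump <vmid> --mode stop --storage <your storage>.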
 
Hi Fiona,

thanks for the clarification. I did some googling (which I should have done beforehand) and understand that NFS does not support snapshots for containers. However, I switched to an iSCSI-backed shared LVM and still have the same problem. According to this table it should work, right?
https://pve.proxmox.com/wiki/Storage

Initially I came from a ZFS storage where the container was created, if this helps.

Best regards
 
Unfortunately, LVM does not support snapshots for containers either. The docs say:
4: Since Proxmox VE 9, snapshots as a volume chain have been available for VMs. These snapshots use separate volumes for the snapshot data and layer them. For more details, see the description for snapshot-as-volume-chain in the LVM configuration section.
 
A short Ctrl+F search for the word "container" found no mention that snapshot capability per storage differs between VMs and containers. I bought a new central storage based on the table in the docs, which mentions snapshot support for both NFS and LVM. That's annoying.
Which storage types *do* support snapshots for containers, so I can maybe switch over to one of those? Otherwise I have to migrate about 100 containers into VMs.
 
The footnote clearly says "for VMs". The other footnote clarifies that it's only for the qcow2 format, which cannot be used for containers either. The storages with no footnote on the snapshot functionality support snapshots for containers too.
 
Ok, thanks for the clarification. IMO the footnotes should be made clearer, e.g. "Yes for VM, no for CT", as this could (and in my case did) cause confusion and possibly wrong decisions. I guess I'll have to find another solution. I'm a big fan of the container concept, but the missing snapshot functionality on almost all shared storage types except Ceph is a dealbreaker.