Problems backing up an LXC after resizing

MasterCodeFu

Hi,
Does anyone have any ideas why, after I resized an LXC running Debian 13, the Proxmox backup fails?

It's as if the space has not been resized inside the Debian container, and being new to Linux it's probably me missing something. The error is below.

From the web interface, under the Status of the LXC in question, I can see:

Bootdisk size
65.75% (64.35 GiB of 97.87 GiB)


No space left on device (28)
118: 2025-09-10 18:18:38 ERROR: rsync error: error in file IO (code 11) at receiver.c(381) [receiver=3.4.1]
118: 2025-09-10 18:18:38 ERROR: rsync: [sender] write error: Broken pipe (32)
118: 2025-09-10 18:18:46 ERROR: Backup of VM 118 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/3726308/root//./ /var/tmp/vzdumptmp3773999_118' failed: exit code 11
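One thing I notice: the rsync target /var/tmp/vzdumptmp3773999_118 looks like it's on the Proxmox host rather than in the container, so could it be the host's /var/tmp that is running out of space? Assuming vzdump is using its default tmpdir, I checked it with:

# on the Proxmox host, not inside the CT
df -h /var/tmp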
 
Just running df in one of the LXCs in question:

Filesystem                                 1K-blocks     Used Available Use% Mounted on
/dev/mapper/STORAGE--NAME-vm--118--disk--1 102626412 67488376  29878772  70% /
none                                             492        4       488   1% /dev
udev                                       131999780        0 131999780   0% /dev/tty
tmpfs                                      132037816        0 132037816   0% /dev/shm
tmpfs                                       52815128       96  52815032   1% /run
tmpfs                                           5120        0      5120   0% /run/lock
tmpfs                                      132037816        0 132037816   0% /tmp
 
Does the CT you're backing up support snapshots? Check its Snapshots tab and, if not, share its Resources tab.
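A quick way to test from the host CLI, assuming VMID 118 and a throwaway snapshot name:

# fails immediately if the underlying storage has no snapshot support
pct snapshot 118 probe
# clean up again if it succeeded
pct delsnapshot 118 probe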
 
Apologies for the slow reply, it's been a busy week. Prior to me upgrading the two CTs in question, backups completed without any issues; both are running Debian 13 and update without any noticeable issues. If I shut them down, the backups do complete, so yes, it does now look like a snapshot issue. I have increased the disk space on both containers, which is the only difference compared to the other CTs using the exact same path, and is why it's confusing me. All are up to date as of this morning. So strange...
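For reference, a manual stop-mode backup like this completes fine (assuming the 'local' backup storage; the compression choice is just an example):

# stop mode shuts the CT down, backs it up, then restarts it
vzdump 118 --mode stop --storage local --compress zstd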
 
Please share the Resources tab. I'm guessing you're using file-based disks for the CT.
 
Okay, no file-based disk. It might be LVM rather than LVM-Thin, though. Can I see cat /etc/pve/storage.cfg too?
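Alternatively, pvesm status on the host lists every storage with its type (the Type column will say lvm or lvmthin):

pvesm status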
 
What part do you need to see? There are some details in there, like the fingerprint etc., that I don't want to share.

dir: local
    path /var/lib/vz
    content backup,vztmpl,iso

lvm: STORAGE-A-SSD
    vgname STORAGE-A-SSD
    content rootdir,images
    nodes SERVER1
    shared 0
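For comparison, an LVM-Thin storage would carry a thinpool line in its entry, something like this (the storage ID and pool name here are made up):

lvmthin: STORAGE-A-THIN
    thinpool data
    vgname STORAGE-A-SSD
    content rootdir,images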
 
This is all I need. Yeah, it's just LVM, not LVM-Thin. That provides no snapshot ability :(
Without snapshots you need temporary space to create a consistent backup. This is not needed if the CT is stopped, so the workaround is to change the backup mode.
Maybe you can create a thin pool on the VG and migrate the guests there? Why did you choose LVM over LVM-Thin?
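Roughly like this, assuming there are still unallocated extents in the VG (pool name, size and storage ID are examples):

# check how much free space the volume group has left
vgs STORAGE-A-SSD
# carve a thin pool named 'data' out of the VG
lvcreate -L 100G -T STORAGE-A-SSD/data
# register it with Proxmox as a new lvmthin storage
pvesm add lvmthin STORAGE-A-THIN --vgname STORAGE-A-SSD --thinpool data --content rootdir,images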
 
I'm really new to Linux. I'm open to changing it if there's a way to change, or do I have to move them to a different storage array?
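From what I've read since, once a thin storage exists the root disk can be moved per CT while it's stopped, something like this (using the example storage ID from above; pct move-volume needs PVE 7 or newer):

pct stop 118
pct move-volume 118 rootfs STORAGE-A-THIN
pct start 118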