Backing up VMs works, CTs fail - NFS share

gargravarr

New Member
Jan 22, 2024
Hi folks,

My setup is 3 nodes with shared storage - an iSCSI LUN with a shared LVM Thin-pool on top. All are running 8.1.4. There is a separate NAS (also Debian 12) providing an NFS share for backups.

Backing up VMs runs successfully. However, I've started using containers and those are not being backed up properly. I see this error all the time:
Code:
INFO: Starting Backup of VM 501 (lxc)
INFO: Backup started at 2024-01-22 06:49:10
INFO: status = running
INFO: CT Name: pihole1
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: pihole1
INFO: including mount point rootfs ('/') in backup
INFO: temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf
INFO: starting first sync /proc/2752242/root/ to /mnt/pve/CarbonNFS/dump/vzdump-lxc-501-2024_01_22-06_49_10.tmp
ERROR: Backup of VM 501 failed - command 'rsync --stats -h --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/2752242/root//./ /mnt/pve/CarbonNFS/dump/vzdump-lxc-501-2024_01_22-06_49_10.tmp' failed: exit code 23
INFO: Failed at 2024-01-22 09:32:08

This happens with all my containers.
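For reference, rsync exit code 23 means "partial transfer due to error", i.e. some files or attributes could not be copied. One way to narrow down which files are affected is to re-run the same rsync by hand with verbose, itemized output - this is just a sketch reusing the flags and target path from the log above (the /proc/<pid>/root source changes whenever the container restarts, so it has to be looked up again first):
Code:
# Sketch only: same flags as the vzdump rsync above, plus -v -i to show
# per-file errors. Look up the container's init PID first (e.g. "lxc-info -n 501 -p")
# and use a throwaway target directory.
rsync -v -i --stats -h --numeric-ids -aH --delete --no-whole-file --sparse \
  --one-file-system --relative \
  --exclude='/tmp/?*' --exclude='/var/tmp/?*' --exclude='/var/run/?*.pid' \
  /proc/<pid>/root//./ /mnt/pve/CarbonNFS/dump/manual-rsync-test.tmp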

VMs in the same backup job:
Code:
INFO: Starting Backup of VM 199 (qemu)
INFO: Backup started at 2024-01-22 06:06:19
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: WinServer
INFO: include disk 'sata0' 'SAN:vm-199-disk-0' 64G
INFO: creating vzdump archive '/mnt/pve/CarbonNFS/dump/vzdump-qemu-199-2024_01_22-06_06_19.vma.zst'
INFO: starting kvm to execute backup task
INFO: started backup task 'e6b0f888-5048-4449-8e7c-c9fcf08efc6d'
INFO:   0% (68.0 MiB of 64.0 GiB) in 3s, read: 22.7 MiB/s, write: 5.6 MiB/s
-- SNIP --
INFO: 100% (64.0 GiB of 64.0 GiB) in 41m 59s, read: 37.2 MiB/s, write: 240.0 B/s
INFO: backup is sparse: 57.48 GiB (89%) total zero data
INFO: transferred 64.00 GiB in 2519 seconds (26.0 MiB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 2.57GB
INFO: adding notes to backup
INFO: prune older backups with retention: keep-last=7
INFO: removing backup 'CarbonNFS:backup/vzdump-qemu-199-2024_01_15-04_23_54.vma.zst'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 199 (00:42:51)

The log files on the NFS share give no further information.

The CT backups also take a very, very long time compared to the VMs - in fact they are still running 8 hours later, and these are not big containers, generally 10GB root disks (like the example above). I can see on the NFS server that there is a vzdump-lxc-503-2024_01_22-07_21_32.tmp/ folder containing the files from the rootfs, which are being rsync'd across.
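To gauge how far along one of these is, watching the size of that staging folder from a node gives a rough idea of the first rsync pass (path taken from above, interval arbitrary):
Code:
# Rough progress check of the suspend-mode staging copy on the NFS mount
watch -n 60 du -sh /mnt/pve/CarbonNFS/dump/vzdump-lxc-503-2024_01_22-07_21_32.tmp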

I'm using NFSv3 for the backup store and squashing all users to an unprivileged account on the NAS; here's the line from /etc/exports:
Code:
/backup/proxmox <IP>(rw,sec=sys,async,no_subtree_check,all_squash,anonuid=<UID>,anongid=<GID>) <IP>(rw,sec=sys,async,no_subtree_check,all_squash,anonuid=<UID>,anongid=<GID>) <IP>(rw,sec=sys,async,no_subtree_check,all_squash,anonuid=<UID>,anongid=<GID>)
The files within the .tmp folder are being created correctly as far as I can see.
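Since rsync runs with -aH, it also tries to preserve ownership and recreate device/special files, and on an all_squash export those operations are typically refused even though plain file writes go through. A quick check from one of the nodes might look like this (paths assumed to be the mounted backup storage, run as root - just a sketch to see which operations the export actually allows):
Code:
# Sketch: plain writes should succeed, but chown/mknod are expected to be
# refused on an all_squash export - enough to make rsync -a exit with code 23.
touch /mnt/pve/CarbonNFS/dump/perm-test
chown 100000:100000 /mnt/pve/CarbonNFS/dump/perm-test   # likely fails: Operation not permitted
mknod /mnt/pve/CarbonNFS/dump/null-test c 1 3           # device nodes likely fail too
rm -f /mnt/pve/CarbonNFS/dump/perm-test /mnt/pve/CarbonNFS/dump/null-test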

I can see this error message has come up before, but I haven't yet figured out what it means in my case.

Thanks in advance,
Gargravarr
 
For privileged LXC and VMs the backup storage needs write access for UID 0. For unprivileged LXC it needs write access for UID 0 + UID 100000.
So either make sure your NFS share also allows UID 100000 to write, or edit your /etc/vzdump.conf and set the tmpdir to something like "/tmp" where both UIDs are allowed to write.
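If you go the tmpdir route, it's a single line in /etc/vzdump.conf - example below; keep in mind that whatever directory you pick needs enough free local space to hold a copy of the container's rootfs during the suspend-mode sync:
Code:
# /etc/vzdump.conf - example only; pick a local directory with enough free
# space for a full copy of the container rootfs
tmpdir: /tmp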
 
The containers are all unprivileged, but the NFS server is forcing all reads/writes to a UID/GID that has full permissions on that folder, so it shouldn't (in theory) be a permissions issue.

Guess I could change the temp location and see if it helps.
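A quick way to test that on a single container, rather than editing the config and waiting for the next scheduled run, might be to call vzdump directly with an explicit tmpdir (CT ID and storage name taken from above, local directory just an example):
Code:
# One-off test backup of CT 501 with a local temporary directory
vzdump 501 --mode suspend --storage CarbonNFS --tmpdir /var/tmp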
 
