Proxmox 6.0-11: Can't back up CTs, VM backup works fine

StanTastic

New Member
Aug 28, 2019
My setup:
- ZFS pool on rust (spinning hard drive) named rpool (also the root filesystem)
- ZFS pool on SSD named rpool-ssd
- NFS share on Synology
- I'm using ayufan's patches for differential backups

I have set up my VMs on rpool-ssd; the CTs were initially on rpool, but I migrated them.

CTs are working fine, VMs are working fine.

However:
- I can backup VMs just fine
- none of the CTs can be backed up, and the error message doesn't say much:


Code:
2019-11-27 23:39:53 INFO: Starting Backup of VM 150 (lxc)
2019-11-27 23:39:53 INFO: status = stopped
2019-11-27 23:39:53 INFO: backup mode: stop
2019-11-27 23:39:53 INFO: ionice priority: 7
2019-11-27 23:39:53 INFO: CT Name: sql-ct-1804
2019-11-27 23:39:53 INFO: creating archive '/mnt/pve/synology-nfs/dump/vzdump-lxc-150-2019_11_27-23_39_53.tar.gz'
2019-11-27 23:39:53 INFO: tar: /mnt/pve/synology-nfs/dump/vzdump-lxc-150-2019_11_27-23_39_53.tmp: Cannot open: Permission denied
2019-11-27 23:39:53 INFO: tar: Error is not recoverable: exiting now
2019-11-27 23:39:53 ERROR: Backup of VM 150 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/mnt/pve/synology-nfs/dump/vzdump-lxc-150-2019_11_27-23_39_53.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ | gzip --rsyncable >/mnt/pve/synology-nfs/dump/vzdump-lxc-150-2019_11_27-23_39_53.tar.dat' failed: exit code 2

Well, technically tar is saying it can't create a file on the NFS share, but for VMs it's not a problem:
Code:
2019-11-27 23:00:02 INFO: Starting Backup of VM 1001 (qemu)
2019-11-27 23:00:02 INFO: status = stopped
2019-11-27 23:00:02 INFO: update VM 1001: -lock backup
2019-11-27 23:00:02 INFO: backup mode: stop
2019-11-27 23:00:02 INFO: ionice priority: 7
2019-11-27 23:00:02 INFO: VM Name: Win10ProActive
2019-11-27 23:00:02 INFO: include disk 'virtio0' 'zfs-ssd:base-1001-disk-0' 32G
2019-11-27 23:00:02 INFO: creating archive '/mnt/pve/synology-nfs/dump/vzdump-qemu-1001-2019_11_21-23_00_02.vma.gz--differential-2019_11_27-23_00_02.vcdiff'
2019-11-27 23:00:02 INFO: starting template backup
2019-11-27 23:00:02 INFO: /usr/bin/vma create -v -c /mnt/pve/synology-nfs/dump/vzdump-qemu-1001-2019_11_21-23_00_02.vma.gz--differential-2019_11_27-23_00_02.tmp/qemu-server.conf exec:gzip --rsyncable > /mnt/pve/synology-nfs/dump/vzdump-qemu-1001-2019_11_21-23_00_02.vma.gz--differential-2019_11_27-23_00_02.dat drive-virtio0=/dev/zvol/rpool-ssd/base-1001-disk-0
2019-11-27 23:00:03 INFO: progress 0% 0/34359738368 0
2019-11-27 23:00:13 INFO: progress 1% 343605248/34359738368 140926976
2019-11-27 23:00:27 INFO: progress 2% 687210496/34359738368 168079360
2019-11-27 23:00:49 INFO: progress 3% 1030815744/34359738368 175579136
<cut>
2019-11-27 23:11:10 INFO: progress 100% 34359738368/34359738368 22307561472
2019-11-27 23:11:10 INFO: image drive-virtio0: size=34359738368 zeros=22307561472 saved=12052176896
2019-11-27 23:11:11 INFO: archive file size: 5.69GB
2019-11-27 23:11:12 INFO: Finished Backup of VM 1001 (00:11:10)

Because of this, I don't think the problem is with NFS but rather with the CTs.

I tried running fsck on an unmounted, stopped CT; here's the result:
Code:
# pct fsck 150
unable to run fsck for 'zfs-ssd:subvol-150-disk-0' (format == subvol)
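
That message is expected rather than a failure: pct fsck only works on image-based volumes (raw/qcow2) that carry a filesystem of their own. A ZFS subvol is just a ZFS dataset, so there is no block device to run fsck on; the closest ZFS-level integrity check would be a pool scrub, for example (commands shown for the pool names in this setup):

Code:
# zpool scrub rpool-ssd
# zpool status rpool-ssd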

The weird thing is, all the CT subvolumes are mounted at boot:

Code:
~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16399320k,nr_inodes=4099830,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=3284656k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=43,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=32841)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
rpool-ssd on /rpool-ssd type zfs (rw,xattr,noacl)
rpool-ssd/subvol-152-disk-0 on /rpool-ssd/subvol-152-disk-0 type zfs (rw,xattr,posixacl)
rpool-ssd/subvol-151-disk-0 on /rpool-ssd/subvol-151-disk-0 type zfs (rw,xattr,posixacl)
rpool-ssd/subvol-150-disk-0 on /rpool-ssd/subvol-150-disk-0 type zfs (rw,xattr,posixacl)
rpool-ssd/subvol-153-disk-0 on /rpool-ssd/subvol-153-disk-0 type zfs (rw,xattr,posixacl)
rpool-ssd/subvol-203-disk-0 on /rpool-ssd/subvol-203-disk-0 type zfs (rw,xattr,posixacl)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
rpool/data/basevol-150-disk-0 on /rpool/data/basevol-150-disk-0 type zfs (rw,noatime,xattr,posixacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
<cut>:/volume1/VMBackups on /mnt/pve/synology-nfs type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.88.249,local_lock=none,addr=192.168.88.245)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3284652k,mode=700)

What can I do to fix this? What's wrong?
 
Set a local tmpdir (in vzdump.conf) that is not on NFS. If the container is unprivileged, the backup needs to write as an unprivileged user.
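
For reference, that is a single line in /etc/vzdump.conf (the exact path is just an example; any local directory with enough free space for the temporary backup data works):

Code:
# /etc/vzdump.conf
tmpdir: /var/tmp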
 
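
To unpack the "unprivileged user" part: for an unprivileged CT, vzdump runs tar through lxc-usernsexec (visible in the failing command above), so uids inside the container are shifted to high host uids, and it is the shifted uid that must be able to write the tmpdir. A sketch of the arithmetic, using the u:0:100000:65536 mapping from the log:

```shell
# The mapping u:0:100000:65536 shifts container uid C (0..65535)
# to host uid 100000 + C.
ct_uid=0                          # root inside the container
host_uid=$((100000 + ct_uid))     # uid tar actually runs as on the host
echo "container uid $ct_uid -> host uid $host_uid"
```

If the NFS export does not let that host uid write to the dump directory, tar fails exactly as in the log, while the qemu backup (which runs as real root) succeeds.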
Thanks, that worked!
I'm guessing that CT subvols should not be mounted, and I should disable this via ZFS settings?
 
they need to be mounted (they get bind-mounted into the container)
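
You can see the mapping on the host: the CT's rootfs entry points at the subvol, and the dataset's ZFS mountpoint is what gets bind-mounted into the container when it starts. Something like this (the size value here is illustrative):

Code:
# pct config 150 | grep rootfs
rootfs: zfs-ssd:subvol-150-disk-0,size=8G
# zfs get -H -o value mountpoint rpool-ssd/subvol-150-disk-0
/rpool-ssd/subvol-150-disk-0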
 
