[SOLVED] CT stuck on "create storage snapshot 'vzdump'" when doing a snapshot backup

bootsie123

Hi everyone! During one of my automated weekly backup jobs, I noticed that one of the new containers I set up this week makes the job hang when doing a snapshot backup. It gets stuck at INFO: create storage snapshot 'vzdump'. I also tried running a snapshot backup of that CT on its own, with the same result.

Any ideas on what might be causing this? Thanks!

EDIT:

Looks to be an issue with having FUSE enabled on the CT. See this post and this other post on the subject.
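
For future readers: if you can live without FUSE inside the CT, disabling the feature should be enough to get snapshot backups working again. A minimal sketch, using CT 106 from this thread (the feature change only takes effect after a restart):

Code:
# Check whether the fuse feature flag is set on the container
pct config 106 | grep features

# Disable the fuse feature and restart the CT so it takes effect
pct set 106 --features fuse=0
pct stop 106 && pct start 106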


Backup Logs:
Code:
INFO: starting new backup job: vzdump 106 --storage prox_backup --remove 0 --notes-template '{{guestname}}' --node vmworld --mode snapshot --compress zstd
INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2022-11-13 22:14:54
INFO: status = running
INFO: CT Name: Authentik
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
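
(For anyone debugging the same hang: a couple of generic checks, not part of the log above, that can help confirm where things stall, assuming a ZFS-backed rootfs like the one in the config below:)

Code:
# Was the 'vzdump' ZFS snapshot actually created for the CT volume?
zfs list -t snapshot | grep vzdump

# Which vzdump-related processes are still running, and in what state?
ps faxww | grep -E 'vzdump|rsync|tar' | grep -v grep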

CT Config (Ubuntu 22.04):
Code:
arch: amd64
cores: 4
features: fuse=1
hostname: Authentik
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=0A:5C:A6:FC:C2:34,ip=192.168.1.84/24,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:basevol-134-disk-0/subvol-106-disk-0,size=16G
swap: 0
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

PVE Versions:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-5.15: 7.2-13
pve-kernel-helper: 7.2-13
pve-kernel-5.4: 6.4-18
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.4.189-2-pve: 5.4.189-2
pve-kernel-5.4.189-1-pve: 5.4.189-1
pve-kernel-4.15: 5.4-14
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-26-pve: 4.15.18-54
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-3
libpve-guest-common-perl: 4.1-4
libpve-http-server-perl: 4.1-4
libpve-network-perl: 0.7.1
libpve-storage-perl: 7.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-3
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 

Moayad
Proxmox Staff Member

Hi,

Do all containers show the same behavior, or only CT 106?

Do you see anything interesting in the syslog during the backup job?
 

bootsie123

Good question! It seems to be only CT 106. The only thing I'm seeing in the syslog during the backup job is this:

Code:
Nov 14 09:18:32 vmworld pvedaemon[222298]: INFO: starting new backup job: vzdump 106 --compress zstd --mode snapshot --node vmworld --notes-template '{{guestname}}' --remove 0 --storage prox_backup
Nov 14 09:18:32 vmworld pvedaemon[222298]: INFO: Starting Backup of VM 106 (lxc)
 

Moayad
Proxmox Staff Member

Hello,

Can you provide us with the configuration of another CT that backs up without any issue (pct config <CTID>), so that we can compare the configs?
 

bootsie123

Sure!

Code:
arch: amd64
cores: 2
hostname: Heimdall
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=16:77:88:68:9D:6F,ip=192.168.1.221/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-100-disk-0,size=32G
swap: 512
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

I just checked the configs of all of my CTs, and after doing a bit of digging I'm pretty sure this has to do with having FUSE enabled on the container. It also seems to be a known issue, with other posts on the forums talking about it. Initially, I had it enabled because of issues I ran into with Certbot.

Anyway, thanks for pointing me in the right direction! I probably should have started with that first.
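
In case it helps anyone else: assuming the Certbot trouble was the snap package needing FUSE inside the CT (that's my guess, not something confirmed above), the plain distro package avoids the FUSE requirement entirely. A rough sketch for an Ubuntu 22.04 CT:

Code:
# Install Certbot from the Ubuntu repositories instead of the snap
apt update
apt install certbot python3-certbot-nginx   # or python3-certbot-apache, depending on your web server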
 

domi2
New Member

Is there a solution or workaround for this without disabling FUSE? I just noticed that backups are broken for all containers that use this feature.
Is there an open issue for this with LXC or the kernel?

I'm running Docker inside LXC on ZFS, which requires fuse-overlayfs in order to work.
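
(For anyone else stuck here: the usual stopgap seems to be switching just the affected CTs to stop-mode backups, which shuts the container down briefly and skips the freeze and storage snapshot entirely. A sketch reusing the CT ID and storage name from this thread, so treat it as an example rather than a confirmed fix:)

Code:
# One-off backup of the affected CT without taking a 'vzdump' storage snapshot
vzdump 106 --storage prox_backup --mode stop --compress zstd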
 
