[SOLVED] Container with bind mount backup error (unable to parse volume ID)

juliokele

Renowned Member
Nov 25, 2016
Hi folks,

I got the following error ("unable to parse volume ID") after a PVE update:

Task Log:
Code:
INFO: starting new backup job: vzdump 100 --remove 0 --storage backups --compress zstd --node pve --mode snapshot
INFO: filesystem type on dumpdir is 'zfs' -using /var/tmp/vzdumptmp11973_100 for temporary files
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2020-11-20 09:42:16
INFO: status = running
INFO: CT Name: dc
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/samba/fileserver') from backup (not a volume)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
found lock 'snapshot' trying to remove 'backup' lock
ERROR: Backup of VM 100 failed - unable to parse volume ID '/storage/fileserver'
INFO: Failed at 2020-11-20 09:42:16
INFO: Backup job finished with errors
TASK ERROR: job errors

APT Upgrade Log:
Code:
Start-Date: 2020-11-19  20:05:17
Commandline: apt dist-upgrade -y
Requested-By: xyz (1000)
Install:
pve-kernel-5.4.73-1-pve:amd64 (5.4.73-1, automatic)
Upgrade:
pve-kernel-5.4:amd64 (6.2-7, 6.3-1)
libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u3, 2.4.47+dfsg-3+deb10u4)
pve-container:amd64 (3.2-2, 3.2-3)
zfs-zed:amd64 (0.8.4-pve2, 0.8.5-pve1)
zfsutils-linux:amd64 (0.8.4-pve2, 0.8.5-pve1)
libzfs2linux:amd64 (0.8.4-pve2, 0.8.5-pve1)
libldap-common:amd64 (2.4.47+dfsg-3+deb10u3, 2.4.47+dfsg-3+deb10u4)
qemu-server:amd64 (6.2-19, 6.2-20)
libproxmox-backup-qemu0:amd64 (0.7.1-1, 1.0.0-1)
pve-kernel-helper:amd64 (6.2-7, 6.3-1)
libzpool2linux:amd64 (0.8.4-pve2, 0.8.5-pve1)
libnvpair1linux:amd64 (0.8.4-pve2, 0.8.5-pve1)
libuutil1linux:amd64 (0.8.4-pve2, 0.8.5-pve1)
End-Date: 2020-11-19  20:06:17
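
The relevant change in that upgrade is pve-container moving from 3.2-2 to 3.2-3. To confirm which version a node is actually running, a quick check like this should do (standard Debian/PVE tooling):

Code:
# show the installed pve-container package version
dpkg -l pve-container
# or via the PVE version report
pveversion -v | grep pve-container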

PVE Versions:
Code:
proxmox-ve: 6.2-2 (running kernel: 5.4.73-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ifupdown2: not correctly installed
ksmtuned: 4.20150325+b1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-3
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-20
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Container Config:
Code:
arch: amd64
cores: 2
hostname: dc
memory: 1024
mp0: /storage/fileserver,mp=/samba/fileserver,acl=1
net0: name=eth0,bridge=vmbr1,gw=192.168.xx.yyy,hwaddr=C6:47:B2:7F:xx:yy,ip=192.168.xx.yy/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-100-disk-0,size=5G
swap: 512
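
For reference, a bind mount point like the mp0 line above is normally added with pct set; a minimal sketch using the values from this thread:

Code:
# bind-mount the host directory /storage/fileserver into CT 100
# at /samba/fileserver, with ACL support enabled
pct set 100 -mp0 /storage/fileserver,mp=/samba/fileserver,acl=1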
 
I can confirm this bug.

Code:
pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-3
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-20
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
I am also experiencing this bug under the same circumstances as reported above; I took the same set of updates at the same time. I got a mix of backup failures and successes, and the backups that fail appear to be those where (a) the container is running and (b) the container has local folder path (bind) mounts.

I am unable to use the workaround mentioned by matrix, as several of my containers mount a ZFS subvolume that sits at the root level of my filesystem: I have a zpool called 'tank', a subvolume at 'tank/my-subvolume', and the folder path on the node is '/my-subvolume'. I don't see any way to make Proxmox aware of the subvolume, and attempting to reference it as 'local:/my-subvolume' is a no-go.
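
In case it helps someone, one possible way to make PVE aware of such a path (untested here, and the storage ID 'tank-sub' is made up for illustration) would be to register it as a directory storage:

Code:
# register the host path as a directory storage (hypothetical storage ID 'tank-sub')
pvesm add dir tank-sub --path /my-subvolume --content rootdir

Whether that helps with this particular bug is unclear, though, since a bind mount is referenced by host path rather than by a storage-backed volume ID.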
 
Thanks for the tip! It was driving me nuts... but after downgrading pve-container as suggested ("apt install pve-container=3.2-2") my systems are now happy again. Phew!
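
For anyone else hitting this, the full workaround looks roughly like this (the apt-mark hold is optional; it just keeps the broken 3.2-3 from being reinstalled on the next upgrade until a fixed package lands):

Code:
# downgrade to the last known-good pve-container
apt install pve-container=3.2-2
# optionally pin it so a dist-upgrade does not pull 3.2-3 back in
apt-mark hold pve-container
# once a fixed version is released: apt-mark unhold pve-container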
 
