Hi!
I've created my LXC container in a folder (/var/lib/lxc/218) so that it can use the full capacity of the host's disk:
Code:
pct create 218 /var/lib/vz/template/cache/oracle-linux-8-20220719.tar.xz --rootfs=/var/lib/lxc/218 --hostname w18 --net0 name=eth0,ip=10.0.0.18/24,gw=10.0.0.1,bridge=vmbr1 --memory 7196 --swap 1024 --onboot 1 --features fuse=1,mount=nfs,nesting=1
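For comparison (just a sketch of the alternative I deliberately avoided, with a made-up size), the storage-backed way would be to request a volume on a named storage instead of passing a raw path, which ties the rootfs to that storage's capacity:
Code:
# Hypothetical alternative: allocate an 8 GiB volume on the "local" storage
# instead of pointing the rootfs at a plain directory.
pct create 218 /var/lib/vz/template/cache/oracle-linux-8-20220719.tar.xz --rootfs local:8 --hostname w18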
Now, when I try to back it up, I'm getting the "unable to parse volume ID '/var/lib/lxc/218'" error (I've stopped the LXC to do the backup):
Code:
root@pve7:~# vzdump 218 --node pve7 --remove 0 --compress gzip --mode stop --storage local
INFO: starting new backup job: vzdump 218 --mode stop --remove 0 --compress gzip --storage local --node pve7
INFO: Starting Backup of VM 218 (lxc)
INFO: Backup started at 2022-07-30 16:29:07
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: w18
INFO: including mount point rootfs ('/') in backup
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-218-2022_07_30-16_29_07.tar.gz'
ERROR: Backup of VM 218 failed - unable to parse volume ID '/var/lib/lxc/218'
INFO: Failed at 2022-07-30 16:29:07
INFO: Backup job finished with errors
job errors
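From the error message, it looks like vzdump expects each mount point to resolve to a storage-backed volume ID of the form <storage>:<volume>, and a bare path like /var/lib/lxc/218 doesn't fit that pattern. For illustration only (the storage name "lxc-dir" is made up, and I'm not sure registering the directory after the fact would change anything for this container), a directory can be registered as a storage like this:
Code:
# Illustration only: register /var/lib/lxc as a directory storage allowed to
# hold container root disks, so volumes created on it get <storage>:<volume> IDs.
pvesm add dir lxc-dir --path /var/lib/lxc --content rootdir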
Here is the configuration:
Code:
root@pve7:/etc/pve/lxc# cat 218.conf
arch: amd64
features: fuse=1,mount=nfs,nesting=1
hostname: w18
memory: 7196
net0: name=eth0,bridge=vmbr1,gw=10.0.0.1,hwaddr=02:8D:72:C4:AB:A0,ip=10.0.0.18/24,type=veth
onboot: 1
ostype: centos
rootfs: /var/lib/lxc/218
swap: 1024
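For contrast, a storage-backed root disk would appear in the config as a volume ID rather than a path, something like the line below (the storage name, volume name, and size are hypothetical):
Code:
# Hypothetical storage-backed rootfs entry, shown only for comparison:
rootfs: local:218/vm-218-disk-0.raw,size=8G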
Here is the output of pveversion -v:
Code:
root@pve7:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-4 (running version: 6.4-4/337d6701)
pve-kernel-5.4: 6.4-1
pve-kernel-helper: 6.4-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-2
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-1
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-3
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-1
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
Did I hit a bug?