Proxmox 6.1-7: pct restore unable to parse volume ID

adofou

Member
Mar 14, 2020
Hello,

I am currently moving CTs between two servers, both running Proxmox 6.1.
For three days, since the last apt update & apt upgrade, I haven't been able to restore any CT on my new server.
Every attempt ends with this error: "unable to restore CT XXX - unable to parse volume ID 'vzdump-lxc-XXX-XXXX.tar.gz'"


root@yugo:/louise# pct restore 106 vzdump-lxc-203-2020_03_11-04_46_38.tar.gz -storage sata-thinpool -unprivileged 0 -rootfs 100 -bwlimit 800000
Logical volume "vm-106-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: 81674420-9969-45c1-ae65-796034e768b8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/louise/vzdump-lxc-203-2020_03_11-04_46_38.tar.gz'
Total bytes read: 1347225600 (1.3GiB, 143MiB/s)
Detected container architecture: amd64
Logical volume "vm-106-disk-0" successfully removed
unable to restore CT 106 - unable to parse volume ID 'vzdump-lxc-203-2020_03_11-04_46_38.tar.gz'

I have tried making GZ and LZO backups on the source server, with no difference.
I rebooted my new server last night and the error persists.

Out of curiosity, I tried restoring a backup created last night by the new server itself, from a CT that had already been migrated.
... and the restore failed too, with the same error :(
I have also tried changing the CT number and the LVM thinpool destination, with the same error.

root@yugo:/backup/dump# pct restore 107 vzdump-lxc-103-2020_03_14-05_31_06.tar.gz -storage sata-thinpool -unprivileged 0 -rootfs 100 -bwlimit 800000
Logical volume "vm-107-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: d8f52a12-232e-4560-b158-4cb331b2857a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/backup/dump/vzdump-lxc-103-2020_03_14-05_31_06.tar.gz'
Total bytes read: 1090027520 (1.1GiB, 153MiB/s)
Detected container architecture: amd64
Logical volume "vm-107-disk-0" successfully removed
unable to restore CT 107 - unable to parse volume ID 'vzdump-lxc-103-2020_03_14-05_31_06.tar.gz'

Everything I can find about this error seems linked to ZFS, but I don't use ZFS on either server.
I use LVM on SATA RAID or NVMe RAID (software RAID with mdadm).

root@yugo:/backup/dump# pveversion --verbose
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-21
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-2
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

LVM:
root@yugo:/backup/dump# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/md127 sata lvm2 a--    <3.64t 6.77g
  /dev/md4   nvme lvm2 a--  <399.68g 3.70g
root@yugo:/backup/dump# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  nvme   1   2   0 wz--n- <399.68g 3.70g
  sata   1  12   0 wz--n-   <3.64t 6.77g
root@yugo:/backup/dump# lvs
  LV                                VG   Attr       LSize   Pool          Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  nvme-thinpool                     nvme twi-aotz-- 395.00g                             36.17  15.18
  temp                              nvme Vwi-aotz-- 150.00g nvme-thinpool               95.25
  backup                            sata Vwi-aotz--   1.00t sata-thinpool               95.30
  sata-data                         sata Vwi-aotz--  20.00g sata-thinpool                2.23
  sata-thinpool                     sata twi-aotz--   3.63t                             42.83  19.46
  snap_vm-300-disk-0_before_upgrade sata Vri---tz-k 100.00g sata-thinpool vm-300-disk-0
  vm-100-disk-0                     sata Vwi-aotz-- 100.00g sata-thinpool               57.49
  vm-101-disk-0                     sata Vwi-aotz-- 200.00g sata-thinpool               88.77
  vm-102-disk-0                     sata Vwi-aotz--  50.00g sata-thinpool                5.63
  vm-103-disk-0                     sata Vwi-aotz--  50.00g sata-thinpool                4.90
  vm-200-disk-0                     sata Vwi-aotz-- 150.00g sata-thinpool               51.13
  vm-201-disk-0                     sata Vwi-aotz-- 150.00g sata-thinpool               46.09
  vm-300-disk-0                     sata Vwi-aotz-- 100.00g sata-thinpool               48.81
  vm-301-disk-0                     sata Vwi-aotz-- 200.00g sata-thinpool               89.73

I'm running out of ideas...
Does anyone have an idea/clue?


Many thanks!
Johann
 
Hello.
Got the same error after the latest upgrade.
root@pve:~# pveversion
pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve)

Reproduction steps:
root@pve:~# pct create 200 ceph-fs:vztmpl/centos-7-default_20190926_amd64.tar.xz -storage local-lvm
Logical volume "vm-200-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 337a08f9-ca5b-4a25-b2e2-d3d9abcd5a73
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/mnt/pve/ceph-fs/template/cache/centos-7-default_20190926_amd64.tar.xz'
Total bytes read: 422809600 (404MiB, 42MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
done: SHA256:GaVbQWu1gw13qMfHsNwdtVedscAlB+77wOfTWQAJERo root@localhost
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
done: SHA256:ecJttu4msT4hbCYAEZPpsSH8b/fxa3VKoKB6GBW6I1M root@localhost
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
done: SHA256:SE+aImMVmY1hS5LNBbfBbfBokKLQIdMFYLo4YiEPnJ4 root@localhost
Creating SSH host key 'ssh_host_dsa_key' - this may take some time ...
done: SHA256:vLWiUEv4VE9Os/sjt5QkunP9qkboEnJKAfDPsTRCzTI root@localhost
root@pve:~# vzdump 200
INFO: starting new backup job: vzdump 200
INFO: Starting Backup of VM 200 (lxc)
INFO: Backup started at 2020-03-16 09:43:43
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: CT200
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump/LXC.pm line 301.
INFO: creating archive '/var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar'
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump/LXC.pm line 354.
INFO: Total bytes written: 437381120 (418MiB, 19MiB/s)
INFO: archive file size: 417MB
INFO: Finished Backup of VM 200 (00:00:23)
INFO: Backup finished at 2020-03-16 09:44:06
INFO: Backup job finished successfully
root@pve:~# pct destroy 200
Logical volume "vm-200-disk-0" successfully removed
root@pve:~# pct restore 200 /var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar -storage local-lvm
Logical volume "vm-200-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 0c414848-4617-4055-b7cc-9d73c4013e88
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar'
Total bytes read: 437381120 (418MiB, 39MiB/s)
Detected container architecture: amd64
Logical volume "vm-200-disk-0" successfully removed
unable to restore CT 200 - unable to parse volume ID '/var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar'
 
Can confirm, a fix is on the way.
 
The fix is committed in git. As a workaround, you should be able to specify the backup archive as a proper volume ID, STORAGEID:backup/BACKUP_ARCHIVE, e.g. /var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar would become local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar
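
Applied to the failing restore above, the workaround should look like this (assuming the archive sits in the dump directory of the storage named local):

Bash:
# same command as before, but the archive is given as a volume ID
# (STORAGEID:backup/BACKUP_ARCHIVE) instead of a bare path
pct restore 200 local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar -storage local-lvm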
 
Thanks a lot, the workaround works fine:
root@pve:~# pvesm list local
Volid                                               Format Type   Size      VMID
local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar tar    backup 437381120
root@pve:~# pct restore 200 local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar -storage local-lvm
Logical volume "vm-200-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 02b2b922-81ec-4c8f-a756-91c81b568f2d
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar'
Total bytes read: 437381120 (418MiB, 48MiB/s)
Detected container architecture: amd64
root@pve:~# pct start 200
root@pve:~# lxc-ls -f --filter 200
NAME STATE   AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
200  RUNNING 0         -      -    -    false
 
The fix is committed in git. As a workaround, you should be able to specify the backup archive as a proper volume ID, STORAGEID:backup/BACKUP_ARCHIVE, e.g. /var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar would become local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar

That's working, many thanks :D
 
The fix is committed in git. As a workaround, you should be able to specify the backup archive as a proper volume ID, STORAGEID:backup/BACKUP_ARCHIVE, e.g. /var/lib/vz/dump/vzdump-lxc-200-2020_03_16-09_43_43.tar would become local:backup/vzdump-lxc-200-2020_03_16-09_43_43.tar

Hi, unfortunately I'm having some trouble following this example and need more detailed information (sorry! ;-) )

The situation is that I backed up some LXC containers from an experimental in-house server and copied them to the datacenter machine, both running the same Proxmox VE version; on the production machine I use ZFS.

This is what I get when simply restoring the container:

Bash:
root@vmhost01:~# pct restore 101 vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo
Formatting '/var/lib/vz/images/101/vm-101-disk-0.raw', fmt=raw size=21474836480
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done                           
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: d6dc3eea-60e1-4dde-b889-60b1154f3c1e
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (32768 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done   

extracting archive '/root/vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo'
Total bytes read: 1669918720 (1.6GiB, 252MiB/s)
Detected container architecture: amd64
unable to restore CT 101 - unable to parse volume ID 'vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo'
root@vmhost01:~#

As suggested, it should be this:

Bash:
root@vmhost01:~# pct restore 101 local:vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo
unable to parse directory volume name 'vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo'

I'm missing something important... I'm sure! :)

The backups are in the /root directory.

Many thanks, Francesco
 
Hello Francesco,

What is your Proxmox version?
This seems to be patched in the latest version.
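
If it isn't patched on your side yet, a regular upgrade should pull the fix in once the updated package reaches your repository; a quick check could look like:

Bash:
# update the package lists and apply pending upgrades
apt update && apt dist-upgrade
# pct ships with the pve-container package, so check its version afterwards
pveversion --verbose | grep pve-container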

Bash:
root@vmhost01:~# pct restore 101 local:vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo
unable to parse directory volume name 'vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo'

I'm missing something important... I'm sure! :)

The backups are in the /root directory.


I think the key is to move your backup from /root to one of the backup directories available on your storage.
For example, if you have a storage named local mounted on /var/lib/vz, you need to move your backup file into /var/lib/vz/dump.
Then you can use local:backup/vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo
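
Something like this (a sketch, assuming local is mounted at /var/lib/vz with the default dump directory):

Bash:
# move the archive into the storage's dump directory
mv /root/vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo /var/lib/vz/dump/
# the archive should now show up as a volume ID on that storage
pvesm list local
# restore using the volume-ID form instead of a bare filename
pct restore 101 local:backup/vzdump-lxc-101-2020_05_06-00_09_28.tar.lzo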

I hope that will work for you :)
 
What is your Proxmox version?
This seems to be patched in the latest version.

I use Proxmox VE 6.1-7, which seems to be the latest version.
Note that I'm a free user, so I don't have access to some repositories where the patch may be distributed...

By the way, moving the file to /var/lib/vz/dump doesn't solve the issue.
 
