Hello,
I am currently moving CTs between two servers, both running Proxmox VE 6.1.
For the last three days, since the latest apt update & apt upgrade, I have not been able to restore any CT on my new server.
Every attempt ends with this error: "unable to restore CT XXX - unable to parse volume ID 'vzdump-lxc-XXX-XXXX.tar.gz'"
root@yugo:/louise# pct restore 106 vzdump-lxc-203-2020_03_11-04_46_38.tar.gz -storage sata-thinpool -unprivileged 0 -rootfs 100 -bwlimit 800000
Logical volume "vm-106-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: 81674420-9969-45c1-ae65-796034e768b8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done
extracting archive '/louise/vzdump-lxc-203-2020_03_11-04_46_38.tar.gz'
Total bytes read: 1347225600 (1.3GiB, 143MiB/s)
Detected container architecture: amd64
Logical volume "vm-106-disk-0" successfully removed
unable to restore CT 106 - unable to parse volume ID 'vzdump-lxc-203-2020_03_11-04_46_38.tar.gz'
I have tried making both GZ and LZO backups on the source server, without any difference.
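For reference, the backups on the source server were created with plain vzdump, roughly like this (the mode and dump directory shown here are illustrative, not my exact invocation):

# gzip-compressed backup of the source CT
vzdump 203 --compress gzip --mode snapshot --dumpdir /backup/dump
# same CT, LZO compression
vzdump 203 --compress lzo --mode snapshot --dumpdir /backup/dump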
I rebooted my new server last night and the error persists.
Out of curiosity, I tried restoring a backup created last night by the new server itself, from a CT that had already been migrated...
... and that restore failed too, with the same error.
I have also tried changing the CT number and the destination LVM thin pool (see below), with the same error.
root@yugo:/backup/dump# pct restore 107 vzdump-lxc-103-2020_03_14-05_31_06.tar.gz -storage sata-thinpool -unprivileged 0 -rootfs 100 -bwlimit 800000
Logical volume "vm-107-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: d8f52a12-232e-4560-b158-4cb331b2857a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done
extracting archive '/backup/dump/vzdump-lxc-103-2020_03_14-05_31_06.tar.gz'
Total bytes read: 1090027520 (1.1GiB, 153MiB/s)
Detected container architecture: amd64
Logical volume "vm-107-disk-0" successfully removed
unable to restore CT 107 - unable to parse volume ID 'vzdump-lxc-103-2020_03_14-05_31_06.tar.gz'
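The thin-pool change mentioned above was simply the same restore pointed at the NVMe pool, along these lines (the CT ID here is just an example), and it fails with the identical message:

pct restore 108 vzdump-lxc-103-2020_03_14-05_31_06.tar.gz -storage nvme-thinpool -unprivileged 0 -rootfs 100 -bwlimit 800000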
Everything I can find about this error seems to be linked to ZFS, but I do not use ZFS on either server.
I use LVM on SATA RAID and NVMe RAID (software RAID with mdadm).
root@yugo:/backup/dump# pveversion --verbose
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-21
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-2
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
LVM:
root@yugo:/backup/dump# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/md127 sata lvm2 a--    <3.64t 6.77g
  /dev/md4   nvme lvm2 a--  <399.68g 3.70g
root@yugo:/backup/dump# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  nvme   1   2   0 wz--n- <399.68g 3.70g
  sata   1  12   0 wz--n-   <3.64t 6.77g
root@yugo:/backup/dump# lvs
  LV                                VG   Attr       LSize   Pool          Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  nvme-thinpool                     nvme twi-aotz-- 395.00g                              36.17  15.18
  temp                              nvme Vwi-aotz-- 150.00g nvme-thinpool                95.25
  backup                            sata Vwi-aotz--   1.00t sata-thinpool                95.30
  sata-data                         sata Vwi-aotz--  20.00g sata-thinpool                 2.23
  sata-thinpool                     sata twi-aotz--   3.63t                              42.83  19.46
  snap_vm-300-disk-0_before_upgrade sata Vri---tz-k 100.00g sata-thinpool vm-300-disk-0
  vm-100-disk-0                     sata Vwi-aotz-- 100.00g sata-thinpool                57.49
  vm-101-disk-0                     sata Vwi-aotz-- 200.00g sata-thinpool                88.77
  vm-102-disk-0                     sata Vwi-aotz--  50.00g sata-thinpool                 5.63
  vm-103-disk-0                     sata Vwi-aotz--  50.00g sata-thinpool                 4.90
  vm-200-disk-0                     sata Vwi-aotz-- 150.00g sata-thinpool                51.13
  vm-201-disk-0                     sata Vwi-aotz-- 150.00g sata-thinpool                46.09
  vm-300-disk-0                     sata Vwi-aotz-- 100.00g sata-thinpool                48.81
  vm-301-disk-0                     sata Vwi-aotz-- 200.00g sata-thinpool                89.73
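For reference, the corresponding entries in /etc/pve/storage.cfg are roughly the following (not copied verbatim, so minor options such as content may differ):

lvmthin: sata-thinpool
        thinpool sata-thinpool
        vgname sata
        content rootdir,images

lvmthin: nvme-thinpool
        thinpool nvme-thinpool
        vgname nvme
        content rootdir,images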
I'm running out of ideas...
Does anyone have an idea or a clue?
Many thanks!
Johann