Hi there, I have a problem restoring a container from a backup. The restore starts but never finishes the extraction step. This is the output from the GUI:
Formatting '/mnt/pve/spectre_nfs/images/135/vm-135-disk-0.raw', fmt=raw size=34359738368
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: c958361b-dfb7-45e4-9ddb-651196a41daa
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: 0/256 done
Writing inode tables: 0/256 1/256 done
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/256 done
extracting archive '/mnt/pve/spectre_bak_2/dump/vzdump-lxc-135-2020_05_12-20_19_50.tar.lzo'
When I then stop the process from the GUI, I get:
TASK ERROR: unable to restore CT 135 - command 'tar xpf - --lzop --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/135/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: received interrupt
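To rule out a broken archive, I thought I could test the backup file by hand outside the GUI, roughly like this (just my own idea for a check, using my backup path):
# verify the lzop compression layer first
lzop -t /mnt/pve/spectre_bak_2/dump/vzdump-lxc-135-2020_05_12-20_19_50.tar.lzo
# then list the tar contents to see whether it reads through to the end
lzop -dc /mnt/pve/spectre_bak_2/dump/vzdump-lxc-135-2020_05_12-20_19_50.tar.lzo | tar tvf - > /dev/null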
The same happens when I move the tar file to local storage and try to restore it there as well:
Using default stripesize 64.00 KiB.
Logical volume "vm-135-disk-0" created.
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: 4096/8388608 done
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: e3163ca1-013d-4c13-b04d-34cd8e95f432
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: 0/256 done
Writing inode tables: 0/256 done
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/256 done
extracting archive '/var/lib/vz/dump/vzdump-lxc-135-2020_05_12-20_19_50.tar.lzo'
The extraction never progresses; I have left it running for days.
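While it hangs I could probably check what the tar process is actually doing, something along these lines (the pgrep/strace usage is my guess, and the process pattern may need adjusting):
# find the tar process the restore task spawned
pgrep -a -f 'tar xpf'
# attach and see whether it is stuck in a system call (e.g. a read that never returns)
strace -p $(pgrep -n -f 'tar xpf')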
Output of pveversion -v:
proxmox-ve: 5.4-2 (running kernel: 4.15.18-28-pve)
pve-manager: 5.4-14 (running version: 5.4-14/b0e640f7)
pve-kernel-4.15: 5.4-17
pve-kernel-4.15.18-28-pve: 4.15.18-56
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-42
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-56
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
I have enough space on my LVM storage:
root@pmxnode2:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 5 0 wz--n- 148.92g 15.82g
root@pmxnode2:~# lvs
LV            VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve twi-aotz-- 87.93g             10.66  15.88
root          pve -wi-ao---- 37.00g
swap          pve -wi-ao----  8.00g
vm-127-disk-1 pve Vwi-a-tz-- 35.00g data        24.61
vm-135-disk-0 pve Vwi-aotz-- 32.00g data         2.36
root@pmxnode2:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 148.92g 15.82g
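While the restore sits there, I could also check the kernel log for blocked-task or storage-related messages, in case that shows where it gets stuck (again, just an idea on my side):
# look for hung task / I/O / NFS warnings during the hang
dmesg -T | grep -iE 'hung|blocked|nfs' | tail -n 20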
Any ideas what's going on? Thank you for your help.
Alex