No Space Left on Device when restoring from tar.lzo

MikeC

Hello. I'm in the process of migrating all my VMs and containers from Proxmox versions 3.3.5 and 4.x to version 5.4.
On several occasions, I have run into errors on restore:

tar: ./usr/lib/pymodules/python2.7/ndg/httpsclient/test/scripts/openssl_https_server.sh: Cannot create symlink to '/usr/share/pyshared/ndg/httpsclient/test/scripts/openssl_https_server.sh': No space left on device
tar: ./usr/lib/pymodules/python2.7/ndg/httpsclient/ssl_peer_verification.py: Cannot create symlink to '/usr/share/pyshared/ndg/httpsclient/ssl_peer_verification.py': No space left on device
Total bytes read: 44550809600 (42GiB, 133MiB/s)
tar: Exiting with failure status due to previous errors
Logical volume "vm-105-disk-0" successfully removed
unable to restore CT 105 - command 'tar xpf - --lzop --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/105/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2


There were many more lines with similar "No space left on device" errors before pct bailed; I've only copied the last few.

On the 3.3.5 instance, Proxmox reports the container's disk size as 32G. When I copied the backup over, the archive itself was reported as much smaller:
vzdump-openvz-107-2019_12_25-20_18_16.tar.lzo 100% 8461MB 33.7MB/s 04:11

I'm restoring using:
pct restore 105 vzdump-openvz-107-2019_12_25-20_18_16.tar.lzo -storage local-lvm

The local-lvm storage has 1.8TB of 2TB free, so I know the target storage itself isn't full.
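For reference, free space on the target storage can be double-checked from the CLI with the standard tools, for example:

# pvesm status
# vgs

pvesm status shows per-storage totals as Proxmox sees them; vgs shows the free space in the volume group backing local-lvm.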

Is there some discrepancy in how Proxmox sizes the new container? Do I need to explicitly tell it how big the rootfs was on the source node?
 
Dietmar

Please try to use the --rootfs parameter to specify the container size in GB, for example:

# pct restore --rootfs 32 <VMID> <ARCHIVE>
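If you are not sure what size to pass, one way to estimate it is to decompress the archive to stdout and count the bytes of the raw tar stream, which is a good approximation of the space the restore needs. A quick sketch, assuming lzop is available on the node:

# lzop -dc vzdump-openvz-107-2019_12_25-20_18_16.tar.lzo | wc -c

Divide the result by 1024^3 to get GiB, then round up when passing it to --rootfs.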
 
Thanks, Dietmar. I'm still getting the same error:

root@proxmox3:/var/lib/vz/dump# pct restore --rootfs 32 105 vzdump-openvz-billingdev-prox3.tar.lzo -storage local-lvm
Using default stripesize 64.00 KiB.
Logical volume "vm-105-disk-0" created.
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: 6d82a6b8-6f41-4b09-8cc6-26eb8f4e6f79
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/var/lib/vz/dump/vzdump-openvz-billingdev-prox3.tar.lzo'
tar: ./home/gadget/dump.sql: Cannot write: No space left on device
 
I couldn't say; I was simply pointing out that your archive extracts to 44 GB/42 GiB. I don't know where you saw the 32GB figure, so I can't comment.
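For reference, the mke2fs output above shows the volume really was created at 32 GiB, which is smaller than the data you're restoring:

8388608 blocks x 4096 bytes = 34359738368 bytes = 32 GiB
tar "Total bytes read"      = 44550809600 bytes ≈ 41.5 GiB

So --rootfs needs to be at least 42, plus a little headroom for filesystem overhead.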

edit: If the original FS was ZFS, you may have been reading the post-compression disk utilization.
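If it was ZFS, you can compare on-disk and uncompressed usage directly; a quick check, where the dataset name here is just an example placeholder:

# zfs get used,logicalused,compressratio rpool/data/subvol-107-disk-0

used reflects post-compression consumption, while logicalused is the uncompressed size that tar has to write out again on restore.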
 
