Hello,
I have a customer with a Proxmox 4.x host that I inherited, with 2 CTs on it, and I need to migrate those 2 CTs to a new Proxmox 8.x environment. I am facing some problems I don't know how to get past.
I created backups of both CTs on the old Proxmox host and copied them to the new machine, then went to the Backups GUI on the new Proxmox, selected the backup file from the list, and clicked the Restore button.
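For reference, the CLI equivalent of what I did looks roughly like this (the CT ID, dump path, and new host's name are examples, not my real ones):
Code:
# On the old Proxmox 4.x host: back up the CT in stop mode with lzo compression
vzdump 100 --mode stop --compress lzo --dumpdir /var/lib/vz/dump

# Copy the resulting archive to the new Proxmox 8.x host
scp /var/lib/vz/dump/vzdump-lxc-100-*.tar.lzo root@new-pve:/var/lib/vz/dump/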
The test machine restored successfully and runs fine on Proxmox 8, but the production machine doesn't restore.
The restore process runs and runs until it fills the whole disk and stops with an error along the lines of "cannot write, no space left ...".
I've tried to restore from 3 different backups: stop mode/lzo, snapshot mode/raw, and snapshot mode/tar.gz.
It's always the same error: no space left. It looks like it keeps unpacking something in a loop until it fills the disk and fails.
I also tried to restore it from the terminal with pct restore, extending the disk size to 2TB just to be sure. The restore process ran until the 2TB were filled, then it failed.
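The terminal attempt was roughly the following (the storage name is an example, 2000 is the rootfs size in GB, and I'm assuming --rootfs overrides the disk size stored in the backup's config):
Code:
pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2024_02_03-00_00_01.tar.gz --storage local --rootfs local:2000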
I've also tried to rsync the ZFS volume to a raw image file, with no success: it filled the disk before all the files were copied.
That CT has a 400GB disk with 335GB of data on it. I created a 1TB raw .img file and rsynced the ZFS volume into it, and it filled the whole 1TB and failed.
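Sketched with example paths (the ZFS subvolume mountpoint and image location are placeholders for my real ones):
Code:
# Create a 1TB raw image, format it and mount it via loop
truncate -s 1T /mnt/backup/ct100.img
mkfs.ext4 -F /mnt/backup/ct100.img   # -F because the target is a regular file
mount -o loop /mnt/backup/ct100.img /mnt/target

# Copy the CT's ZFS-backed rootfs, preserving hard links, ACLs, xattrs and sparse files
rsync -aHAXS --numeric-ids /rpool/data/subvol-100-disk-0/ /mnt/target/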
Does anyone have any suggestions on what I need to do to restore the machine to the new Proxmox?
Thanks in advance.
EDIT!
I forgot to mention that I also tried to decompress the backup file manually. It doesn't finish: it decompresses until it fails, and I could see it use all 3TB of disk without completing. The backup itself is about 182GB, of a machine that has 335GB of data on a 400GB disk. I couldn't even unpack the files with tar.
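The manual unpack attempt was something like this (the target directory is an example):
Code:
mkdir -p /mnt/unpack
tar -xzpf /var/lib/vz/dump/vzdump-lxc-100-2024_02_03-00_00_01.tar.gz -C /mnt/unpack --numeric-owner --sparse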
This is the error text I got from the restore process:
Code:
recovering backed-up configuration from '/var/lib/vz/dump/vzdump-lxc-100-2024_02_03-00_00_01.tar.gz'
Formatting '/var/lib/vz/images/100/vm-100-disk-0.raw', fmt=raw size=1717986918400 preallocation=off
Creating filesystem with 419430400 4k blocks and 104857600 inodes
Filesystem UUID: 0212b063-d4e7-4a9b-a99f-3a302966c7cb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
restoring '/var/lib/vz/dump/vzdump-lxc-100-2024_02_03-00_00_01.tar.gz' now..
extracting archive '/var/lib/vz/dump/vzdump-lxc-100-2024_02_03-00_00_01.tar.gz'
command 'umount -d /var/lib/lxc/100/rootfs/' failed: received interrupt
TASK ERROR: unable to restore CT 100 - command 'tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/100/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: received interrupt
EDIT 2!
I managed to restore the machine yesterday evening by simply making a really big root disk, about 2.9TB. The funny part is that this same CT uses 335GB of its 400GB disk on the old Proxmox, while on the new one it has a 2.9TB root disk and currently uses 1.7TB. Why there is such a difference, I don't know. Does anyone have any clue?
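One thing I still want to check, in case it explains the growth: the restore tar line above runs with --sparse, and a sparse file that gets written out fully would make the restored data much larger than the 335GB the old host reports. A quick comparison of apparent vs. allocated size inside the container, assuming GNU find and coreutils are available:
Code:
# Apparent size vs. actually allocated blocks; a big gap points at sparse files
du -sh --apparent-size /
du -sh /

# List files that are mostly holes (allocated far less than their apparent size)
find / -xdev -type f -printf '%S %s %p\n' 2>/dev/null | awk '$1 < 0.1 && $2 > 1048576'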