VM Restore Issue

scorpoin

New Member
Apr 11, 2022
I've created a backup of one of my VMs on Node1; the total size is around 97 GB. When I move it to the new server, the same backup file shows as 103 GB, and once I start the restore process I encounter the following errors.

Code:
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-107-2022_04_14-06_39_41.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp27296.fifo - /var/tmp/vzdumptmp27296
CFG: size: 402 name: qemu-server.conf
DEV: dev_id=1 size: 322122547200 devname: drive-ide0
CTIME: Thu Apr 14 06:39:43 2022
Formatting '/var/lib/vz/images/103/vm-103-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=322122547200 lazy_refcounts=off refcount_bits=16
new volume ID is 'local:103/vm-103-disk-0.qcow2'
map 'drive-ide0' to '/var/lib/vz/images/103/vm-103-disk-0.qcow2' (write zeros = 0)
progress 1% (read 3221225472 bytes, duration 7 sec)
progress 2% (read 6442450944 bytes, duration 14 sec)
progress 3% (read 9663676416 bytes, duration 21 sec)
progress 4% (read 12884901888 bytes, duration 29 sec)
progress 5% (read 16106127360 bytes, duration 37 sec)
progress 6% (read 19327352832 bytes, duration 46 sec)
progress 7% (read 22548578304 bytes, duration 56 sec)
progress 8% (read 25769803776 bytes, duration 63 sec)
progress 9% (read 28991029248 bytes, duration 71 sec)
progress 10% (read 32212254720 bytes, duration 80 sec)
progress 11% (read 35433480192 bytes, duration 89 sec)
progress 12% (read 38654705664 bytes, duration 97 sec)
progress 13% (read 41875931136 bytes, duration 105 sec)
progress 14% (read 45097156608 bytes, duration 113 sec)
progress 15% (read 48318382080 bytes, duration 118 sec)
progress 16% (read 51539607552 bytes, duration 125 sec)
progress 17% (read 54760833024 bytes, duration 133 sec)
progress 18% (read 57982058496 bytes, duration 141 sec)
progress 19% (read 61203283968 bytes, duration 150 sec)
progress 20% (read 64424509440 bytes, duration 158 sec)
progress 21% (read 67645734912 bytes, duration 167 sec)
progress 22% (read 70866960384 bytes, duration 174 sec)
progress 23% (read 74088185856 bytes, duration 183 sec)
progress 24% (read 77309411328 bytes, duration 191 sec)
progress 25% (read 80530636800 bytes, duration 199 sec)
progress 26% (read 83751862272 bytes, duration 208 sec)
progress 27% (read 86973087744 bytes, duration 218 sec)
progress 28% (read 90194313216 bytes, duration 226 sec)
progress 29% (read 93415538688 bytes, duration 235 sec)
progress 30% (read 96636764160 bytes, duration 243 sec)
progress 31% (read 99857989632 bytes, duration 252 sec)
progress 32% (read 103079215104 bytes, duration 260 sec)
progress 33% (read 106300440576 bytes, duration 269 sec)
progress 34% (read 109521666048 bytes, duration 277 sec)
progress 35% (read 112742891520 bytes, duration 283 sec)
progress 36% (read 115964116992 bytes, duration 283 sec)
progress 37% (read 119185342464 bytes, duration 288 sec)
progress 38% (read 122406567936 bytes, duration 295 sec)
progress 39% (read 125627793408 bytes, duration 301 sec)
progress 40% (read 128849018880 bytes, duration 307 sec)
progress 41% (read 132070244352 bytes, duration 315 sec)
progress 42% (read 135291469824 bytes, duration 318 sec)
progress 43% (read 138512695296 bytes, duration 326 sec)
progress 44% (read 141733920768 bytes, duration 334 sec)
progress 45% (read 144955146240 bytes, duration 343 sec)
progress 46% (read 148176371712 bytes, duration 352 sec)
progress 47% (read 151397597184 bytes, duration 360 sec)
progress 48% (read 154618822656 bytes, duration 367 sec)
progress 49% (read 157840048128 bytes, duration 375 sec)
progress 50% (read 161061273600 bytes, duration 383 sec)
progress 51% (read 164282499072 bytes, duration 390 sec)
progress 52% (read 167503724544 bytes, duration 398 sec)
progress 53% (read 170724950016 bytes, duration 405 sec)
progress 54% (read 173946175488 bytes, duration 414 sec)
progress 55% (read 177167400960 bytes, duration 424 sec)
progress 56% (read 180388626432 bytes, duration 431 sec)
progress 57% (read 183609851904 bytes, duration 440 sec)
progress 58% (read 186831077376 bytes, duration 448 sec)
progress 59% (read 190052302848 bytes, duration 457 sec)
vma: restore failed - blk_pwrite to  failed (-28)
/bin/bash: line 1: 27298 Broken pipe             lzop -d -c /var/lib/vz/dump/vzdump-qemu-107-2022_04_14-06_39_41.vma.lzo
     27299 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp27296.fifo - /var/tmp/vzdumptmp27296
temporary volume 'local:103/vm-103-disk-0.qcow2' sucessfuly removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 103 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && lzop -d -c /var/lib/vz/dump/vzdump-qemu-107-2022_04_14-06_39_41.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp27296.fifo - /var/tmp/vzdumptmp27296' failed: exit code 133

I've removed the other machines' backups as well. Any idea what could be wrong?
 
Hi,
Code:
vma: restore failed - blk_pwrite to  failed (-28)
Error -28 usually means "no space left on device". Do you have enough space on your local storage, i.e. /var/lib/vz/?
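For anyone hitting the same message: the -28 in the vma error is a negated Linux errno, and errno 28 is ENOSPC. A quick way to confirm the mapping on any Linux box:

```shell
# Map errno 28 to its message; vma reports it negated as -28.
python3 -c "import os, errno; print(errno.ENOSPC, os.strerror(errno.ENOSPC))"
# -> 28 No space left on device
```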

 
Hi,

Error -28 usually means "no space left on device". Do you have enough space on your local storage, i.e. /var/lib/vz/?
Here is the output of df -h:

Code:
Filesystem           Size  Used Avail Use% Mounted on
udev                  16G     0   16G   0% /dev
tmpfs                3.2G  153M  3.0G   5% /run
/dev/md2              20G  3.3G   16G  18% /
tmpfs                 16G   63M   16G   1% /dev/shm
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                 16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/vg-data  449G  222G  204G  53% /var/lib/vz
/dev/nvme1n1p1       511M  160K  511M   1% /boot/efi
/dev/fuse             30M   24K   30M   1% /etc/pve
tmpfs                3.2G     0  3.2G   0% /run/user/0

It still has 200 GB of space left. The compressed file's size is 103 GB; might it be more than 222 GB after decompression?
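The DEV line in the restore log already hints at the answer: it gives the disk's virtual size in bytes, and converting it shows the restored image can grow well past the free space:

```shell
# Virtual disk size from the "DEV: dev_id=1 size: ..." line of the restore log.
disk_bytes=322122547200
# Convert bytes to GiB; during restore the qcow2 image can grow up to this
# virtual size, which is more than the ~204G available on /var/lib/vz.
echo "$(( disk_bytes / (1024 * 1024 * 1024) )) GiB"
# -> 300 GiB
```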
 
Hi,

Error -28 usually means "no space left on device". Do you have enough space on your local storage, i.e. /var/lib/vz/?
@fabian Thanks for your prompt response. What if I attach another 1 TB drive to my machine? Can you guide me on how to use it with Proxmox? I'm a bit confused, since /var/lib/vz is already mounted. How do I utilize that extra space for my VMs?

Regards
 
You can either manually prepare the drive and add it to the storage configuration or use the web UI to do the same: select Datacenter > [Your Node] > Disks and then one of Directory/LVM/ZFS > Create, selecting the empty disk. Here is an overview of different storage types and their features.
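The manual route boils down to formatting the disk, mounting it, and registering the mount point as a directory storage (via `pvesm add dir` or by editing `/etc/pve/storage.cfg`). The resulting storage.cfg entry would look roughly like this; the storage name and path are hypothetical examples, not from this thread:

```
dir: extra1tb
	path /mnt/extra1tb
	content images,rootdir
```

Once registered, the new storage shows up as a target in the restore dialog.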
 
