I am moving my Proxmox server to a bigger machine. I was using LVM before, but I have now moved to ZFS. I was able to restore all my containers successfully and they work as expected. However, I am unable to restore my VM; I have tried the GUI as well as the command line. The restore starts and shows the read %, but once it reaches 100% it just sits there and doesn't seem to be doing anything.
When issuing the following command:
Bash:
qmrestore /mnt/pve/media/downloads/dump/vzdump-qemu-102-2022_02_02-13_12_55.vma.zst 1001 --storage local-zfs
The UI then shows this error:
Code:
Task viewer: VM 1001 - Restore
restore vma archive: zstd -q -d -c /mnt/pve/media/downloads/dump/vzdump-qemu-102-2022_02_02-13_12_55.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1581047.fifo - /var/tmp/vzdumptmp1581047
CFG: size: 390 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-scsi0
CTIME: Wed Feb 2 13:12:56 2022
new volume ID is 'local-zfs:vm-1001-disk-0'
map 'drive-scsi0' to '/dev/zvol/rpool/data/vm-1001-disk-0' (write zeros = 0)
progress 1% (read 343605248 bytes, duration 2 sec)
progress 2% (read 687210496 bytes, duration 4 sec)
progress 3% (read 1030815744 bytes, duration 7 sec)
progress 4% (read 1374420992 bytes, duration 10 sec)
progress 5% (read 1718026240 bytes, duration 12 sec)
progress 6% (read 2061631488 bytes, duration 14 sec)
progress 7% (read 2405236736 bytes, duration 16 sec)
progress 8% (read 2748841984 bytes, duration 18 sec)
progress 9% (read 3092381696 bytes, duration 20 sec)
progress 10% (read 3435986944 bytes, duration 22 sec)
progress 11% (read 3779592192 bytes, duration 23 sec)
progress 12% (read 4123197440 bytes, duration 26 sec)
progress 13% (read 4466802688 bytes, duration 28 sec)
progress 14% (read 4810407936 bytes, duration 29 sec)
progress 15% (read 5154013184 bytes, duration 31 sec)
progress 16% (read 5497618432 bytes, duration 32 sec)
progress 17% (read 5841158144 bytes, duration 34 sec)
progress 18% (read 6184763392 bytes, duration 36 sec)
progress 19% (read 6528368640 bytes, duration 37 sec)
progress 20% (read 6871973888 bytes, duration 40 sec)
progress 21% (read 7215579136 bytes, duration 46 sec)
progress 22% (read 7559184384 bytes, duration 57 sec)
progress 23% (read 7902789632 bytes, duration 70 sec)
progress 24% (read 8246394880 bytes, duration 80 sec)
progress 25% (read 8589934592 bytes, duration 89 sec)
progress 26% (read 8933539840 bytes, duration 98 sec)
progress 27% (read 9277145088 bytes, duration 106 sec)
progress 28% (read 9620750336 bytes, duration 115 sec)
progress 29% (read 9964355584 bytes, duration 125 sec)
progress 30% (read 10307960832 bytes, duration 141 sec)
progress 31% (read 10651566080 bytes, duration 153 sec)
progress 32% (read 10995171328 bytes, duration 160 sec)
progress 33% (read 11338776576 bytes, duration 170 sec)
progress 34% (read 11682316288 bytes, duration 177 sec)
progress 35% (read 12025921536 bytes, duration 185 sec)
progress 36% (read 12369526784 bytes, duration 193 sec)
progress 37% (read 12713132032 bytes, duration 202 sec)
progress 38% (read 13056737280 bytes, duration 207 sec)
progress 39% (read 13400342528 bytes, duration 218 sec)
progress 40% (read 13743947776 bytes, duration 227 sec)
progress 41% (read 14087553024 bytes, duration 235 sec)
progress 42% (read 14431092736 bytes, duration 243 sec)
progress 43% (read 14774697984 bytes, duration 250 sec)
progress 44% (read 15118303232 bytes, duration 255 sec)
progress 45% (read 15461908480 bytes, duration 262 sec)
progress 46% (read 15805513728 bytes, duration 268 sec)
progress 47% (read 16149118976 bytes, duration 274 sec)
progress 48% (read 16492724224 bytes, duration 279 sec)
progress 49% (read 16836329472 bytes, duration 284 sec)
progress 50% (read 17179869184 bytes, duration 288 sec)
progress 51% (read 17523474432 bytes, duration 294 sec)
progress 52% (read 17867079680 bytes, duration 299 sec)
progress 53% (read 18210684928 bytes, duration 305 sec)
progress 54% (read 18554290176 bytes, duration 309 sec)
progress 55% (read 18897895424 bytes, duration 315 sec)
progress 56% (read 19241500672 bytes, duration 321 sec)
progress 57% (read 19585105920 bytes, duration 327 sec)
progress 58% (read 19928711168 bytes, duration 334 sec)
progress 59% (read 20272250880 bytes, duration 341 sec)
progress 60% (read 20615856128 bytes, duration 348 sec)
progress 61% (read 20959461376 bytes, duration 355 sec)
progress 62% (read 21303066624 bytes, duration 359 sec)
progress 63% (read 21646671872 bytes, duration 363 sec)
progress 64% (read 21990277120 bytes, duration 370 sec)
progress 65% (read 22333882368 bytes, duration 376 sec)
progress 66% (read 22677487616 bytes, duration 379 sec)
progress 67% (read 23021027328 bytes, duration 386 sec)
progress 68% (read 23364632576 bytes, duration 386 sec)
progress 69% (read 23708237824 bytes, duration 386 sec)
progress 70% (read 24051843072 bytes, duration 389 sec)
progress 71% (read 24395448320 bytes, duration 389 sec)
progress 72% (read 24739053568 bytes, duration 389 sec)
progress 73% (read 25082658816 bytes, duration 389 sec)
progress 74% (read 25426264064 bytes, duration 389 sec)
progress 75% (read 25769803776 bytes, duration 389 sec)
progress 76% (read 26113409024 bytes, duration 392 sec)
progress 77% (read 26457014272 bytes, duration 392 sec)
progress 78% (read 26800619520 bytes, duration 393 sec)
progress 79% (read 27144224768 bytes, duration 393 sec)
progress 80% (read 27487830016 bytes, duration 393 sec)
progress 81% (read 27831435264 bytes, duration 393 sec)
progress 82% (read 28175040512 bytes, duration 393 sec)
progress 83% (read 28518645760 bytes, duration 393 sec)
progress 84% (read 28862185472 bytes, duration 393 sec)
progress 85% (read 29205790720 bytes, duration 393 sec)
progress 86% (read 29549395968 bytes, duration 393 sec)
progress 87% (read 29893001216 bytes, duration 393 sec)
progress 88% (read 30236606464 bytes, duration 393 sec)
progress 89% (read 30580211712 bytes, duration 393 sec)
progress 90% (read 30923816960 bytes, duration 393 sec)
progress 91% (read 31267422208 bytes, duration 393 sec)
progress 92% (read 31610961920 bytes, duration 393 sec)
progress 93% (read 31954567168 bytes, duration 393 sec)
progress 94% (read 32298172416 bytes, duration 393 sec)
progress 95% (read 32641777664 bytes, duration 393 sec)
progress 96% (read 32985382912 bytes, duration 393 sec)
progress 97% (read 33328988160 bytes, duration 393 sec)
progress 98% (read 33672593408 bytes, duration 393 sec)
progress 99% (read 34016198656 bytes, duration 393 sec)
progress 100% (read 34359738368 bytes, duration 393 sec)
unable to cleanup 'local-zfs:vm-1001-disk-0' - zfs error: cannot destroy 'rpool/data/vm-1001-disk-0': dataset is busy
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 1001 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/media/downloads/dump/vzdump-qemu-102-2022_02_02-13_12_55.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1581047.fifo - /var/tmp/vzdumptmp1581047' failed: interrupted by signal
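For what it's worth, a quick sanity check of the numbers in the log above (disk size from the DEV line, total time from the 100% line) shows the whole 32 GiB image is read in 393 seconds, roughly 83 MiB/s on average, and the last ~25% of the reads complete almost instantly while the task keeps running:

```shell
# Figures taken from the restore log above
size_bytes=34359738368   # DEV line: size of drive-scsi0
duration_s=393           # duration reported at "progress 100%"

# Disk size in GiB
echo $((size_bytes / 1024 / 1024 / 1024))          # prints 32

# Average read rate in MiB/s over the whole restore
echo $((size_bytes / duration_s / 1024 / 1024))    # prints 83
```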
My local-zfs pool is a mirror (RAID1) of 500 GB HDDs, so there is plenty of space remaining; after restoring all my containers, only about 10% of the space on local-zfs has been used.
I have tried it 4-5 times now, and it fails the same way every time. How can I restore this VM?
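In case it helps narrow things down, these are the checks I can run after a failed attempt to see what is keeping the leftover zvol busy (the volume and device names are taken from the log above; the `command -v` guard is only there so the snippet is safe to paste on any machine):

```shell
zvol="rpool/data/vm-1001-disk-0"   # leftover volume from the failed restore
dev="/dev/zvol/$zvol"              # device node the log maps drive-scsi0 to

if command -v zfs >/dev/null 2>&1; then
    zfs list -t volume "$zvol"        # does the partial zvol still exist?
    zfs list -t snapshot -r "$zvol"   # any snapshots keeping it referenced?
    fuser -v "$dev" 2>&1 || true      # which processes hold the device open?
else
    echo "zfs not installed on this machine"
fi
```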
Thanks in advance...