Windows 11 backup fails to restore with errors

KingRichard

Hi all,

I'm getting really desperate; I've spent two days trying to set up my Windows 11 VM with a proper backup.

1. Create the Windows 11 VM (done)
2. Pass through the GPU (done)
3. Make a backup (done); the backup is stored on my NAS
4. For a rainy day: remove the complete Windows VM and restore it from the backup (error after 99% with
Code:
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp456564.fifo - /var/tmp/vzdumptmp456564' failed: exit code 133

Does anyone have any advice? This happens to all my previous Windows backups, while my Linux backups are not affected at all.

Proxmox is currently installed on the main NVMe, and Windows is installed on the second NVMe.

Help?
 
Hi,
please make sure that you have enough space left on your target storage for the restore.
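As a rough sketch (not part of the original reply), these are commands that can be used to check the free space on the target storage before retrying the restore; <STORAGE> is a placeholder for your storage name:
Bash:
# List all configured Proxmox storages with total/used/available space
pvesm status
# For LVM(-thin) storages, show the free space in the volume group
vgs
# For directory or NFS storages, check free space on the mount point
df -h /mnt/pve/<STORAGE>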
 
Hi,
the exit code is also interesting: exit code 133. Looking it up, it corresponds to EHWPOISON 133 "Memory page has hardware error". Was it the same for your other attempts? I'd try running a memory test, e.g. via the Proxmox installer ISO in the advanced options.

EDIT: @Chris told me this error can apparently also happen when there is not enough space: https://forum.proxmox.com/threads/restore-error.16027/
 
Well, to be honest, this only started happening very recently. Previously, all my Windows backups could be restored without any problem.

Since I rebuilt my Proxmox machine on the latest version, 8.0.5, all my Windows backups have failed with a similar error. I know the backup is 1TB in size; however, the NVMe I restore to is 2TB, so I still have no clue why this keeps happening. I reinstalled Windows 11 and made a new backup: again, a similar error. I tried reformatting my 2TB NVMe, without any luck; the error keeps happening.

P.S. I switched my 1TB NVMe with my 2TB NVMe, so now it's the reverse. Win11 install and backup <- so far no problem restoring. I will try the GPU passthrough next and hope that causes no problem.
 
Please also share your storage and VM configuration, and the journal around the time of the failed restore.
Bash:
qm config <VMID>
cat /etc/pve/storage.cfg
journalctl --since <DATETIME> --until <DATETIME>

Can you rule out that this is an issue with your NVMe drive?
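As a sketch of how the drive could be checked (assuming smartmontools and nvme-cli are installed; the device name /dev/nvme0n1 is an assumption, adjust to your system):
Bash:
# Full SMART/health report for the NVMe drive (device name is an assumption)
smartctl -a /dev/nvme0n1
# NVMe-specific health and error counters
nvme smart-log /dev/nvme0n1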
 
Hi Chris,

This is what happens with my backup when I try to restore it to the other NVMe; it ends up with the same result. Actually, all my previous Windows backups ended with the same result after 99%.

Code:
restore vma archive: zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp3254.fifo - /var/tmp/vzdumptmp3254
CFG: size: 877 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 429496729600 devname: drive-scsi0
DEV: dev_id=3 size: 4194304 devname: drive-tpmstate0-backup
CTIME: Wed Jul 26 13:27:46 2023
  Rounding up size to full physical extent 4.00 MiB
  Wiping ext4 signature on /dev/KLEVV/vm-300-disk-0.
  Logical volume "vm-300-disk-0" created.
new volume ID is 'KLEVV:vm-300-disk-0'
  Logical volume "vm-300-disk-1" created.
new volume ID is 'KLEVV:vm-300-disk-1'
  Logical volume "vm-300-disk-2" created.
new volume ID is 'KLEVV:vm-300-disk-2'
map 'drive-efidisk0' to '/dev/KLEVV/vm-300-disk-0' (write zeros = 1)
map 'drive-scsi0' to '/dev/KLEVV/vm-300-disk-1' (write zeros = 1)
map 'drive-tpmstate0-backup' to '/dev/KLEVV/vm-300-disk-2' (write zeros = 1)
progress 1% (read 4295032832 bytes, duration 19 sec)
progress 2% (read 8590065664 bytes, duration 36 sec)
progress 3% (read 12885098496 bytes, duration 57 sec)
progress 4% (read 17180065792 bytes, duration 82 sec)
progress 5% (read 21475098624 bytes, duration 90 sec)
progress 6% (read 25770131456 bytes, duration 95 sec)
progress 7% (read 30065164288 bytes, duration 101 sec)
progress 8% (read 34360131584 bytes, duration 124 sec)
progress 9% (read 38655164416 bytes, duration 147 sec)
progress 10% (read 42950197248 bytes, duration 164 sec)
progress 11% (read 47245164544 bytes, duration 197 sec)
progress 12% (read 51540197376 bytes, duration 231 sec)
progress 13% (read 55835230208 bytes, duration 269 sec)
progress 14% (read 60130263040 bytes, duration 304 sec)
progress 15% (read 64425230336 bytes, duration 321 sec)
progress 16% (read 68720263168 bytes, duration 351 sec)
progress 17% (read 73015296000 bytes, duration 391 sec)
progress 18% (read 77310263296 bytes, duration 415 sec)
progress 19% (read 81605296128 bytes, duration 434 sec)
progress 20% (read 85900328960 bytes, duration 458 sec)
progress 21% (read 90195361792 bytes, duration 493 sec)
progress 22% (read 94490329088 bytes, duration 528 sec)
progress 23% (read 98785361920 bytes, duration 565 sec)
progress 24% (read 103080394752 bytes, duration 604 sec)
progress 25% (read 107375362048 bytes, duration 641 sec)
progress 26% (read 111670394880 bytes, duration 678 sec)
progress 27% (read 115965427712 bytes, duration 715 sec)
progress 28% (read 120260460544 bytes, duration 753 sec)
progress 29% (read 124555427840 bytes, duration 790 sec)
progress 30% (read 128850460672 bytes, duration 829 sec)
progress 31% (read 133145493504 bytes, duration 867 sec)
progress 32% (read 137440526336 bytes, duration 904 sec)
progress 33% (read 141735493632 bytes, duration 935 sec)
progress 34% (read 146030526464 bytes, duration 959 sec)
progress 35% (read 150325559296 bytes, duration 978 sec)
progress 36% (read 154620526592 bytes, duration 1001 sec)
progress 37% (read 158915559424 bytes, duration 1006 sec)
progress 38% (read 163210592256 bytes, duration 1012 sec)
progress 39% (read 167505625088 bytes, duration 1018 sec)
progress 40% (read 171800592384 bytes, duration 1024 sec)
progress 41% (read 176095625216 bytes, duration 1029 sec)
progress 42% (read 180390658048 bytes, duration 1035 sec)
progress 43% (read 184685625344 bytes, duration 1041 sec)
progress 44% (read 188980658176 bytes, duration 1046 sec)
progress 45% (read 193275691008 bytes, duration 1052 sec)
progress 46% (read 197570723840 bytes, duration 1058 sec)
progress 47% (read 201865691136 bytes, duration 1063 sec)
progress 48% (read 206160723968 bytes, duration 1078 sec)
progress 49% (read 210455756800 bytes, duration 1101 sec)
progress 50% (read 214750724096 bytes, duration 1107 sec)
progress 51% (read 219045756928 bytes, duration 1113 sec)
progress 52% (read 223340789760 bytes, duration 1118 sec)
progress 53% (read 227635822592 bytes, duration 1124 sec)
progress 54% (read 231930789888 bytes, duration 1129 sec)
progress 55% (read 236225822720 bytes, duration 1135 sec)
progress 56% (read 240520855552 bytes, duration 1140 sec)
progress 57% (read 244815888384 bytes, duration 1146 sec)
progress 58% (read 249110855680 bytes, duration 1151 sec)
progress 59% (read 253405888512 bytes, duration 1157 sec)
progress 60% (read 257700921344 bytes, duration 1162 sec)
progress 61% (read 261995888640 bytes, duration 1168 sec)
progress 62% (read 266290921472 bytes, duration 1174 sec)
progress 63% (read 270585954304 bytes, duration 1179 sec)
progress 64% (read 274880987136 bytes, duration 1185 sec)
progress 65% (read 279175954432 bytes, duration 1191 sec)
progress 66% (read 283470987264 bytes, duration 1207 sec)
progress 67% (read 287766020096 bytes, duration 1212 sec)
progress 68% (read 292060987392 bytes, duration 1218 sec)
progress 69% (read 296356020224 bytes, duration 1223 sec)
progress 70% (read 300651053056 bytes, duration 1229 sec)
progress 71% (read 304946085888 bytes, duration 1234 sec)
progress 72% (read 309241053184 bytes, duration 1240 sec)
progress 73% (read 313536086016 bytes, duration 1245 sec)
progress 74% (read 317831118848 bytes, duration 1251 sec)
progress 75% (read 322126086144 bytes, duration 1257 sec)
progress 76% (read 326421118976 bytes, duration 1262 sec)
progress 77% (read 330716151808 bytes, duration 1268 sec)
progress 78% (read 335011184640 bytes, duration 1273 sec)
progress 79% (read 339306151936 bytes, duration 1279 sec)
progress 80% (read 343601184768 bytes, duration 1284 sec)
progress 81% (read 347896217600 bytes, duration 1290 sec)
progress 82% (read 352191250432 bytes, duration 1295 sec)
progress 83% (read 356486217728 bytes, duration 1301 sec)
progress 84% (read 360781250560 bytes, duration 1306 sec)
progress 85% (read 365076283392 bytes, duration 1312 sec)
progress 86% (read 369371250688 bytes, duration 1317 sec)
progress 87% (read 373666283520 bytes, duration 1323 sec)
progress 88% (read 377961316352 bytes, duration 1328 sec)
progress 89% (read 382256349184 bytes, duration 1334 sec)
progress 90% (read 386551316480 bytes, duration 1339 sec)
progress 91% (read 390846349312 bytes, duration 1345 sec)
progress 92% (read 395141382144 bytes, duration 1351 sec)
progress 93% (read 399436349440 bytes, duration 1356 sec)
progress 94% (read 403731382272 bytes, duration 1362 sec)
progress 95% (read 408026415104 bytes, duration 1367 sec)
progress 96% (read 412321447936 bytes, duration 1373 sec)
progress 97% (read 416616415232 bytes, duration 1378 sec)
progress 98% (read 420911448064 bytes, duration 1384 sec)
progress 99% (read 425206480896 bytes, duration 1389 sec)
_26-13_27_40.vma.zst : Decoding error (36) : Restored data doesn't match checksum
vma: restore failed - short vma extent (628224 < 684544)
/bin/bash: line 1:  3264 Exit 1                  zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst
      3265 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp3254.fifo - /var/tmp/vzdumptmp3254
  Logical volume "vm-300-disk-0" successfully removed.
temporary volume 'KLEVV:vm-300-disk-0' sucessfuly removed
  Logical volume "vm-300-disk-1" successfully removed.
temporary volume 'KLEVV:vm-300-disk-1' sucessfuly removed
  Logical volume "vm-300-disk-2" successfully removed.
temporary volume 'KLEVV:vm-300-disk-2' sucessfuly removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 300 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp3254.fifo - /var/tmp/vzdumptmp3254' failed: exit code 133


Journalctl shows this error:
Code:
Jul 27 08:30:18 main pvedaemon[1275]: <root@pam> starting task UPID:main:00000CB6:0000CAB2:64C1C8AA:qmrestore:300:root@pam:
Jul 27 08:37:28 main systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Jul 27 08:37:28 main systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jul 27 08:37:28 main systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Jul 27 08:37:28 main systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jul 27 08:44:46 main pvedaemon[1277]: <root@pam> successful auth for user 'root@pam'
Jul 27 08:53:39 main kernel: show_signal: 13 callbacks suppressed
Jul 27 08:53:39 main kernel: traps: vma[3265] trap int3 ip:7f1ca615de92 sp:7ffed1cb7e50 error:0 in libglib-2.0.so.0.7400.6[7f1ca611f000+8d000]
Jul 27 08:53:46 main kernel: dm-0: detected capacity change from 8192 to 0
Jul 27 08:53:46 main kernel: dm-1: detected capacity change from 838860800 to 0
Jul 27 08:53:46 main kernel: dm-12: detected capacity change from 8192 to 0
Jul 27 08:53:46 main kernel: dm-0: detected capacity change from 8192 to 0
Jul 27 08:53:47 main kernel: dm-0: detected capacity change from 838860800 to 0
Jul 27 08:53:47 main kernel: dm-0: detected capacity change from 8192 to 0
Jul 27 08:53:47 main pvedaemon[3254]: no lock found trying to remove 'create' lock
Jul 27 08:53:47 main pvedaemon[3254]: error before or during data restore, some or all disks were not completely restored. VM 300 state is NOT cleaned up.
Jul 27 08:53:47 main pvedaemon[3254]: command 'set -o pipefail && zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp3254.fifo - /var/tmp/vzdumptmp3254' failed: exit code 133
Jul 27 08:53:47 main pvedaemon[1275]: <root@pam> end task UPID:main:00000CB6:0000CAB2:64C1C8AA:qmrestore:300:root@pam: command 'set -o pipefail && zstd -q -d -c /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp3254.fifo - /var/tmp/vzdumptmp3254' failed: exit code 133


(Attached screenshot: CleanShot 2023-07-27 at 09.23.00.png)
 
Code:
progress 99% (read 425206480896 bytes, duration 1389 sec)
_26-13_27_40.vma.zst : Decoding error (36) : Restored data doesn't match checksum
I think this error comes from zstd itself during decompression. It sounds like the compressed file is corrupted at the end. You can try decompressing the compressed vma file with zstd -d --no-check /path/to/archive.vma.zst and restoring the resulting vma file afterwards. Be aware that you need enough space on the storage where you decompress it.

EDIT: I'd also still run a memtest to make sure there's no issue there.
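For instance, a rough sketch of the manual route (the archive path is taken from the log above; the VM ID 300 and the storage name KLEVV are assumptions based on this thread, adjust as needed):
Bash:
# Decompress next to the archive, skipping the checksum verification that fails;
# this needs enough free space on the NAS share for the uncompressed vma
# (roughly 425 GB going by the progress log above)
zstd -d --no-check /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma.zst
# Restore the uncompressed vma archive to VM ID 300 on the LVM storage
qmrestore /mnt/pve/Synology/dump/vzdump-qemu-200-2023_07_26-13_27_40.vma 300 --storage KLEVV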
 
Again, thanks for the input. If I want to do a manual restore, is there a brief step-by-step guide? I mean, where should I decompress the files, and where and how do I upload them? I'd really like to get to the bottom of this, since none of my Linux backups have this problem, only the Windows backups, which I suspect has something to do with TPM?

I first tried to restore a 1TB backup, which might have been a storage-space problem; however, the latest one today is only 400GB, while my NVMe is 2TB.
 
Do you mean newly created backups also have this problem? You'd still restore the same way after decompressing, just selecting the vma file instead of the vma.zst file.
 
Hi Fiona,

The manual restore was successful. Thank you so much. Somehow the new build never ran as well as the old build, so with this I have a working Windows again. Thanks!
 