restore failed: OpenSSL error

garnoux (New Member)
Hello,

I'm getting this error when trying to restore a VM from PBS on the same network.

`restore failed: OpenSSL error`

Here is the full log:

Code:
Using encryption key from file descriptor..
new volume ID is 'local-zfs:vm-4120-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository pbs-sync@pbs@10.5.5.21:PBS vm/4120/2024-03-24T20:01:50Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-4120-disk-0 --verbose --format raw --keyfile /etc/pve/priv/storage/pbs-nas.enc --skip-zero
connecting to repository 'pbs-sync@pbs@10.5.5.21:PBS'
open block backend for target '/dev/zvol/rpool/data/vm-4120-disk-0'
starting to restore snapshot 'vm/4120/2024-03-24T20:01:50Z'
download and verify backup index
progress 1% (read 218103808 bytes, zeroes = 55% (121634816 bytes), duration 3 sec)
progress 2% (read 432013312 bytes, zeroes = 29% (125829120 bytes), duration 15 sec)
progress 3% (read 645922816 bytes, zeroes = 19% (125829120 bytes), duration 27 sec)
progress 4% (read 859832320 bytes, zeroes = 17% (150994944 bytes), duration 37 sec)
progress 5% (read 1073741824 bytes, zeroes = 14% (155189248 bytes), duration 49 sec)
progress 6% (read 1291845632 bytes, zeroes = 13% (171966464 bytes), duration 60 sec)
progress 7% (read 1505755136 bytes, zeroes = 23% (356515840 bytes), duration 62 sec)
progress 8% (read 1719664640 bytes, zeroes = 33% (570425344 bytes), duration 62 sec)
progress 9% (read 1933574144 bytes, zeroes = 39% (754974720 bytes), duration 62 sec)
progress 10% (read 2147483648 bytes, zeroes = 35% (754974720 bytes), duration 66 sec)
progress 11% (read 2365587456 bytes, zeroes = 31% (754974720 bytes), duration 71 sec)
progress 12% (read 2579496960 bytes, zeroes = 29% (754974720 bytes), duration 76 sec)
progress 13% (read 2793406464 bytes, zeroes = 27% (754974720 bytes), duration 83 sec)
progress 14% (read 3007315968 bytes, zeroes = 25% (754974720 bytes), duration 90 sec)
progress 15% (read 3221225472 bytes, zeroes = 23% (754974720 bytes), duration 100 sec)
progress 16% (read 3439329280 bytes, zeroes = 21% (754974720 bytes), duration 110 sec)
progress 17% (read 3653238784 bytes, zeroes = 20% (754974720 bytes), duration 116 sec)
progress 18% (read 3867148288 bytes, zeroes = 19% (754974720 bytes), duration 126 sec)
progress 19% (read 4081057792 bytes, zeroes = 18% (754974720 bytes), duration 132 sec)
progress 20% (read 4294967296 bytes, zeroes = 17% (754974720 bytes), duration 139 sec)
progress 21% (read 4513071104 bytes, zeroes = 16% (754974720 bytes), duration 145 sec)
progress 22% (read 4726980608 bytes, zeroes = 15% (754974720 bytes), duration 151 sec)
progress 23% (read 4940890112 bytes, zeroes = 15% (754974720 bytes), duration 160 sec)
progress 24% (read 5154799616 bytes, zeroes = 14% (754974720 bytes), duration 169 sec)
progress 25% (read 5368709120 bytes, zeroes = 14% (754974720 bytes), duration 177 sec)
progress 26% (read 5586812928 bytes, zeroes = 13% (754974720 bytes), duration 187 sec)
progress 27% (read 5800722432 bytes, zeroes = 13% (754974720 bytes), duration 195 sec)
progress 28% (read 6014631936 bytes, zeroes = 12% (754974720 bytes), duration 201 sec)
progress 29% (read 6228541440 bytes, zeroes = 12% (767557632 bytes), duration 207 sec)
progress 30% (read 6442450944 bytes, zeroes = 11% (771751936 bytes), duration 212 sec)
progress 31% (read 6660554752 bytes, zeroes = 11% (771751936 bytes), duration 218 sec)
progress 32% (read 6874464256 bytes, zeroes = 11% (771751936 bytes), duration 224 sec)
progress 33% (read 7088373760 bytes, zeroes = 10% (771751936 bytes), duration 229 sec)
progress 34% (read 7302283264 bytes, zeroes = 10% (771751936 bytes), duration 237 sec)
progress 35% (read 7516192768 bytes, zeroes = 10% (771751936 bytes), duration 244 sec)
progress 36% (read 7734296576 bytes, zeroes = 9% (771751936 bytes), duration 252 sec)
progress 37% (read 7948206080 bytes, zeroes = 9% (771751936 bytes), duration 260 sec)
progress 38% (read 8162115584 bytes, zeroes = 9% (771751936 bytes), duration 268 sec)
progress 39% (read 8376025088 bytes, zeroes = 9% (796917760 bytes), duration 279 sec)
restore failed: OpenSSL error
temporary volume 'local-zfs:vm-4120-disk-0' sucessfuly removed
error before or during data restore, some or all disks were not completely restored. VM 4120 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/pbs-restore --repository pbs-sync@pbs@10.5.5.21:PBS vm/4120/2024-03-24T20:01:50Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-4120-disk-0 --verbose --format raw --keyfile /etc/pve/priv/storage/pbs-nas.enc --skip-zero' failed: exit code 255

Nothing in the syslog except this:

Code:
Mar 25 19:53:42 pve-r420 sudo[204085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=109)
Mar 25 19:53:42 pve-r420 sudo[204085]: pam_unix(sudo:session): session closed for user root
Mar 25 19:53:42 pve-r420 kernel: Alternate GPT is invalid, using primary GPT.
Mar 25 19:53:42 pve-r420 kernel:  zd224: p1 p2 p3
Mar 25 19:53:42 pve-r420 lvm[204416]: /dev/zd224p3 excluded: device is rejected by filter config.
Mar 25 19:53:43 pve-r420 sudo[204425]:   zabbix : PWD=/ ; USER=root ; COMMAND=/usr/sbin/zfs get -o value -Hp available data/vm-3247-disk-1
Mar 25 19:53:43 pve-r420 sudo[204425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=109)
Mar 25 19:53:43 pve-r420 sudo[204425]: pam_unix(sudo:session): session closed for user root
Mar 25 19:53:44 pve-r420 sudo[204525]:   zabbix : PWD=/ ; USER=root ; COMMAND=/usr/sbin/zfs get -o value -Hp available rpool/ROOT
Mar 25 19:53:44 pve-r420 sudo[204525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=109)
Mar 25 19:53:44 pve-r420 sudo[204525]: pam_unix(sudo:session): session closed for user root
Mar 25 19:53:44 pve-r420 pvedaemon[130646]: error before or during data restore, some or all disks were not completely restored. VM 4120 state is NOT cleaned up.
Mar 25 19:53:44 pve-r420 pvedaemon[130646]: command '/usr/bin/pbs-restore --repository pbs-sync@pbs@10.5.5.21:PBS-AUTREVILLE vm/4120/2024-03-24T20:01:50Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-4120-disk-0 --verbose --format raw --keyfile /etc/pve/priv/storage/pbs-nas.enc --skip-zero' failed: exit code 255
Mar 25 19:53:44 pve-r420 pvedaemon[4191591]: <root@pam> end task UPID:pve-r420:0001FE56:0E90D872:6601C713:qmrestore:4120:root@pam: command '/usr/bin/pbs-restore --repository pbs-sync@pbs@10.5.5.21:PBS-AUTREVILLE vm/4120/2024-03-24T20:01:50Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-4120-disk-0 --verbose --format raw --keyfile /etc/pve/priv/storage/pbs-nas.enc --skip-zero' failed: exit code 255

I've tried to disable tso, gso and gro, but nothing changes.
It always stops at the same point.
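
For reference, this is typically done with ethtool; the interface name eno1 below is a placeholder, not the actual NIC from this setup:

Code:
# disable TCP segmentation offload, generic segmentation offload
# and generic receive offload on the NIC used for the restore
ethtool -K eno1 tso off gso off gro off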


Any idea how to debug this?

Thanks!
 
Same error at the same percentage on another PBS and with another backup (same VM).

I also set a bandwidth limit on the firewall. It took approximately 600 seconds this time, but it still failed at 39% (8376025088 bytes).
 
Hi,
was this snapshot verified on the Proxmox Backup Server side? Are there any errors in the PBS systemd journal during the restore? I suspect that a chunk got corrupted; all snapshots still referencing it will then fail to restore.
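
You can check the journal around the time of the failed restore with, for example (timestamps taken from the syslog excerpt above):

Code:
journalctl -u proxmox-backup-proxy.service --since "2024-03-25 19:40" --until "2024-03-25 20:00"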
 
Hi,
All backups were verified on the same day.

No error logs on the server. And I get the same error with a backup of the VM on another PBS.

I've re-verified the whole group 4120: no errors.
 
What is your `pveversion -v` and your `proxmox-backup-manager version --verbose`? When you say you get the same error with a VM backup on another PBS, do you mean a snapshot synced to that server by a sync job, or are these completely unrelated?
 
This is a sync job, so effectively it is the same, lol.
But re-running a verification job reports no errors on both PBS instances, even for the first (full) backup of the VM. The verification was triggered as shown below.
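
For reference, a re-verification of a whole datastore can be triggered like this (datastore name PBS taken from the repository string above; the exact invocation used here is an assumption):

Code:
proxmox-backup-manager verify PBS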

Code:
root@pbs-nas:~# proxmox-backup-manager version --verbose
proxmox-backup                     unknown      running kernel: 6.5.13-1-pve
proxmox-backup-server              3.1.4-1      running version: 3.1.4     
proxmox-kernel-helper              8.1.0                                   
proxmox-kernel-6.5.13-1-pve-signed 6.5.13-1                                 
proxmox-kernel-6.5                 6.5.13-3                                 
proxmox-kernel-6.5.11-8-pve-signed 6.5.11-8                                 
proxmox-kernel-6.5.11-4-pve-signed 6.5.11-4                                 
ifupdown2                          3.2.0-1+pmx8                             
libjs-extjs                        7.0.0-4                                 
proxmox-backup-docs                3.1.4-1                                 
proxmox-backup-client              3.1.4-1                                 
proxmox-mail-forward               0.2.3                                   
proxmox-mini-journalreader         1.4.0                                   
proxmox-offline-mirror-helper      0.6.5                                   
proxmox-widget-toolkit             4.1.4                                   
pve-xtermjs                        5.3.0-3                                 
smartmontools                      7.3-pve1                                 
zfsutils-linux                     2.2.3-pve1

Code:
root@pve-r420:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
 
Is this limited to the snapshots in this group, or does this error also happen for other restore tasks? It might be a connection issue during the restore; can you rule that out?
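
One way to rule that out is a sustained throughput test between the two hosts, for example with iperf3 (it needs to be installed on both sides; the target IP is the PBS address from the task log):

Code:
# on the PBS host
iperf3 -s
# on the PVE host, test for longer than the restore runs before failing
iperf3 -c 10.5.5.21 -t 300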
 
It works with another group... so it seems the backup is corrupted. But in that case, why didn't the verification job notice it?
 
Restoring on another PVE fails too. It's definitely an issue with this backup.
Very strange behaviour.
 
The server side can only verify a chunk's recorded digest against the digest of its content, and that seems to be okay for the chunks these snapshots have indexed. The error, however, seems to happen on the encryption layer.

Please try the following on the PBS host:
Code:
proxmox-backup-debug recover index /<datastore>/vm/<vmid>/<snapshot>/<drive>.fidx /<datastore>/.chunks --ignore-missing-chunks --ignore-corrupt-chunks --keyfile /<path-to-key>

This should restore the disk image to your current working directory, ignoring missing and corrupt chunks. If this works, you can check whether you get more information by leaving out the two `--ignore` flags.
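
Filled in with the values from this thread, that would look roughly like the sketch below. The datastore mount point /mnt/datastore/PBS and the key location on the PBS host are assumptions, and the encryption key has to be copied over from the PVE node first:

Code:
# copy /etc/pve/priv/storage/pbs-nas.enc from the PVE node to the PBS host, then:
mkdir -p /root/recover && cd /root/recover   # the image lands in the working directory
proxmox-backup-debug recover index \
  /mnt/datastore/PBS/vm/4120/2024-03-24T20:01:50Z/drive-scsi0.img.fidx \
  /mnt/datastore/PBS/.chunks \
  --ignore-missing-chunks --ignore-corrupt-chunks \
  --keyfile /root/pbs-nas.enc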

Edit: Clarified that this needs to be executed on the PBS host.
 
Thanks! I'm able to restore the corrupted disk. I can't understand why the error happened, but I will look into it. I get the same OpenSSL error without the ignore flags (and a kernel panic if I try to start the VM).
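
In case it helps others: after copying the recovered raw image back to the PVE node, it can be attached to the VM again with something like the following (the output filename is an assumption; adjust it to whatever the tool actually produced):

Code:
# on the PVE node; imports the raw image as a new disk of VM 4120 on local-zfs
qm importdisk 4120 drive-scsi0.img local-zfs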
 
