error on backup - failed - job failed with err -61 - No data available

informant

Hi, for a few days now we have been getting the following error on backup. What can we do to fix it? All other backups run normally. Regards

INFO: Starting Backup of VM 5134 (qemu)
INFO: Backup started at 2025-10-19 22:07:27
INFO: status = running
INFO: VM Name: noc.domain.de
INFO: include disk 'virtio0' 'local:5134/vm-5134-disk-0.qcow2' 250G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/5134/2025-10-19T20:07:27Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '60e457dd-61b0-473c-a101-ef2e284fe514'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: OK (4.5 GiB of 250.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 4.5 GiB dirty of 250.0 GiB total
INFO: 7% (348.0 MiB of 4.5 GiB) in 3s, read: 116.0 MiB/s, write: 116.0 MiB/s
INFO: 13% (628.0 MiB of 4.5 GiB) in 6s, read: 93.3 MiB/s, write: 93.3 MiB/s
INFO: 19% (896.0 MiB of 4.5 GiB) in 9s, read: 89.3 MiB/s, write: 89.3 MiB/s
ERROR: job failed with err -61 - No data available
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 5134 failed - job failed with err -61 - No data available
INFO: Failed at 2025-10-19 22:07:37
 
Hi,

Does the backup always fail at the same percentage (19%), or does it vary from run to run?
Could you post the logs of the PVE node and the PBS node from the time of the backup (e.g. via journalctl --since "2025-10-19 22:07:00" --until "2025-10-19 22:08:00")?
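For example, run on both the PVE node and the PBS node (the time window below is taken from the failed run above; adjust it as needed):
Code:
journalctl --since "2025-10-19 22:07:00" --until "2025-10-19 22:08:00"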

Have you tried making a full backup of this VM, and if so, does that work or fail as well?
 
Hi, yes, it fails every time at 19%.
PBS log:
Okt 19 22:07:37 backup-srv proxmox-backup-proxy[659]: TASK ERROR: removing backup snapshot "/mnt/backup/vm/5134/2025-10-19T20:07:27Z" failed - Directory not empty (os error 39)
PVE node log:
Okt 19 22:07:37 prometheus kernel: sd 2:2:0:0: [sdb] tag#712 BRCM Debug mfi stat 0x2d, data len requested/completed 0x40000/0x0
Okt 19 22:07:37 prometheus kernel: sd 2:2:0:0: [sdb] tag#712 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Okt 19 22:07:37 prometheus kernel: sd 2:2:0:0: [sdb] tag#712 Sense Key : Medium Error [current]
Okt 19 22:07:37 prometheus kernel: sd 2:2:0:0: [sdb] tag#712 Add. Sense: Unrecovered read error
Okt 19 22:07:37 prometheus kernel: sd 2:2:0:0: [sdb] tag#712 CDB: Read(16) 88 00 00 00 00 02 5b 8f 86 60 00 00 02 00 00 00
Okt 19 22:07:37 prometheus kernel: critical medium error, dev sdb, sector 10126067296 op 0x0:(READ) flags 0x4000 phys_seg 64 prio class 0
Okt 19 22:07:37 prometheus pvescheduler[2793596]: ERROR: Backup of VM 5134 failed - job failed with err -61 - No data available

A full backup gives the same error, just a little later percentage-wise:
INFO: 22% (56.6 GiB of 250.0 GiB) in 2m 44s, read: 731.7 MiB/s, write: 135.4 MiB/s
ERROR: job failed with err -61 - No data available
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 5134 failed - job failed with err -61 - No data available

regards

PS: Does "critical medium error, dev sdb" refer to the node's storage HDD or to the VM's HDD?
 
Hi, we have 40 TB of storage and all other VMs run normally here; only this one has the problem. How can we fix it, @fiona? Regards
PS: I have checked all HDDs with MegaCli; no SMART errors or other errors are listed for any HDD in the storage.
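This is roughly what I looked at (the binary may be called MegaCli, MegaCli64 or megacli depending on the installation):
Code:
MegaCli64 -PDList -aALL | egrep -i "slot number|device id|media error|other error|predictive failure|firmware state"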
 
/dev/sdb 29T 19T 9,4T 67% /var/lib/vz
I have to stop all VMs first, then I can start fsck. The storage is RAID 60 and the HDDs in it are all OK, so it may also be a filesystem error; I hope fsck can solve it. I will do it tonight.
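Roughly the steps I plan (just a sketch; 5134 stands for each VM that has a disk on this storage):
Code:
# shut down every VM with a disk on /var/lib/vz
qm shutdown 5134
# unmount the storage, then check it (-c also scans for bad blocks)
umount /var/lib/vz
fsck -f -c -y /dev/sdb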
 
/dev/sdb 29T 19T 9,4T 67% /var/lib/vz

So it appears that /dev/sdb is mounted on /var/lib/vz, which is the PVE default location for the content types iso, backup and vztmpl, not for VM images.
If you were doing a vzdump backup to the local (PVE) storage, that may explain the error, but you appear to be using PBS.

Care to share your setup?
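For example the output of (assuming VM 5134 is the affected one):
Code:
cat /etc/pve/storage.cfg
pvesm status
qm config 5134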
 
Hi, this node has a RAID 1 as the system disk for Proxmox, plus an LSI RAID controller with 32 TB of storage over 12G SAS. This RAID 60 storage is mounted as a directory ('vz' dir) on the server for all VMs. For backups we use PBS. Regards
 
What if you try reading the file from start to finish, e.g. dd if=/var/lib/vz/images/5134/vm-5134-disk-0.qcow2 of=/dev/null bs=1M? If that also fails, I suggest using qemu-img convert with the --salvage option to copy to a different physical disk while the VM is shut down.
 
@fiona, can you please post the complete command for qemu-img convert with --salvage and explain what it does? Regards

dd if=/var/lib/vz/images/5134/vm-5134-disk-0.qcow2 of=/dev/null bs=1M
dd: error reading '/var/lib/vz/images/5134/vm-5134-disk-0.qcow2': Input/output error
57905+1 records in
57905+1 records out
60717826048 bytes (61 GB, 57 GiB) copied, 72.4797 s, 838 MB/s
 
@fiona, can you please post the complete command for qemu-img convert with --salvage and explain what it does? Regards
This needs to be done while the VM is shut down, because the image can only be read consistently if nothing else changes it! For example:
Code:
qemu-img convert -p -f qcow2 --salvage /var/lib/vz/images/5134/vm-5134-disk-0.qcow2 -O qcow2 /path/to/target.qcow2
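Afterwards you can optionally verify the new image, for example with:
Code:
qemu-img check -f qcow2 /path/to/target.qcow2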

Even if checker tools report that everything is okay, the kernel reports read errors for this sector, so I'd be very wary.
 
I have checked the filesystem of the RAID storage (fsck -f -c -y /dev/sdb) after unmounting /dev/sdb...
/dev/sdb: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb: 134/488308736 files (0.7% non-contiguous), 4661011542/7812939776 blocks
It was repaired. I will restart the system completely and then check again whether the backup runs. Regards
 
Info: after fsck and reboot, the error is the same. I am now testing qemu-img convert with the VM offline. The kernel reports the same error: critical medium error, dev sdb, sector ...
How can I find out which HDD number in the storage it is?
 
I am now testing qemu-img convert with the VM offline.
The command suggested above by fiona:
qemu-img convert -p -f qcow2 --salvage /var/lib/vz/images/5134/vm-5134-disk-0.qcow2 -O qcow2 /path/to/target.qcow2
will create a new qcow2 image file at /path/to/target.qcow2, but still leaves the original /var/lib/vz/images/5134/vm-5134-disk-0.qcow2 intact.

So to be able to test, you would need to set up a VM (preferably create a new VM, so that you still have the original) that uses the newly created disk image at /path/to/target.qcow2.
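For example, something along these lines (999 is just a placeholder VMID, 'local' is the directory storage seen earlier in the thread, and the resulting disk name may differ; check the VM's unused disk entry):
Code:
# import the salvaged image into a new test VM
qm importdisk 999 /path/to/target.qcow2 local --format qcow2
# then attach it, e.g. as virtio0 (adjust the disk name to what the import created)
qm set 999 --virtio0 local:999/vm-999-disk-0.qcow2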

I still believe that most likely the /dev/sdb disk is failing. To test correctly, you should power off the RAID and test the disk directly on a system. The RAID controller should also be tested extensively using different disks.
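If taking the array offline is not an option right away, you could at least query SMART per physical disk through the controller, e.g. (replace 0 with each disk's Device Id from the MegaCli output):
Code:
smartctl -a -d megaraid,0 /dev/sdb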

Good luck.
 
Sorry, I can't check all disks separately; too many VMs are running here, so that is not an option.
I have created a new qcow2 with @fiona's command, renamed the original to *.old and the new file to the original name, and then started the VM. I am testing the backup now.
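A sketch of what I did (the target path is the placeholder from the convert example above):
Code:
qm shutdown 5134
mv /var/lib/vz/images/5134/vm-5134-disk-0.qcow2 /var/lib/vz/images/5134/vm-5134-disk-0.qcow2.old
mv /path/to/target.qcow2 /var/lib/vz/images/5134/vm-5134-disk-0.qcow2
qm start 5134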
regards