Backup job failed with err -61 - new VM!

Zboj

Dec 17, 2019
Hello,

I have a problem with a backup of a newly created VM. I have read two topics here, but they refer to different issues:

https://forum.proxmox.com/threads/backup-fail.51329/
https://forum.proxmox.com/threads/error-job-failed-with-err-61-no-data-available.48449/

I have a Proxmox host with several VMs working well and being backed up to a few local and remote NFS storages. They have been working fine for a long time.

I decided to create a new ~350-400 GB VM for a new system. Everything went fine, but when I tried to make the first backup once everything was set up and ready, I faced error -61. I rebuilt the VM a few times with slightly different sizes (350-400 GB). Every time I try to back it up, it stops at the same moment: at 18% of the total backup for 350 GB, at 16% for 400 GB. The target does not matter - the error happens on both local and NFS backup discs. It looks like below:

INFO: creating archive '/backup_disc1/dump/vzdump-qemu-230-2019_12_17-01_56_12.vma.lzo'
INFO: started backup task '7890402f-a024-4276-91ef-85e270e9e648'
INFO: status: 0% (409010176/375809638400), sparse 0% (163667968), duration 3, read/write 136/81 MB/s
(....)
INFO: status: 18% (70019252224/375809638400), sparse 0% (3062001664), duration 974, read/write 62/57 MB/s
ERROR: job failed with err -61 - No data available
INFO: aborting backup job
ERROR: Backup of VM 230 failed - job failed with err -61 - No data available
INFO: Failed at 2019-12-17 02:12:40
INFO: Backup job finished with errors
TASK ERROR: job errors

The same happens when I back up to another local backup disc:

INFO: creating archive '/backup_disc2/dump/vzdump-qemu-230-2019_12_17-17_30_36.vma.lzo'
INFO: started backup task 'e26a2b9e-63df-4318-89eb-b93d1e9c6e77'
INFO: status: 0% (91095040/375809638400), sparse 0% (6877184), duration 3, read/write 30/28 MB/s
(...)
INFO: status: 18% (70019186688/375809638400), sparse 0% (3062022144), duration 1000, read/write 83/77 MB/s
ERROR: job failed with err -61 - No data available
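
Interestingly - if my math is right - both failure points land at roughly the same absolute offset on the source storage, which makes me suspect a fixed bad region rather than the backup targets. A quick sanity check in Python (the 400 GB total size is my assumption; only the 350 GB total appears in the logs above):

# Do both failure percentages map to about the same byte offset?
size_350 = 375809638400         # total bytes, from the logs above
size_400 = 400 * 1024**3        # assumed total for the 400 GB build

print(size_350 * 0.18 / 2**30)  # ~63.0 GiB
print(size_400 * 0.16 / 2**30)  # ~64.0 GiB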

The LVM storage for the VMs sits on a local RAID 5 of 4 x 600 GB SAS discs. According to ssacli status, the discs look OK. I even rebuilt the array a few days ago; the problem was not solved.

logicaldrive 1 (1.64 TB, RAID 5, OK)

physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS HDD, 600 GB, OK)

I would be happy if anyone could explain what "err -61 - No data available" means. I have found no explanation for it.

In my situation the new virtual machines work, but I cannot make their first backup. So no, I cannot restore the VM from a previous backup, as is suggested in one of the conversations linked above. I suppose there might be a strange problem with the RAID, but I have no idea how to trace and fix it when the discs look OK and all other VMs, including the new one, work fine. Or could it be some kind of soft limitation?

I would be happy to hear any tips before I eventually exchange the whole local SB40C module for a backup one (which also has bigger discs and a better RAID controller).

Rgrds,

Peter
 
Does your backup storage have enough free space?

Are there any pending VM config changes? I have seen that cause issues with backups as well, but that was on older versions.
 
Sure, there is plenty of space both on the backup discs and on the LVM.

Please note the error message:

"ERROR: job failed with err -61 - No data available". It says there is no data to back up - as if there were a black hole in the VM volume. It says nothing about a lack of free space.
 

Just checking, as it's a very common issue; it's worth reviewing the basics.

Are there any pending VM config changes?
 
As I said, it is the only new VM, so there are no pending VM config changes. And as you can see, it always stops at 18%, i.e. at the same moment regardless of the target.

So:
1. It is the first backup ever of a new VM, so I cannot restore from a backup.
2. The problem is not starting the backup but that it always breaks at 18%.

I tried to find a way to safely examine the mounted LVM volume but found no useful options.
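
The closest I got is a rough sketch of my own (read-only, so it should be safe, but treat it as an untested idea): with the VM shut down, read the underlying device sequentially and see whether, and at what offset, a read fails. Something like this, using the /dev/dm-13 I mention below:

import os

DEV = "/dev/dm-13"   # the LV backing the VM disk; shut the VM down first
CHUNK = 1024 * 1024  # read in 1 MiB chunks

fd = os.open(DEV, os.O_RDONLY)
offset = 0
try:
    while True:
        try:
            data = os.read(fd, CHUNK)
        except OSError as e:
            print("read failed at byte %d (%.1f GiB): errno %d (%s)"
                  % (offset, offset / 2**30, e.errno, e.strerror))
            break
        if not data:
            print("reached end of device with no read errors")
            break
        offset += len(data)
finally:
    os.close(fd)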

My guess is that it might be an effect of extending one VM partition a few weeks ago. Maybe it hit the LVM somehow? When I shut the VM down and ran fsck on its /dev/dm-13, I found errors, but fixing them did not solve the problem.

This is the first time I have faced such a problem, so I am trying to understand it.

Anyway - for production, and just not to waste more time - I have rebuilt the whole server: new system, storage and LVM discs, and a fresh Proxmox 6.1 install (which, by the way, moved me to Debian 10 in the background, as the previous install was not a dist-upgraded one). Now I am restoring images and will build the new machine again. It should work as usual.

The old RAID 5 LVM is, however, connected to one of my backup machines, so I can play with it if anyone has helpful tips.
 
Hello,
I'm facing the same issue, but with an old VM that was backed up with no issue until yesterday.
How did you get rid of it?
Thanks for the help
hervé
 