Hello,
I have a problem backing up a newly created VM. I read two threads here, but they refer to different issues:
https://forum.proxmox.com/threads/backup-fail.51329/
https://forum.proxmox.com/threads/error-job-failed-with-err-61-no-data-available.48449/
I have a Proxmox host with several VMs that work well and are backed up to a few local and remote NFS storages. They have been working fine for a long time.
I decided to create a new ~350-400 GB VM for a new system. Everything went fine, but when I tried to make the first backup once everything was set up, I hit error -61. I rebuilt the VM a few times with slightly different disk sizes (350-400 GB); every time I try to back it up, it stops at the same point: at 18% for the 350 GB disk, at 16% for the 400 GB disk. The backup target makes no difference: the error happens with both local and NFS backup discs. It looks like this:
INFO: creating archive '/backup_disc1/dump/vzdump-qemu-230-2019_12_17-01_56_12.vma.lzo'
INFO: started backup task '7890402f-a024-4276-91ef-85e270e9e648'
INFO: status: 0% (409010176/375809638400), sparse 0% (163667968), duration 3, read/write 136/81 MB/s
(....)
INFO: status: 18% (70019252224/375809638400), sparse 0% (3062001664), duration 974, read/write 62/57 MB/s
ERROR: job failed with err -61 - No data available
INFO: aborting backup job
ERROR: Backup of VM 230 failed - job failed with err -61 - No data available
INFO: Failed at 2019-12-17 02:12:40
INFO: Backup job finished with errors
TASK ERROR: job errors
The same happens when I back up to another local backup disc:
INFO: creating archive '/backup_disc2/dump/vzdump-qemu-230-2019_12_17-17_30_36.vma.lzo'
INFO: started backup task 'e26a2b9e-63df-4318-89eb-b93d1e9c6e77'
INFO: status: 0% (91095040/375809638400), sparse 0% (6877184), duration 3, read/write 30/28 MB/s
(...)
INFO: status: 18% (70019186688/375809638400), sparse 0% (3062022144), duration 1000, read/write 83/77 MB/s
ERROR: job failed with err -61 - No data available
The LVM storage for the VMs is located on a local RAID 5 array of 4 x 600 GB SAS discs. According to ssacli, the discs look OK. I even rebuilt the array a few days ago, but the problem was not solved.
logicaldrive 1 (1.64 TB, RAID 5, OK)
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS HDD, 600 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS HDD, 600 GB, OK)
I would be happy if anyone could explain what "err -61 - No data available" means; I have found no explanation for it.
In my situation the new virtual machines work, but I cannot make a first backup. So no, I can't restore the VM from a previous backup, as suggested in one of the threads linked above. I suppose there might be a strange problem with the RAID, but I have no idea how to trace and fix it when the discs look OK and all the VMs, including the new one, work fine. Or could it be some kind of software limitation?
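One thing I am considering, to rule the backup software in or out: reading the VM's logical volume end to end outside of vzdump and watching for media errors. A rough sketch of what I mean (the LV path below is a guess based on a default Proxmox LVM layout, not my actual setup; it would need adjusting):

```shell
#!/bin/sh
# Sketch: read the VM's disk end to end to see whether the -61 (ENODATA)
# error reproduces outside vzdump. The LV path is hypothetical - adjust
# DEV to the actual logical volume of the affected VM.
DEV="/dev/pve/vm-230-disk-0"

if [ -b "$DEV" ]; then
    # conv=noerror makes dd keep reading past bad blocks, so every failing
    # region gets reported instead of only the first one.
    dd if="$DEV" of=/dev/null bs=1M conv=noerror
    echo "dd exit status: $?"
    # The kernel log should show the exact failing sector, if any.
    dmesg | tail -n 20
else
    echo "block device $DEV not found - set DEV to the VM's actual LV"
fi
```

If dd fails around the same offset where the backup stops (~70 GB in), that would point at the array rather than at vzdump.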
I would be happy to hear any tips before I eventually exchange the whole local SB40C module for a spare one (which also has bigger discs and a better RAID controller).
Regards,
Peter