Restore stopped with unexpected status

Frigg

Hello,

I'm trying to restore a VM to another storage.
I use this command line:
qmrestore --storage local-sata2GO /mnt/nas2/dump/vzdump-qemu_140....vmz.gz 500

The problem is that the process stops in the middle with only the message "unexpected status".
How can I get rid of this error, which occurs with both the new version (5.1) and the previous one (4.4)?

NB: As the remove button was unusable, to delete what had already been copied to my local storage I had to create a VM with ID 500 on the same storage, run qm rescan --vmid 500, and then delete the VM.
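In concrete terms, the cleanup looked roughly like this (VMID 500 is simply the ID the failed restore had used; as far as I know, on these versions destroying the VM also removes its unused disks):

qm create 500            # empty placeholder VM on the node holding the leftover volume
qm rescan --vmid 500     # attaches the orphaned volumes to VM 500 as unused disks
qm destroy 500           # deleting the VM also removes the now-referenced volumes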

Thank you for your help,

(Screenshot attached: 2018-02-09_23h25_49.png)
 
I would like to add that I have tried restoring to the same node the backup was made from, and to another datacenter and node. Both failed.
I have tried restoring the same backup to the same datacenter and node the backup was made from, using the GUI: NFS storage (where the backup is) > select my backup > restore to another storage and another VMID.
It works perfectly!

But I want to test the restore on another (new) datacenter and node in case the first one crashes.
As the GUI doesn't permit it, could you tell me what I did wrong in my command line?

A workaround is to create, on the new datacenter and node, an NFS storage pointing to where the backups of the first datacenter and node are, and then use the GUI to restore the backup on the new datacenter and node. But I would really be interested in understanding what I did wrong in my command line.
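Roughly, that workaround comes down to something like this on the new node (the storage name, server address and export path below are placeholders, not my real values):

pvesm add nfs nas2-backups --server <nas address> --export /export/dump --content backup

The backup then shows up under that storage in the GUI and can be restored to the target storage and VMID from there.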
 
Are there any errors in the syslog during the restore?
Can you post your storage config (/etc/pve/storage.cfg)?
Also the output of "lvs" and "vgs".
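For reference, something like this run on the node doing the restore should collect all of it (assuming syslog is written to /var/log/syslog, as on a default installation):

grep qmrestore /var/log/syslog    # restore-related syslog entries
cat /etc/pve/storage.cfg          # storage configuration
lvs                               # logical volumes
vgs                               # volume groups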
 
Hello, thank you for your answer.
You'll find the syslog below.
Restoration began at 12:39:09 and failed at 12:46:43.
It says that it was interrupted by a signal! But I never asked for that!


Feb 12 12:33:02 pve pvedaemon[1583]: <root@pam> successful auth for user 'root@pam'
Feb 12 12:39:09 pve qmrestore[15271]: <root@pam> starting task UPID:pve:00003BA8:00129F81:5A817CDD:qmrestore:500:root@pam:
Feb 12 12:39:38 pve pvestatd[1569]: status update time (23.784 seconds)
Feb 12 12:40:30 pve pvestatd[1569]: status update time (51.786 seconds)
Feb 12 12:41:12 pve systemd-timesyncd[895]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.008s/0.002s/+0ppm
Feb 12 12:41:20 pve pveproxy[10316]: worker exit
Feb 12 12:41:20 pve pveproxy[1593]: worker 10316 finished
Feb 12 12:41:20 pve pveproxy[1593]: starting 1 worker(s)
Feb 12 12:41:20 pve pveproxy[1593]: worker 15508 started
Feb 12 12:41:26 pve pvestatd[1569]: status update time (55.474 seconds)
Feb 12 12:42:23 pve pvestatd[1569]: status update time (57.553 seconds)
Feb 12 12:43:21 pve pvestatd[1569]: status update time (57.386 seconds)
Feb 12 12:43:41 pve pvedaemon[1581]: <root@pam> successful auth for user 'root@pam'
Feb 12 12:44:26 pve pvestatd[1569]: status update time (65.132 seconds)
Feb 12 12:44:26 pve pveproxy[14705]: worker exit
Feb 12 12:44:36 pve pvedaemon[1583]: <root@pam> end task UPID:pve:0000363D:0010D2BA:5A817841:vncshell::root@pam: OK
Feb 12 12:45:14 pve pvestatd[1569]: status update time (48.255 seconds)
Feb 12 12:45:54 pve smartd[1275]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 71 to 70
Feb 12 12:45:54 pve smartd[1275]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 29 to 30
Feb 12 12:45:56 pve postfix/qmgr[1550]: DC31C1C0EDB: from=<>, size=30553, nrcpt=1 (queue active)
Feb 12 12:46:16 pve postfix/smtp[15985]: DC31C1C0EDB: to=<root@pve.mtg.local>, relay=none, delay=165460, delays=165440/0.01/20/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=pve.mtg.local type=MX: Host not found, try again)
Feb 12 12:46:41 pve qmrestore[15272]: interrupted by signal
Feb 12 12:46:43 pve pvestatd[1569]: status update time (88.471 seconds)
Feb 12 12:47:02 pve pvestatd[1569]: status update time (19.039 seconds)
Feb 12 12:48:19 pve pvestatd[1569]: status update time (76.916 seconds)





You'll find attached the storage config, vgs and lvs output.
 

Attachments

  • storage.txt (501 bytes)
  • vgs.txt (220 bytes)
  • lvs.txt (3.3 KB)
It says that it was interrupted by a signal! But I never asked for that!
This happens normally if the process gets killed by Ctrl-C or by pressing the 'stop' button in the web interface.
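Another thing that can send an unrequested signal, purely an assumption to rule out rather than anything confirmed here, is the kernel's OOM killer; it leaves traces in the kernel log:

dmesg -T | grep -i -E 'out of memory|killed process'    # check whether the kernel killed the process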

But you seem to be mixing output from different hosts, as your screenshot says something about a storage 'VM'?
Also, the whole restore task log would be helpful.
 
Hello,
That is right, I mixed output from two machines.
Here is what you asked for, from the machine in my first messages.

Restoration began at 11:43 and failed at 12:15:54.
(Screenshot attached: upload_2018-2-15_17-52-43.png)
I took care not to press the stop button or Ctrl-C.
I just left things as they were and was away when the command failed.

This time there is absolutely no message in the syslog, or I don't see it.
 

Attachments

  • vgs.txt (80 bytes)
  • storage.txt (273 bytes)
  • lvs.txt (837 bytes)
Btw. you can double-click on the task to open the task log; please do that and post the complete log.
 
Hello, you'll find the task logs attached.
The logs displayed on screen for this task are at the end of the file.
A warning message appears: "WARNING: Sum of all thin volume sizes (1.56 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (931.26 GiB)!"
But the existing VMs and the one to restore are thin-provisioned and don't exceed the volume group size.
I have run a test without any other VM, and the restore failed the same way.
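To double-check that, the actual allocation of the thin pool can be read directly from LVM (a quick sketch, assuming the pool is pve/data as named in the warning):

lvs -a -o lv_name,lv_size,data_percent,metadata_percent pve    # real usage of the thin pool and its volumes
vgs pve                                                        # free space left in the volume group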
 

Attachments

  • Task logs.txt (25.6 KB)
This is not the complete task log; the last line says:
progress 10% (read 85899345920 bytes, duration 1684 sec)

which indicates the restore is still in progress.
 
This is the complete task log! That is the way it stops.
Weird, never seen it this way before, but without more information it is really impossible to say what happened.
Are there any logs from that point in time which are relevant?
 
:/
Have you tested a restore where the backup is on a remote storage? (Locally, it works without problem; with a remote storage mounted, it fails. The workaround is to use the GUI, create an NFS storage pointing to the remote storage, and then restore.)
Can you show me the command line you executed?
 
Have you tested a restore where the backup is on a remote storage?
Not in this case, but you also restore to a local LVM-thin storage?

Can you show me the command line you executed?
Like yours:

qmrestore --storage <mylvmthinstorage> <path to vma.lzo> <vmid>
 
Have you tested a restore where the backup is on a remote storage?
Oh, I get it now. You should check your connection to the storage then and see if it is overloaded.
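One rough way to look at the NFS link itself (just a sketch; the backup file name is a placeholder, and nfsiostat comes with the NFS client tools):

dd if=/mnt/nas2/dump/<backup file> of=/dev/null bs=1M    # raw read throughput from the NFS mount
nfsiostat 5                                              # per-mount NFS statistics every 5 seconds, while a restore runs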
 
What do you want me to do to check my connection? I don't see any problem, or I don't know how to do that.
The odd thing is that with the same connection, the restore works when using the GUI but not from the command line.
 
Hi all,

I have the same issue on our machine with the same error. Is there any update on this issue?

Regards,
 
Hello,
This issue was on the previous Proxmox version. I haven't tested it on the latest one.
The problem is that restoring from the command line may be buggy.
The workaround is to use the GUI. To be able to restore a VM on another datacenter, you must copy the backup files to an NFS storage mounted on the destination datacenter.
 
