qmrestore --storage problem

Mr.T.

Hi,
I'm just trying out vzdump & qmrestore backups between different hosts (backup host: Proxmox 5.4, restore host: 6.0).
I've dumped a VM backup on one host, copied the log & .lzo files to a directory on the other host, and when I issue the restore command I get:
Code:
# qmrestore vzdump-qemu-110-2019_09_05-11_22_45.vma.lzo 110 --force true --storage local-lvm
restore vma archive: lzop -d -c /mnt/x/dump/vzdump-qemu-110-2019_09_05-11_22_45.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp22542.fifo - /var/tmp/vzdumptmp22542
CFG: size: 1945 name: qemu-server.conf
DEV: dev_id=1 size: 42949672960 devname: drive-scsi0
DEV: dev_id=2 size: 12884901888 devname: drive-scsi2
CTIME: Thu Sep  5 11:22:47 2019
command 'set -o pipefail && lzop -d -c /mnt/x/dump/vzdump-qemu-110-2019_09_05-11_22_45.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp22542.fifo - /var/tmp/vzdumptmp22542' failed: storage 'local-zfs' does not exists

local-zfs was the local storage on the host where I made the backup; however, I would think that --storage should override it?
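For reference, the config embedded in the archive can be dumped to see which storage it still references (a sketch using the paths from above; this assumes the vma tool on the restore host provides the config subcommand, and the .lzo archive has to be decompressed first):

Bash:
# Decompress the .lzo archive, then dump the embedded qemu-server.conf;
# it still references the original storage (local-zfs) rather than the
# --storage target given to qmrestore.
lzop -d -c /mnt/x/dump/vzdump-qemu-110-2019_09_05-11_22_45.vma.lzo > /tmp/vzdump-qemu-110.vma
vma config /tmp/vzdump-qemu-110.vma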
Code:
2019-09-05 11:22:45 INFO: Starting Backup of VM 110 (qemu)
2019-09-05 11:22:45 INFO: status = stopped
2019-09-05 11:22:46 INFO: update VM 110: -lock backup
2019-09-05 11:22:46 INFO: backup mode: stop
2019-09-05 11:22:46 INFO: ionice priority: 7
2019-09-05 11:22:46 INFO: VM Name: exporttest
2019-09-05 11:22:46 INFO: include disk 'scsi0' 'local-zfs:vm-110-disk-0' 40G
2019-09-05 11:22:46 INFO: include disk 'scsi2' 'local-zfs:vm-110-disk-2' 12G
2019-09-05 11:22:46 INFO: creating archive '/mnt/backup/dump/vzdump-qemu-110-2019_09_05-11_22_45.vma.lzo'
2019-09-05 11:22:46 INFO: starting kvm to execute backup task
2019-09-05 11:22:47 INFO: started backup task '04d4f40e-c994-4afc-ad11-c63a7d5d8e8d'
2019-09-05 11:22:50 INFO: status: 0% (498139136/55834574848), sparse 0% (186531840), duration 3, read/write 166/103 MB/s
2019-09-05 11:22:53 INFO: status: 1% (783024128/55834574848), sparse 0% (189505536), duration 6, read/write 94/93 MB/s
2019-09-05 11:22:56 INFO: status: 2% (1117519872/55834574848), sparse 0% (191987712), duration 9, read/write 111/110 MB/s
2019-09-05 11:23:01 INFO: status: 3% (1771503616/55834574848), sparse 0% (267063296), duration 14, read/write 130/115 MB/s
2019-09-05 11:23:04 INFO: status: 4% (2463825920/55834574848), sparse 1% (722841600), duration 17, read/write 230/78 MB/s
2019-09-05 11:23:07 INFO: status: 9% (5122359296/55834574848), sparse 5% (3266482176), duration 20, read/write 886/38 MB/s
2019-09-05 11:23:10 INFO: status: 14% (8353153024/55834574848), sparse 11% (6487035904), duration 23, read/write 1076/3 MB/s
2019-09-05 11:23:13 INFO: status: 21% (11783307264/55834574848), sparse 17% (9890979840), duration 26, read/write 1143/8 MB/s
2019-09-05 11:23:16 INFO: status: 27% (15469182976/55834574848), sparse 24% (13576847360), duration 29, read/write 1228/0 MB/s
2019-09-05 11:23:19 INFO: status: 34% (19069534208/55834574848), sparse 30% (17177169920), duration 32, read/write 1200/0 MB/s
2019-09-05 11:23:22 INFO: status: 39% (22006398976/55834574848), sparse 36% (20114026496), duration 35, read/write 978/0 MB/s
2019-09-05 11:23:25 INFO: status: 44% (24974327808/55834574848), sparse 41% (23081951232), duration 38, read/write 989/0 MB/s
2019-09-05 11:23:28 INFO: status: 50% (28218490880/55834574848), sparse 47% (26326106112), duration 41, read/write 1081/0 MB/s
2019-09-05 11:23:31 INFO: status: 56% (31396790272/55834574848), sparse 52% (29504401408), duration 44, read/write 1059/0 MB/s
2019-09-05 11:23:34 INFO: status: 63% (35415982080/55834574848), sparse 60% (33523560448), duration 47, read/write 1339/0 MB/s
2019-09-05 11:23:37 INFO: status: 70% (39304888320/55834574848), sparse 67% (37412458496), duration 50, read/write 1296/0 MB/s
2019-09-05 11:23:40 INFO: status: 75% (42128310272/55834574848), sparse 72% (40235872256), duration 53, read/write 941/0 MB/s
2019-09-05 11:23:43 INFO: status: 82% (45786791936/55834574848), sparse 78% (43889991680), duration 56, read/write 1219/1 MB/s
2019-09-05 11:23:46 INFO: status: 88% (49431969792/55834574848), sparse 85% (47535132672), duration 59, read/write 1215/0 MB/s
2019-09-05 11:23:49 INFO: status: 95% (53502541824/55834574848), sparse 92% (51605553152), duration 62, read/write 1356/0 MB/s
2019-09-05 11:23:52 INFO: status: 100% (55834574848/55834574848), sparse 96% (53937569792), duration 65, read/write 777/0 MB/s
2019-09-05 11:23:52 INFO: transferred 55834 MB in 65 seconds (858 MB/s)
2019-09-05 11:23:52 INFO: stopping kvm after backup task
2019-09-05 11:23:53 INFO: archive file size: 797MB
2019-09-05 11:23:54 INFO: Finished Backup of VM 110 (00:01:09)

Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: dir
        path /mnt/x
        content iso,backup,vztmpl,snippets,rootdir,images
        maxfiles 1
        shared 0
 
Please provide the config of the backup (GUI -> Backup file -> Show Configuration) and the storage config of the node where the backup was created.
In a quick test (PVE 6 to PVE 6 backup/restore) I could not reproduce it, even though the original storage was not available.
 
Similar problem here (similar enough not to start a new thread):

Code:
# qmrestore /mnt/dump/vzdump-qemu-199-2019_08_15-19_41_55.vma.gz 199 --force true --storage lvmt_containers1-nvme1
restore vma archive: zcat /mnt/storage/dump/vzdump-qemu-199-2019_08_15-19_41_55.vma.gz | vma extract -v -r /var/tmp/vzdumptmp15652.fifo - /var/tmp/vzdumptmp15652
CFG: size: 1020 name: qemu-server.conf
DEV: dev_id=1 size: 2684354560 devname: drive-scsi0
CTIME: Thu Aug 15 19:41:56 2019
command 'set -o pipefail && zcat /mnt/storage/dump/vzdump-qemu-199-2019_08_15-19_41_55.vma.gz | vma extract -v -r /var/tmp/vzdumptmp15652.fifo - /var/tmp/vzdumptmp15652' failed: storage 'lvm_storage1-md1' does not exists

The backup was originally created when the VM was on lvm_storage1-md1. Here's the config:

Code:
agent: 1
boot: cdn
bootdisk: scsi0
cipassword: xxxxxxxxxxxxxx
ciuser: ubuntu
cores: 2
ide0: lvm_storage1-md1:199/vm-199-cloudinit.qcow2,media=cdrom
ipconfig0: ip=10.10.10.199/24,gw=10.10.10.1
memory: 2048
name: base-vm
net0: virtio=xxxxxxxxxxxxxx,bridge=vmbr1
numa: 0
ostype: l26
scsi0: lvm_storage1-md1:199/vm-199-disk-0.qcow2,size=2560M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=xxxxxxxxxxxxxx
sockets: 1
sshkeys: ssh-rsaxxxxxxxxxxxxxx
vmgenid: xxxxxxxxxxxxxx
#qmdump#map:scsi0:drive-scsi0:lvm_storage1-md1:qcow2:

I want to restore to either `dir-zfs-storage-sdx4` or `lvmt_containers1-nvme1`.
 
This is because of the cloudinit disk. It should be fixed in qemu-server 6.0-8.
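For anyone hitting this, a quick way to check whether the restore host already has a new enough qemu-server (the fix is said to be in 6.0-8 and later), and to pull in updates if not, is roughly:

Bash:
# Show the installed qemu-server package version
pveversion -v | grep qemu-server
# Upgrade the node if it is older than 6.0-8
apt update && apt dist-upgrade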
 
Please provide the config of the backup (GUI -> Backup file -> Show Configuration) and the storage config of the node where the backup was created.
In a quick test (PVE 6 to PVE 6 backup/restore) I could not reproduce it, even though the original storage was not available.

Apologies for not replying earlier. I can't provide it anymore since I wiped my old Proxmox installation, and to 'mitigate' the issue I just named my new storage local-zfs even though it's LVM.

But it could be related to cloud-init, as my images were using it as well.
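As a sketch of that mitigation done on the CLI (the type and the vgname/thinpool values below are placeholders for an LVM-thin setup and must match the actual pool), the new storage can simply be registered under the old ID so anything still referencing it finds it:

Bash:
# Register the new storage under the old ID "local-zfs" so restores that
# still reference that ID can find it; adjust type and parameters to your setup.
pvesm add lvmthin local-zfs --vgname pve --thinpool data --content images,rootdir
# Verify the storage ID is now known
pvesm status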
 
This is because of the cloudinit disk. It should be fixed in qemu-server 6.0-8.

We've upgraded to the latest release (6.0-12), but this still fails, even when setting the storage on the command line.

Code:
root@i:/mnt/pve/images/dump# qmrestore vzdump-qemu-162-2019_11_22-02_35_22.vma.gz  90003 --storage core-hdd
restore vma archive: zcat /mnt/pve/images/dump/vzdump-qemu-162-2019_11_22-02_35_22.vma.gz | vma extract -v -r /var/tmp/vzdumptmp29977.fifo - /var/tmp/vzdumptmp29977
CFG: size: 525 name: qemu-server.conf
CFG: size: 34 name: qemu-server.fw
DEV: dev_id=1 size: 2361393152 devname: drive-scsi0
CTIME: Fri Nov 22 02:35:24 2019
command 'set -o pipefail && zcat /mnt/pve/images/dump/vzdump-qemu-162-2019_11_22-02_35_22.vma.gz | vma extract -v -r /var/tmp/vzdumptmp29977.fifo - /var/tmp/vzdumptmp29977' failed: storage 'storage-1' does not exists
 
Please post the output of pveversion -v as well as the VM config of the backup and the storage config (/etc/pve/storage.cfg).
 
Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-2-pve)
pve-manager: 6.0-12 (running version: 6.0-12/0a603350)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.10.17-3-pve: 4.10.17-23
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.4.35-1-pve: 4.4.35-76
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-7
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-3
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Code:
dir: local
    path /var/lib/vz
    content images,vztmpl,iso,rootdir
    maxfiles 0

iscsi: storage
    portal xxx
    target xx-xx-xx
    content images

lvm: core-hdd
    vgname core-hdd
    content images,rootdir
    shared 1

nfs: images
    export /store/templates
    path /mnt/pve/images
    server xxx
    content backup,vztmpl,iso
    maxfiles 0
    options vers=3

nfs: backups
    export /store/backups/core
    path /mnt/pve/backups
    server xxx
    content backup
    maxfiles 1
    options vers=3
Code:
agent: 1,fstrim_cloned_disks=1
boot: c
bootdisk: scsi0
cores: 1
hotplug: disk,network,usb,memory,cpu
ide2: storage-1:vm-162-cloudinit,media=cdrom
memory: 1024
name: Ubuntu18LTS
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0
numa: 1
onboot: 1
scsi0: storage-1:vm-162-disk-0,cache=writeback,size=2252M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=xxxx
sockets: 1
tablet: 0
template: 1
vga: serial0
vmgenid: xxxx2c
#qmdump#map:scsi0:drive-scsi0:storage-1:raw:
 
With qemu-server 6.1-2 I can't reproduce this here. If the storage is not available and I restore to a different one, it works.
Please provide the output of pveversion -v again, as well as the VM config in the backup. If the storage.cfg changed, please provide it as well. And the restore log, please.
 
As requested.

proxmox-ve: 6.1-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.10.17-3-pve: 4.10.17-23
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.4.35-1-pve: 4.4.35-76
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

The restore log and the storage config are the same as before, as is the error itself.
 
Please reboot and then try again. If it still fails, provide the complete restore log.
As mentioned previously, I can't reproduce it here with this version of qemu-server.
 
Hi,
I've got the same error:


Bash:
qmrestore /var/lib/vz/dump/vzdump-qemu-20191017-2020_04_28-12_08_15.vma 20191017 --storage zfspool-1
restore vma archive: vma extract -v -r /var/tmp/vzdumptmp318.fifo /var/lib/vz/dump/vzdump-qemu-20191017-2020_04_28-12_08_15.vma /var/tmp/vzdumptmp318
CFG: size: 510 name: qemu-server.conf
DEV: dev_id=1 size: 10737418240 devname: drive-scsi0
CTIME: Tue Apr 28 12:08:17 2020
command 'set -o pipefail && vma extract -v -r /var/tmp/vzdumptmp318.fifo /var/lib/vz/dump/vzdump-qemu-20191017-2020_04_28-12_08_15.vma /var/tmp/vzdumptmp318' failed: storage 'local-lvm' does not exist

Proxmox is up-to-date:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Please provide the backup config and your storage config (/etc/pve/storage.cfg).
 
Bash:
 cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: zfspool-1
    pool rpool/data
    sparse
    content images,rootdir

zfspool: zfspool-2
    pool nvme/data
    sparse
    content images,rootdir

Backup config:
Code:
agent: 1
boot: cn
bootdisk: scsi0
cores: 8
hotplug: disk,network,usb,memory,cpu
ide0: local-lvm:vm-20191017-cloudinit,media=cdrom
ide2: none,media=cdrom
memory: 1024
name: debian-10.1
net0: virtio=62:E4:8F:C5:7C:FE,bridge=vmbr1,tag=20
numa: 1
onboot: 0
ostype: l26
scsi0: local-lvm:base-20191017-disk-0,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=481487bc-25d2-443c-ac1e-8597db41c112
sockets: 1
template: 1
vcpus: 2
vmgenid: 6cfa55e7-8a60-41bc-82b0-a83c2840de89
#qmdump#map:scsi0:drive-scsi0:local-lvm:raw:
 
I've managed to import this backup. First I extracted it to a temporary directory, then moved the extracted qemu-server.conf to the /etc/pve/qemu-server directory, removed all disks from the config (cloudinit and scsi0), then imported the extracted disk-drive-scsi0.raw into the VM and re-created the cloudinit disk. It wasn't easy, but it can be done. :-)
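Roughly, those steps look like this on the CLI (only a sketch based on the VM ID 20191017 and storage zfspool-1 from this thread; the extracted file names and the resulting disk name may differ, so check the qm importdisk output):

Bash:
# 1) Extract the archive (embedded config + raw disk images) into a new directory
vma extract /var/lib/vz/dump/vzdump-qemu-20191017-2020_04_28-12_08_15.vma /tmp/restore-20191017

# 2) Copy the extracted config into place, then edit it and remove the disk
#    lines (ide0 cloud-init and scsi0) that reference the old storage
cp /tmp/restore-20191017/qemu-server.conf /etc/pve/qemu-server/20191017.conf

# 3) Import the extracted raw disk into the target storage and attach it
#    (the volume name below is what importdisk typically creates - verify it)
qm importdisk 20191017 /tmp/restore-20191017/disk-drive-scsi0.raw zfspool-1
qm set 20191017 --scsi0 zfspool-1:vm-20191017-disk-0

# 4) Re-create the cloud-init disk on the new storage
qm set 20191017 --ide0 zfspool-1:cloudinit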
 
I've managed to import this backup. First I extracted it to a temporary directory, then moved the extracted qemu-server.conf to the /etc/pve/qemu-server directory, removed all disks from the config (cloudinit and scsi0), then imported the extracted disk-drive-scsi0.raw into the VM and re-created the cloudinit disk. It wasn't easy, but it can be done. :)
Can you please give more details? It seems you have the solution everyone is looking for to solve this backup-and-restore problem. Proxmox, you should fix this; it's no joke. A solution with no easy way to back up and restore has no future and won't be trusted by people.
 
Hi. I extracted it to a temporary directory and moved the extracted qemu-server.conf to the /etc/pve/qemu-server directory; that part I managed. But can you explain the next steps?
 
