Correcting a broken restore

Thread starter: BillW
Hello folks. Thanks for your input.

Importing/restoring a configuration from another Proxmox instance.

The import is of a VZDump, done via the GUI of a Proxmox 2.2-24 cluster node, restoring to an NFS share named "data".
The restore completes without errors and a file called /mnt/pve/data/images/112/vm-112-disk-1.qcow2 is created.
In the restored VM's configuration, however, the hard disk (sata0) is still listed as lili:100/vm-100-disk-1.qcow2,size=40G.

So a few problems:
1. The disk location wasn't rewritten from lili to data, leaving the disk inaccessible and breaking the ability to import a VZDump.
2. The configuration utilities don't like editing disk references.
3. The configuration utilities refuse to destroy a broken configuration until you delete the offending "missing" disk. (a bug? but easy to work around; see the sketch after this list)
4. There isn't an obvious way to add a disk from an existing file.
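For 2 and 3 there is a manual workaround (a sketch, assuming the VM is stopped and you are willing to edit the config by hand). On Proxmox 2.x each VM's configuration is a plain text file under /etc/pve/qemu-server/, so the stale reference can be corrected there, and once the offending line is gone, destroying the VM works again:

# edit the config directly; pmxcfs replicates the change cluster-wide
nano /etc/pve/qemu-server/112.conf
#   change: sata0: lili:100/vm-100-disk-1.qcow2,size=40G
#   to:     sata0: data:112/vm-112-disk-1.qcow2
# or delete the sata0 line entirely, after which this succeeds:
qm destroy 112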


Process:
1) Move the files to be imported to /mnt/pve/data/dump (see the example after this list)
2) Restore
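For step 1, assuming the source node is reachable over SSH (the hostname oldnode and the default dump path /var/lib/vz/dump are placeholders for your environment):

# copy the backup archive onto the NFS-backed dump directory
scp root@oldnode:/var/lib/vz/dump/vzdump-qemu-100-2012_11_22-14_28_44.tar.lzo /mnt/pve/data/dump/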

The output of the GUI restore command:
extracting archive '/mnt/pve/data/dump/vzdump-qemu-100-2012_11_22-14_28_44.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-sata0.qcow2' from archive
Formatting '/mnt/pve/data/images/112/vm-112-disk-1.qcow2', fmt=qcow2 size=32768 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
new volume ID is 'data:112/vm-112-disk-1.qcow2'
restore data to '/mnt/pve/data/images/112/vm-112-disk-1.qcow2' (3900375040 bytes)
1+3959650 records in
14878+1 records out
3900375040 bytes (3.9 GB) copied, 170.048 s, 22.9 MB/s
TASK OK

The output of qm config 112:
root@c004:/mnt/pve/data/images/112# qm config 112
bootdisk: sata0
cores: 1
memory: 1024
name: systest
net0: rtl8139=E2:69:83:AF:14:BD,bridge=vmbr0
ostype: l26
sata0: lili:100/vm-100-disk-1.qcow2,size=40G
sockets: 1

root@c004:/mnt/pve/data/images/112# ls
vm-112-disk-1.qcow2
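The restored image itself can be sanity-checked with qemu-img (part of the pve-qemu-kvm package); this wasn't in my original session, but it confirms the qcow2 is readable:

# qemu-img info /mnt/pve/data/images/112/vm-112-disk-1.qcow2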

I also tried a command-line restore:
root@c004:/mnt/pve/data/dump# qmrestore /mnt/pve/data/dump/*100*.lzo 112 -force -storage data -pool CSGLab -unique
extracting archive '/mnt/pve/data/dump/vzdump-qemu-100-2012_11_22-14_28_44.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-sata0.qcow2' from archive
Formatting '/mnt/pve/data/images/112/vm-112-disk-1.qcow2', fmt=qcow2 size=32768 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
new volume ID is 'data:112/vm-112-disk-1.qcow2'
restore data to '/mnt/pve/data/images/112/vm-112-disk-1.qcow2' (3900375040 bytes)
5+3589701 records in
14878+1 records out
3900375040 bytes (3.9 GB) copied, 211.397 s, 18.5 MB/s


As a side note, I also imported a second image from the same source that did not experience any issues.


 
Seems you are using an old system. Please post the output from

# pveversion -v

and update your system before you run the restore job.
 

As stated in my post, I'm running 2.2-24.
Here is the output of pveversion -v

root@c004:~# pveversion -v
pve-manager: 2.2-24 (pve-manager/2.2/7f9cfa4c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-16-pve: 2.6.32-80
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-1
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-28
qemu-server: 2.0-62
pve-firmware: 1.0-21
libpve-common-perl: 1.0-36
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-34
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1

I then ran an apt-get upgrade -y and reran the pveversion command.
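(Side note: plain apt-get upgrade holds back packages that would require new dependencies; the usual upgrade sequence on Proxmox is:

# apt-get update
# apt-get dist-upgrade

The partial upgrade turned out to be enough here.)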

root@c004:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1

The delta is:
pve-manager 2.2-24 -> 2.2-31
redhat-cluster-pve 3.1.93-1 -> 3.1.93-2
pve-cluster 1.0-28 -> 1.0-33
qemu-server 2.0-62 -> 2.0-69
libpve-common-perl 1.0-36 -> 1.0-39
libpve-storage-perl 2.0-34 -> 2.0-36

Running the restore from the GUI produced the following output:
extracting archive '/mnt/pve/data/dump/vzdump-qemu-100-2012_11_22-14_28_44.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-sata0.qcow2' from archive
Formatting '/mnt/pve/data/images/112/vm-112-disk-1.qcow2', fmt=qcow2 size=32768 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
new volume ID is 'data:112/vm-112-disk-1.qcow2'
restore data to '/mnt/pve/data/images/112/vm-112-disk-1.qcow2' (3900375040 bytes)
14+3865056 records in
14878+1 records out
3900375040 bytes (3.9 GB) copied, 171.841 s, 22.7 MB/s
TASK OK

and a quick check of the imported config shows:
root@c004:~# qm config 112
bootdisk: sata0
cores: 1
memory: 1024
name: systest
net0: rtl8139=E2:69:83:AF:14:BD,bridge=vmbr0
ostype: l26
sata0: data:112/vm-112-disk-1.qcow2
sockets: 1

So, the primary import problem is fixed. Thanks!

Some remaining thoughts on the original list of problems:
1. The disk location wasn't rewritten from lili to data, making the disk inaccessible and breaking the ability to import a VZDump. -FIXED-
2. The configuration utilities don't like editing disk references.
3. The configuration utilities refuse to destroy a broken configuration until you delete the offending "missing" disk. (a bug? but easy to work around)
4. There isn't an obvious way to add a disk from an existing file.


For 2 and 3, I haven't reproduced them on this build since the import worked. Are they still an issue?
For 4, this still seems like needed functionality (though there may be a CLI route; see the sketch below).
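A sketch for 4, assuming qm set accepts a full volume ID for a file that already exists on the storage (the filename vm-112-disk-2.qcow2 is hypothetical, standing in for a pre-existing image under /mnt/pve/data/images/112/):

# attach an existing qcow2 to VM 112 as a second SATA disk
qm set 112 -sata1 data:112/vm-112-disk-2.qcow2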

Thanks again for the help.