unable to parse volume ID error restoring from 1.9

mmenaz

Renowned Member
Jun 25, 2009
Northern east Italy
Now everything *seems* to work, but something very scary is happening, and I don't know why.
I've played with 2.0 rc1 at work, restoring some VMs from the bash shell, and everything worked fine.
Today I backed up all my home VMs, installed 2.0 from scratch, copied the VM backups to /srv/backup and started restoring.
vzrestore worked fine, but qmrestore failed with the following error for ALL the KVM backups:
Code:
root@proxmox:/srv/backup# qmrestore vzdump-qemu-106-2012_03_25-03_34_34.tgz 106
unable to parse volume ID 'vzdump-qemu-106-2012_03_25-03_34_34.tgz'
I've also tried to specify the storage name, but got the same error:
Code:
root@proxmox:/srv/backup/dump# qmrestore --storage local vzdump-qemu-111-2012_03_25-03_59_43.tgz 111
unable to parse volume ID 'vzdump-qemu-111-2012_03_25-03_59_43.tgz'
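From what I can tell, the message means qmrestore is interpreting the bare filename as a storage volume ID rather than as a file path. A backup volume ID in 2.0 has the form <storage>:backup/<archive> for a storage that holds backups, so once the archive sits in the dump/ directory of such a storage, something along these lines would presumably be accepted (the storage name "backup" here is only an assumption for illustration):
Code:
# assumes a directory storage named 'backup' with content type 'backup',
# whose dump/ directory contains the archive -- adjust names to your setup
qmrestore backup:backup/vzdump-qemu-106-2012_03_25-03_34_34.tgz 106 --storage local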
I've read in an old post
http://forum.proxmox.com/threads/8936-unable-to-parse-volume-ID
that you should recreate the backup storage as "backup" pointing to /srv/backup, so I moved my backups under "dump" and restored from the web interface.
Everything seems to work, but this error scares me a little:
tar: write error
This is the task log from the web interface:
Code:
extracting archive '/srv/backup/dump/vzdump-qemu-104-2012_03_25-03_30_02.tgz'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-virtio0.raw' from archive
Formatting '/var/lib/vz/images/104/vm-104-disk-1.raw', fmt=raw size=32768
new volume ID is 'local:104/vm-104-disk-1.raw'
restore data to '/var/lib/vz/images/104/vm-104-disk-1.raw' (2147483648 bytes)
tar: write error
2147483648 bytes copied, 41 s, 49.95 MiB/s
TASK OK
This is another one:
Code:
extracting archive '/srv/backup/dump/vzdump-qemu-106-2012_03_25-03_34_34.tgz'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-ide0.qcow2' from archive
Formatting '/var/lib/vz/images/106/vm-106-disk-1.qcow2', fmt=qcow2 size=32768 encryption=off cluster_size=65536
new volume ID is 'local:106/vm-106-disk-1.qcow2'
restore data to '/var/lib/vz/images/106/vm-106-disk-1.qcow2' (12383367168 bytes)
tar: write error
920+465344 records in
47238+1 records out
12383367168 bytes (12 GB) copied, 491.913 s, 25.2 MB/s
TASK OK

Here is my info:
Code:
root@proxmox:~# pveversion -v
pve-manager: 2.0-54 (pve-manager/2.0/4b59ea39)
running kernel: 2.6.32-10-pve
proxmox-ve-2.6.32: 2.0-63
pve-kernel-2.6.32-10-pve: 2.6.32-63
lvm2: 2.02.88-2pve2
clvm: 2.02.88-2pve2
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-33
pve-firmware: 1.0-15
libpve-common-perl: 1.0-23
libpve-access-control: 1.0-17
libpve-storage-perl: 2.0-16
vncterm: 1.0-2
vzctl: 3.0.30-2pve2
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-8
ksm-control-daemon: 1.1-1
root@proxmox:~#
I have a copy of /etc from the previous 1.9 installation, if there is any info you need.
Best regards
 
I had the same issue. What I ended up doing was creating a storage directory called Backups in the web UI; once you do that, a folder called "dump" should be created in that backup folder (if not, create one), and put all of the images in there. Then go to your Backups storage, click on Content, and your old VMs should be listed there. You can click Restore in the web UI and it should work flawlessly.
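For reference, that web UI step essentially just writes a directory storage entry into /etc/pve/storage.cfg. A minimal sketch of what the entry might look like, assuming the storage is named "Backups" and points at /srv/backup (the archives then go into /srv/backup/dump/):
Code:
# /etc/pve/storage.cfg -- directory storage for backup archives (names assumed)
dir: Backups
        path /srv/backup
        content backup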
 

yes, see also https://bugzilla.proxmox.com/show_bug.cgi?id=134
 
Well...

We see the bugzilla entry...

But what is the solution? Is there a roadmap for qemu-server 2.0-35?

Is there another method to migrate KVM from 1.9?
 
Thanks.

It fails with a different message...

qmrestore /mnt/pve/nas01001/vzdump-qemu-412-2012_03_25-06_36_20.tgz 412 -storage nas01001
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused
cluster not ready - no quorum?
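Those ipcc_send_rec failures usually point at the cluster filesystem daemon behind /etc/pve (pmxcfs, started by the pve-cluster service) not running, rather than at a genuine quorum problem on a single node. Just as a sketch of what I would check first:
Code:
# ipcc errors usually mean pmxcfs (the /etc/pve filesystem daemon) is not running
ps aux | grep '[p]mxcfs'
/etc/init.d/pve-cluster restart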
 
And after installing qemu-server_2.0-35_amd64.deb...

  1. Can't log in on https://mg01.domain.com:8006 (root and password don't work; PAM realm, and the password is verified)
service vz restart
Bringing down interface venet0: ..done
Stopping OpenVZ: ..done
Starting OpenVZ: ..done
Bringing up interface venet0: ..done
vzctl set 0 --cpuunits 1000 failed: Unable to open /etc/pve/openvz/0.conf: Transport endpoint is not connected
Error in config file /etc/pve/openvz/0.conf..failed
Container(s) not found

d????????? ? ? ? ? ? pve
After a reboot, service vz restart works, but I can't log in to the web interface because it doesn't come up.


  1. I see apache2 is down... I start it and...
    Code:
    service apache2 restart
    Syntax error on line 13 of /etc/apache2/sites-enabled/pve-redirect.conf:
    SSLCertificateFile: file '/etc/pve/local/pve-ssl.pem' does not exist or is empty
    Action 'configtest' failed.
    The Apache error log may have more information.
     failed!
  2. There are no files in /etc/pve ...
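The question-mark permissions on the pve directory, the "Transport endpoint is not connected" error and the missing /etc/pve/local/pve-ssl.pem all point at the same thing: /etc/pve is a FUSE filesystem provided by pmxcfs, and it is not mounted (the apache certificate lives on that filesystem, which is why it appears to be missing). A quick sanity check, just as a sketch:
Code:
# /etc/pve should show up as a pmxcfs/FUSE mount; if not, the directory is
# empty or stale and everything that reads it (apache2, vzctl) fails
mount | grep '/etc/pve'
ls -la /etc/pve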
 
Again, your cluster does not work. cman is not running. What output do you get with:

# /etc/init.d/cman start
 
/etc/init.d/pve-cluster stop
Stopping pve cluster filesystem: pve-cluster apparently not running.
root@mg01:/var/lib/vz/template# /etc/init.d/pve-cluster start
Starting pve cluster filesystem : pve-cluster[main] crit: Unable to get local IP address
(warning).
 
You need an entry in /etc/hosts, so that your hostname can be resolved. Please can you post both files?

# cat /etc/hostname
# cat /etc/hosts
 
cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
178.33.225.199 ns231231.ovh.net ns231231
# The following lines are desirable for IPv6 capable hosts
#(added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
feo0::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

cat /etc/hostname
mg01.islaserver.com


After verifying:

/etc/init.d/pve-cluster stop
Stopping pve cluster filesystem: pve-cluster
.
root@mg01:/var/lib/vz/template#
root@mg01:/var/lib/vz/template# /etc/init.d/pve-cluster start
Starting pve cluster filesystem : pve-cluster.
root@mg01:/var/lib/vz/template#
 
Yes...
Incredible hosts file... if the hostname there is not the same as in /etc/hostname, the whole system breaks...
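For anyone hitting the same thing: pve-cluster resolves the node's IP through /etc/hosts, so the name in /etc/hostname has to appear there with the machine's real address. A sketch of the entry that presumably fixed it here, reusing the IP and hostname from the output above:
Code:
# /etc/hosts -- the name from /etc/hostname must resolve to the node's address
127.0.0.1       localhost.localdomain localhost
178.33.225.199  mg01.islaserver.com mg01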

Now /etc/pve has files, apache2 starts, and I can get into the Proxmox 2.0 interface.

And qmrestore... restores...

But... when I restore a machine from the web interface, I get 403 Forbidden...

"NOTES: Error Connection error 403: Forbidden"


qmrestore put the VMID under the older hostname
(the original hostname from OVH is nsxxxxx; after OVH set up the server we changed the hostname to mg01).
I modified /etc/hosts, /etc/hostname, /etc/issue and /etc/motd.
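In 2.0 the VM configuration files are per-node, under /etc/pve/nodes/<nodename>/qemu-server/, so after a hostname change a restored config can end up under the old node's directory and not show on the new node. A sketch of the move that should fix that (the old node name is left as a placeholder, since it is masked above):
Code:
# move the restored config from the old node directory to the new one
mv /etc/pve/nodes/<old-node>/qemu-server/412.conf /etc/pve/nodes/mg01/qemu-server/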

(attached screenshot: iSpecka 2012-04-03 a la(s) 12.39.12.jpg)

Many thanks.

I will write this up in the knowledge base.

Code:
qmrestore vzdump-qemu-412-2012_03_25-06_36_20.tgz 412 -storage nas01001
extracting archive '/var/lib/vz/dump/vzdump-qemu-412-2012_03_25-06_36_20.tgz'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-virtio0.raw' from archive
Formatting '/mnt/pve/nas01001/images/412/vm-412-disk-2.raw', fmt=raw size=32768 
new volume ID is 'nas01001:412/vm-412-disk-2.raw'
restore data to '/mnt/pve/nas01001/images/412/vm-412-disk-2.raw' (34359738368 bytes)
34359738368 bytes copied, 739 s, 44.34 MiB/s
 
