Problem to restore backed up VMs to Ceph

ewuewu

Renowned Member
Sep 14, 2010
Hamburg
Hello Forum,

I have a problem restoring VMs from an old Proxmox cluster (2.2-32) to a new one (version 3.4-6) with Ceph.

The VMs are backups from the old cluster, residing on an NFS share. If I try to restore these VMs directly to Ceph, I get errors like:

Code:
can't lock file '/var/lock/qemu-server/lock-151.conf' - got timeout (500)
extracting archive '/mnt/pve/qnap-proxmox/dump/vzdump-qemu-151-2015_06_26-13_34_21.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-ide0.raw' from archive
new volume ID is 'vm_pool:vm-152-disk-1'
restore data to 'rbd:val-pool/vm-152-disk-1:mon_host=192.168.51.52\:6789;192.168.51.51\:6789;192.168.51.50\:6789:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/vm_pool.keyring' (45101350912 bytes)
unable to open file 'rbd:val-pool/vm-152-disk-1:mon_host=192.168.51.52\:6789;192.168.51.51\:6789;192.168.51.50\:6789:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/vm_pool.keyring' - No such file or directory
tar: vm-disk-ide0.raw: Cannot write: Broken pipe

If I first restore them to a local disk (configured for images and dumps) on my Proxmox node and then move them to Ceph RBD, everything works fine.
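For reference, the two-step workaround can be sketched with the Proxmox CLI (storage names "local" and "vm_pool" and the VMID are assumptions based on the log above):

```shell
# Step 1 (assumed storage name "local"): restore the vzdump archive
# from the NFS share to local directory storage instead of Ceph.
qmrestore /mnt/pve/qnap-proxmox/dump/vzdump-qemu-151-2015_06_26-13_34_21.tar.lzo 152 --storage local

# Step 2 (assumed storage name "vm_pool"): move the restored disk to
# the Ceph RBD storage; --delete removes the local copy afterwards.
qm move_disk 152 ide0 vm_pool --delete
```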

Why can't I restore the VMs to the Ceph storage directly?
 
Hello, I've tried the restore in several ways. Restoring to a local disk (directory) and to an NFS share works fine, but a direct restore to Ceph storage still fails.

Restoring to a local disk first and moving the VM to Ceph afterwards is a bit annoying, since it roughly doubles the time the restore takes.

Any ideas?