Restoring to CEPH

nz_monkey

Jan 17, 2013
Hi,

I just got my Proxmox VE cluster up and running with Ceph. I tried migrating one of my VMs from local storage to RBD by taking a backup and then restoring it to a new RBD-based VM. Unfortunately this does not work; the error I get is:

Code:
extracting archive '/var/lib/vz/dump/vzdump-qemu-101-2013_01_17-14_17_52.tar.lzo'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-virtio0.raw' from archive
new volume ID is 'ceph:vm-103-disk-1'
restore data to 'rbd:rbd/vm-103-disk-1:id=admin:auth_supported=cephx\;none:keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=10.8.1.1\:6789\;10.8.1.2\:6789\;10.8.1.3\:6789' (21474836480 bytes)
unable to open file 'rbd:rbd/vm-103-disk-1:id=admin:auth_supported=cephx\;none:keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=10.8.1.1\:6789\;10.8.1.2\:6789\;10.8.1.3\:6789' - No such file or directory
tar: vm-disk-virtio0.raw: Cannot write: Broken pipe

When will backup/restore using RBD be supported in Proxmox VE?


The ability to do "live block migrations" as supported by QEMU 1.3 would also be a very welcome addition :)
 
You can do it manually for old archives. For the images, just untar the archive and do an rbd import. Then take the config, rename it to a free VMID, place it in /etc/pve/qemu-server/, and fix the parameters inside.
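For example, the manual procedure could look roughly like this (a sketch only, reusing the archive and volume names from the log above; the target VMID 103 and the "rbd" pool are assumptions to adapt to your setup):

Code:
# unpack the vzdump archive (.tar.lzo is lzop-compressed)
mkdir /tmp/restore && cd /tmp/restore
lzop -dc /var/lib/vz/dump/vzdump-qemu-101-2013_01_17-14_17_52.tar.lzo | tar xf -
# import the raw disk image into the rbd pool
rbd import vm-disk-virtio0.raw rbd/vm-103-disk-1
# install the config under a free VMID, then edit it so
# virtio0 points at ceph:vm-103-disk-1
cp qemu-server.conf /etc/pve/qemu-server/103.conf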
 

Thank you for the reply.
I resolved it another way (the CLI equivalent is sketched below):
- add a new SATA HDD to one node of the cluster;
- add a new "directory" storage on that node using the new HDD;
- restore the VM into that storage;
- live-move the disk to the Ceph storage;
- remove the SATA HDD.
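On the command line that sequence would look roughly like this (a sketch only; "localsata" stands for the temporary directory storage and 103 for a free VMID, and qm move_disk assumes a Proxmox VE version that ships the disk move feature):

Code:
# restore the backup onto the temporary directory storage
qmrestore /var/lib/vz/dump/vzdump-qemu-101-2013_01_17-14_17_52.tar.lzo 103 --storage localsata
# live-move the disk onto the ceph storage, deleting the source image
qm move_disk 103 virtio0 ceph --delete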

Lorenzo