[SOLVED] Restore VM to CEPH

forest79

Renowned Member
Sep 23, 2015
Hi,

I have a three-node CEPH cluster with two RBD pools and a manually edited CRUSH map, which splits the SSD and SATA disks into separate CEPH rules.
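
The split is done with one root bucket per disk type and a rule on top of each; simplified, the rules look roughly like this (bucket and rule names here are just illustrative):

Code:
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd        # root bucket containing only the SSD OSDs
        step chooseleaf firstn 0 type host
        step emit
}

rule sata {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sata       # root bucket containing only the SATA OSDs
        step chooseleaf firstn 0 type host
        step emit
}

Each pool is then pointed at its rule with ceph osd pool set <pool> crush_ruleset <n>.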

When I restore a VM from backup to either of the CEPH pools, it fails with this error:

Code:
restore vma archive: lzop -d -c /mnt/pve/nas-backup/dump/vzdump-qemu-200-2016_02_18-15_42_29.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp40559.fifo - /var/tmp/vzdumptmp40559
CFG: size: 325 name: qemu-server.conf
DEV: dev_id=1 size: 10737418240 devname: drive-virtio0
CTIME: Thu Feb 18 15:42:32 2016
libust[40597/40597]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
new volume ID is 'ceph-sata:vm-201-disk-1'
map 'drive-virtio0' to 'rbd:sata/vm-201-disk-1:mon_host=192.168.100.1\:6789;192.168.100.2\:6789;192.168.100.3\:6789:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph-sata.keyring' (write zeros = 0)
libust[40562/40562]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)

** (process:40562): ERROR **: can't open file rbd:sata/vm-201-disk-1:mon_host=192.168.100.1\:6789;192.168.100.2\:6789;192.168.100.3\:6789:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph-sata.keyring - Unknown driver 'keyring'
/bin/bash: line 1: 40561 Broken pipe             lzop -d -c /mnt/pve/nas-backup/dump/vzdump-qemu-200-2016_02_18-15_42_29.vma.lzo
     40562 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp40559.fifo - /var/tmp/vzdumptmp40559
libust[40624/40624]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
Removing all snapshots: 100% complete...done.
libust[40654/40654]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
Removing image: 1% complete...
Removing image: 2% complete...
Removing image: 3% complete...
...
Removing image: 98% complete...
Removing image: 99% complete...
Removing image: 100% complete...done.
temporary volume 'ceph-sata:vm-201-disk-1' sucessfuly removed
TASK ERROR: command 'lzop -d -c /mnt/pve/nas-backup/dump/vzdump-qemu-200-2016_02_18-15_42_29.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp40559.fifo - /var/tmp/vzdumptmp40559' failed: exit code 133
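
For context, the target storage is defined roughly like this in /etc/pve/storage.cfg (reconstructed from the rbd path in the log above, so the exact options may differ), with the keyring copied to /etc/pve/priv/ceph/ceph-sata.keyring:

Code:
rbd: ceph-sata
        monhost 192.168.100.1;192.168.100.2;192.168.100.3
        pool sata
        username admin
        content images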

Here is my version info:

Code:
pveversion -v
proxmox-ve: 4.1-34 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-10 (running version: 4.1-10/de913a46)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-31
qemu-server: 4.0-52
pve-firmware: 1.1-7
libpve-common-perl: 4.0-46
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-4
pve-container: 1.0-41
pve-firewall: 2.0-16
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-6
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie
 
I am having the same issue (Unknown driver 'keyring') when attempting a backup from a Ceph-stored VM. Version info:

Code:
proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-37
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

Paid for Community subscription :)
 
same here.

Code:
# pveversion -v
proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-37
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

Code:
# ceph version
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
 
same problem:

Code:
# pveversion -v
proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-4.2.8-1-pve: 4.2.8-37
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
openvswitch-switch: not correctly installed

Code:
# ceph -v
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
 
We're looking into it and think we may have found the issue. Working on it, thanks for reporting.
 
Yes, the issue seems fixed. Packages will be uploaded to pvetest soon (pve-qemu-kvm 2.5-8).
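
Once they are on pvetest, something like this should pull the fix in (a sketch for PVE 4.x on jessie; adjust to your repository setup):

Code:
# enable the pvetest repository and upgrade pve-qemu-kvm
echo "deb http://download.proxmox.com/debian jessie pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update
apt-get install pve-qemu-kvm
pveversion -v | grep pve-qemu-kvm    # should now report 2.5-8 or newer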
 
Thanks for the feedback!