Snapshot creation fails

tuxmin

New Member
Feb 13, 2013
Hi all,

I'm running a 2-node cluster of Proxmox VE 2.2 with all the latest patches applied.

I have an iSCSI volume attached and put under LVM control (no direct access possible). Unfortunately, whenever I try to create a snapshot of a VM on this storage, the task immediately throws an error, e.g.:

TASK ERROR: can't snapshot volume 'lvm-iscsi:vm-210-disk-1'


In the logs it says:

syslog:Feb 13 16:32:42 megasrv02 qm[14704]: <root@pam> starting task UPID:megasrv02:00003971:0009BFA7:511BB21A:qmsnapshot:210:root@pam:
syslog:Feb 13 16:32:42 megasrv02 qm[14704]: <root@pam> end task UPID:megasrv02:00003971:0009BFA7:511BB21A:qmsnapshot:210:root@pam: can't snapshot volume 'lvm-iscsi:vm-210-disk-1'



If I try the qm tool directly, I run into the same issue:

root@megasrv02:/var/log# qm snapshot 210 test
can't snapshot volume 'lvm-iscsi:vm-210-disk-1'



Here are the details about the volume group and logical volume:

--- Volume group ---
VG Name vg_images
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 15
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 500.00 GiB
PE Size 4.00 MiB
Total PE 127999
Alloc PE / Size 14336 / 56.00 GiB
Free PE / Size 113663 / 444.00 GiB
VG UUID pzOmFn-wDWp-x48q-FjYm-wc16-YiBS-X3Onqg


--- Logical volume ---
LV Path /dev/vg_images/vm-210-disk-1
LV Name vm-210-disk-1
VG Name vg_images
LV UUID qXDq6E-n20o-BHWg-2meF-OiNi-fq02-5s7Ej8
LV Write Access read/write
LV Creation host, time megasrv02, 2013-01-11 16:17:37 +0100
LV Status available
# open 1
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6


I can manage snapshots manually, though:

root@megasrv02:/var/log# lvcreate -s /dev/vg_images/vm-210-disk-1 -n snap-210-disk-1 -L2G
Logical volume "snap-210-disk-1" created


root@megasrv02:/var/log# lvremove /dev/vg_images/snap-210-disk-1
Do you really want to remove active logical volume snap-210-disk-1? [y/n]: y
Logical volume "snap-210-disk-1" successfully removed




What am I doing wrong? Digging through the forum, I get the impression that the snapshot functionality is working for everyone but me!?


Any ideas?
 
Also post the output of:
Code:
cat /etc/pve/storage.cfg

qm config 210

pveversion -v
 
Hi Tom,

thanks for the quick reply. Here's the requested information:

root@megasrv02:/var/log# cat /etc/pve/storage.cfg
iscsi: iscsi-megabitz
target iqn.2012-12.megaNAS00.megabitz.de:iqn.2013-01.de.megabitz
portal 192.168.129.7
content none

dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0

lvm: lvm-iscsi
vgname vg_images
base iscsi-megabitz:0.0.0.scsi-360014052e50672003b3d002000000000
shared
content images

nfs: nfs-megabitz
path /mnt/pve/nfs-megabitz
server 192.168.129.7
export /c/nfsshare
options vers=3
content images,iso,vztmpl,rootdir,backup
maxfiles 1



root@megasrv02:/var/log# qm config 210
bootdisk: virtio0
cores: 1
ide2: nfs-megabitz:iso/debian-6.0.6-amd64-netinst.iso,media=cdrom,size=168M
memory: 512
name: debian-clone
net0: virtio=E6:B4:A9:7F:EA:7B,bridge=vmbr0
ostype: l26
sockets: 2
virtio0: lvm-iscsi:vm-210-disk-1


root@megasrv02:/var/log# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1



Best,

alex
 
Live snapshots only work with qcow2 files (or storage engines with snapshot support like rbd, sheepdog or nexenta).
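If you want to keep the current iSCSI/LVM setup, one possible workaround (a sketch only, not tested on your configuration) is to move the disk to a qcow2 file on a directory or NFS storage, since those formats do support live snapshots. For example, using the 'local' storage from your storage.cfg:

Code:
# shut the VM down so the raw LVM volume is not in use during conversion
qm shutdown 210

# convert the raw LVM volume into a qcow2 image on the 'local' directory storage
mkdir -p /var/lib/vz/images/210
qemu-img convert -O qcow2 /dev/vg_images/vm-210-disk-1 /var/lib/vz/images/210/vm-210-disk-1.qcow2

# point the VM at the new qcow2 disk and start it again
qm set 210 --virtio0 local:210/vm-210-disk-1.qcow2
qm start 210

Once the VM boots fine from the qcow2 disk, the old LVM volume can be removed with lvremove. Keep in mind that a plain directory storage like 'local' is not shared between your two nodes, so for a shared setup the qcow2 file would have to live on the NFS storage instead.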
 
Hi Tom and Dietmar,

thanks for your responses!
One final question, though. I was reading about sheepdog, Nexenta and rbd, and it appears Nexenta is the easiest to set up; 18 TB for the Community Edition is more than enough for small to medium-sized setups.
One thing that's bothering me about my current central iSCSI setup for a 2-node cluster is that LVM snapshots, and especially cloning, create a huge network load. If I chose a Nexenta storage engine, for instance, could I expect that not only snapshots but also clones are triggered directly on the storage (SAN), thus not putting any load on the network? How about snap clones and other advanced SAN features?


Cheers,

alex
 
Hi Tom,

thanks for pointing this out! Apart from that: does the Proxmox Nexenta API support cloning directly on the storage engine?

Cheers,

alex
 
