Backup Problem due to a snapshot problem in LVM2

whinpo

Renowned Member
Jan 11, 2010
My weekly backup didn't work on one VM.

Code:
Jun 11 22:00:02 neptune3 vzdump[8351]: INFO: starting new backup job: vzdump --quiet --node 12 --snapshot --compress --storage NFSBackup  --all
Jun 11 22:00:02 neptune3 vzdump[8351]: INFO: Starting Backup of VM 105 (qemu)
Jun 11 22:00:04 neptune3 kernel: device-mapper: table: 251:37: snapshot-origin: Cannot get target device
Jun 11 22:00:04 neptune3 kernel: device-mapper: ioctl: error adding target to table
Jun 11 22:00:05 neptune3 vzdump[8351]: ERROR: Backup of VM 105 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-neptune3-0' '/dev/Proxmox-SR/vm-105-disk-1'' failed with exit code 5

Code:
Jun 11 22:00:02 INFO: Starting Backup of VM 105 (qemu)
Jun 11 22:00:03 INFO: running
Jun 11 22:00:03 INFO: status = running
Jun 11 22:00:04 INFO: backup mode: snapshot
Jun 11 22:00:04 INFO: bandwidth limit: 10240 KB/s
Jun 11 22:00:04 INFO:   device-mapper: create ioctl failed: Device or resource busy
Jun 11 22:00:04 INFO:   device-mapper: reload ioctl failed: No such device or address
Jun 11 22:00:04 INFO:   Failed to suspend origin vm-105-disk-1
Jun 11 22:00:05 INFO:   Logical volume "vzsnap-neptune3-0" successfully removed
Jun 11 22:00:05 ERROR: Backup of VM 105 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-neptune3-0' '/dev/Proxmox-SR/vm-105-disk-1'' failed with exit code 5


In fact it seems there was a problem creating the snapshot.
I've already had this problem. There seems to be a bug in device-mapper/udev whenever you remove a snapshot (which happens on every backup): the snapshot remains registered in device-mapper and there is no way to remove it.
I believe it is the same bug as: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=549691

This bug is known to happen on lvm2/2.02.39-7 and lvm2/2.02.62-1, and we're running the first of those versions.

Code:
neptune3:~# dpkg -l | grep lvm
ii  lvm2                              2.02.39-7                The Linux Logical Volume Manager

This bug also causes problems whenever we take snapshots of machines for our tests: we are unable to remove the snapshots without either playing with dmsetup as described in message #57 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=549691#57) or simply rebooting.
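For reference, here is a rough sketch of the manual dmsetup cleanup along the lines of message #57 of the Debian bug. This is untested on your setup, must be run as root, and the device name in the example is hypothetical (taken from the log above); list the real leftover entries with `dmsetup ls` first.

```shell
# Sketch of the manual stale-snapshot cleanup (Debian bug #549691, msg #57).
# Run as root. The device name passed in is a hypothetical example --
# find the actual leftovers with 'dmsetup ls' before removing anything.
cleanup_stale_snapshot() {
    dm_name="$1"                     # e.g. Proxmox--SR-vzsnap--neptune3--0
    dmsetup remove "${dm_name}-cow"  # drop the copy-on-write device first
    dmsetup remove "${dm_name}"      # then the snapshot mapping itself
}

# Example invocation (do NOT run blindly):
# cleanup_stale_snapshot Proxmox--SR-vzsnap--neptune3--0
```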

The problem is that it seems to have other symptoms as well. I migrated my VM in order to reboot the Proxmox VE node, and then the VM crashed with no way to restart it; I had to restore it...

That's why I preferred to bring this problem up...

Does anybody else have the same problem? Any workaround?

Code:
neptune3:~# pveversion -v
pve-manager: 1.5-9 (pve-manager/1.5/4728)
running kernel: 2.6.32-2-pve
proxmox-ve-2.6.32: 1.5-7
pve-kernel-2.6.32-2-pve: 2.6.32-7
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-14
pve-firmware: 1.0-4
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.3-1
ksm-control-daemon: 1.0-3
 
I'm going to prepare some "before and after" copy/paste with lvs and dmsetup.
 
Hi,
I have a similar issue. Please find the relevant information:

# cat /var/log/vzdump/qemu-102.log
Jun 30 01:00:02 INFO: Starting Backup of VM 102 (qemu)
Jun 30 01:00:02 INFO: running
Jun 30 01:00:02 INFO: status = running
Jun 30 01:00:03 INFO: backup mode: snapshot
Jun 30 01:00:03 INFO: ionice priority: 7
Jun 30 01:00:04 INFO: device-mapper: create ioctl failed: Device or resource busy
Jun 30 01:00:04 INFO: Failed to suspend origin vm-102-disk-1
Jun 30 01:00:05 INFO: Logical volume "vzsnap-s-host02-0" successfully removed
Jun 30 01:00:05 ERROR: Backup of VM 102 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-s-host02-0' '/dev/pve_vm1/vm-102-disk-1'' failed with exit code 5

# pveversion -v

pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.35: 1.8-11
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-1-pve: 2.6.35-11
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.27-1pve1
vzdump: 1.2-13
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6


# dmsetup table
pve_vm1-vm--103--disk--1: 0 67108864 linear 8:48 262160896
pve_vm1-vm--103--disk--1: 67108864 41943040 linear 8:48 794853888
pve_local_vm-local_vm: 0 950009856 linear 8:17 384
pve_vm1-vm--102--disk--1: 0 104873984 linear 8:48 1759592960
pve_vm1-vm--101--disk--2: 0 67108864 linear 8:48 794853888
pve_vm1-vm--101--disk--1: 0 67117056 linear 8:48 681599488
pve_vm1-vm--109--disk--1: 0 20971520 linear 8:48 773882368
pve-swap: 0 5242880 linear 8:2 384
pve-root: 0 10485760 linear 8:2 5243264
pve_vm1-vm--107--disk--2: 0 41943040 linear 8:48 836796928
pve-data: 0 19922944 linear 8:2 15729024
pve_vm1-vm--107--disk--1: 0 25174016 linear 8:48 1071677952
pve_vm1-vm--106--disk--1: 0 67108864 linear 8:48 614490624
pve_vm1-vm--102--disk--1-real: 0 104873984 linear 8:48 1759592960
pve_vm1-vm--110--disk--3: 0 209723392 linear 8:48 1268834816
pve_vm1-vm--111--disk--1: 0 67117056 linear 8:48 1478558208
pve_vm1-vm--105--disk--1: 0 8396800 linear 8:48 538984960
pve_vm1-vm--110--disk--2: 0 67117056 linear 8:48 1201717760
pve_vm1-vm--110--disk--1: 0 104865792 linear 8:48 1096851968
pve_vm1-vm--104--disk--1: 0 209715200 linear 8:48 861962752
pve_vm1-vm--103--disk--2: 0 209715200 linear 8:48 329269760


Any help welcome !
oban
 
Exactly the same problem here.

3-node cluster, iSCSI shared LVM storage.

px1:~# dpkg -l |grep lvm2
ii lvm2 2.02.39-8 The Linux Logical Volume Manager
px1:~#

px1:~# pveversion -v
pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-50
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
qemu-server: 1.1-32
pve-firmware: 1.0-15
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6
px1:~#

Backup of VM 204 works from the second node only.

From the first and third nodes:

px3:~# cat /var/log/vzdump/qemu-204.log
avr 04 10:19:12 INFO: Starting Backup of VM 204 (qemu)
avr 04 10:19:12 INFO: running
avr 04 10:19:12 INFO: status = running
avr 04 10:19:12 INFO: backup mode: snapshot
avr 04 10:19:12 INFO: ionice priority: 7
avr 04 10:19:13 INFO: device-mapper: create ioctl failed: Device or resource busy
avr 04 10:19:13 INFO: Failed to suspend origin vm-204-disk-1
avr 04 10:19:13 INFO: Logical volume "vzsnap-px3-0" successfully removed
avr 04 10:19:13 ERROR: Backup of VM 204 failed - command 'lvcreate --size 1024M --snapshot --name 'vzsnap-px3-0' '/dev/disque-VMs-1/vm-204-disk-1'' failed with exit code 5
px3:~#



If I try to create the snapshot manually, I get:

px1:/var/lib/vz# lvcreate --size 1024M --snapshot --name 'toto' '/dev/disque-VMs-1/vm-204-disk-1'
device-mapper: create ioctl failed: Périphérique ou ressource occupé
Failed to suspend origin vm-204-disk-1

Anyway, the snapshot is there (the French message above means "Device or resource busy"), but I can't remove it:

px1:/var/lib/vz# lvremove '/dev/disque-VMs-1/toto'
Do you really want to remove active logical volume "toto"? [y/n]: n
Logical volume "toto" not removed
Command failed with status code 5.


(I know, it is preferable to deactivate it first with lvchange -a n.)

Any help welcome...

Christophe.
 
Maybe late, but I hope this can help. I had the same issue as you guys and found dmsetup very handy. Yet I discovered another workaround by luck: use the /dev/mapper name instead of the symlink.

For example, instead of using lvremove /dev/volgroup/logvol, use lvremove /dev/mapper/volgroup-logvol. This worked on my system:

OS: Red Hat Enterprise Linux Server release 5.6 (Tikanga)
Kernel: 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 2010 x86_64 x86_64 x86_64 GNU/Linux
package: lvm2-2.02.74-5.el5

Hope this can help others.
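The mapping between the two paths is mechanical: device-mapper doubles every hyphen inside the VG and LV names and joins the two with a single hyphen (you can see this in the dmsetup table output earlier in the thread, e.g. pve_vm1-vm--102--disk--1). A small illustration, using made-up names from the first post:

```shell
# How /dev/<vg>/<lv> maps to /dev/mapper/<dm name>: device-mapper
# doubles each '-' inside the VG and LV names, then joins them with a
# single '-'. The names below are examples, not real volumes.
VG="Proxmox-SR"
LV="vzsnap-neptune3-0"

DM_NAME="$(printf '%s' "$VG" | sed 's/-/--/g')-$(printf '%s' "$LV" | sed 's/-/--/g')"
echo "/dev/mapper/$DM_NAME"   # /dev/mapper/Proxmox--SR-vzsnap--neptune3--0
```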
 