Cannot create a snapshot of the pve-data lvm?

c0mputerking

Renowned Member
Oct 5, 2011
Hello all, I am trying to create a snapshot of my /dev/mapper/pve-data partition so I can move things to RAID1, but I am getting the error below. I can rsync /var/lib/vz, but I am worried that it may miss something and would prefer a snapshot.

lvcreate -s -L 2G -n data-snapshot /dev/mapper/pve-data
FATAL: Error inserting dm_snapshot (/lib/modules/2.6.32-16-pve/kernel/drivers/md/dm-snapshot.ko): Invalid module format
/sbin/modprobe failed: 1
Can't process LV data-snapshot: snapshot target support missing from kernel?
Failed to suspend origin data
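
That "Invalid module format" error usually means the module on disk was not built by the kernel image that is actually running. A quick way to check, as a sketch (the `modinfo` call is guarded so it can run anywhere, but the interesting output only appears on the affected host):

```shell
#!/bin/sh
# Compare the running kernel with the vermagic of the on-disk module.
running="$(uname -r)"
echo "running kernel: $running"

# modinfo may be absent on non-PVE machines, so guard the call
if command -v modinfo >/dev/null 2>&1; then
    vermagic="$(modinfo -F vermagic dm_snapshot 2>/dev/null | awk '{print $1}')"
    echo "module vermagic: ${vermagic:-unknown}"
    # Even when the version strings look identical, a kernel package
    # update can replace /lib/modules/<ver>/ in place; until a reboot,
    # modprobe then loads a newer module build into an older kernel
    # image and fails with "Invalid module format".
fi
```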
 
Maybe you installed a new kernel, and removed the old one, but forgot to reboot?

Please post the output of:

# pveversion -v
 
Yes, I think you nailed it; this machine does need a reboot, as I am in the process of converting it to RAID 1. However, I lost my nerve about halfway through the process when I realized that the machine is remote and I should probably be on-site for the reboot. I have to book an appointment for access, so I have decided to never reboot it again :)

Here are the steps I am following/building. They are not complete or properly edited, and are mostly borrowed from other posts. I'm at the rsync part, which is not going well at the moment: the new partition is already almost twice the size of the old one.

http://computerking.ca/raid1-on-existing-system/


The pveversion -v output:
pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1
 
This error is hitting me too. There are 13 machines in this cluster, only 3 of them are doing this. The others all seem to be ok. Before I take steps to fix this, I want to fully understand what is happening to cause this. It's not impacting service at this point.

12 of the 13 machines are running the same versions of everything (the one remaining is running 2.6.32-11-pve and is working fine, waiting for a maintenance window to reboot it). The 3 machines with the problem all show the same output for pveversion -v. I don't think there is anything obviously wrong with this output other than that the module won't load:

ProxMox22:root@lgcldr57:~# modinfo dm_snapshot
filename: /lib/modules/2.6.32-16-pve/kernel/drivers/md/dm-snapshot.ko
license: GPL
author: Joe Thornber
description: device-mapper snapshot target
srcversion: 8B0ADB0071DDCBBD2FD8D35
depends:
vermagic: 2.6.32-16-pve SMP mod_unload modversions

ProxMox22:root@lgcldr57:~# modprobe -v dm_snapshot
insmod /lib/modules/2.6.32-16-pve/kernel/drivers/md/dm-snapshot.ko
FATAL: Error inserting dm_snapshot (/lib/modules/2.6.32-16-pve/kernel/drivers/md/dm-snapshot.ko): Invalid module format

ProxMox22:root@lgcldr57:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1


ProxMox22:root@lgcldr57:~# uname -a
Linux lgcldr57 2.6.32-16-pve #1 SMP Mon Oct 22 08:38:13 CEST 2012 x86_64 GNU/Linux
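
Since only 3 of 13 otherwise identical nodes fail, one way to narrow it down is to compare the module file itself between a failing and a working node. This is a sketch; the hostname comes from the thread, and the `ssh` line would be run from the failing node:

```shell
#!/bin/sh
# If the checksums match, the file on disk is fine and the difference
# is the kernel image each node actually booted: the failing nodes
# likely booted an earlier build of 2.6.32-16-pve before the package
# was updated in place with the same version number.
f=/lib/modules/2.6.32-16-pve/kernel/drivers/md/dm-snapshot.ko
if [ -r "$f" ]; then
    md5sum "$f"
    ssh lgcldr53 md5sum "$f"   # compare against a working node
else
    echo "module file not present on this machine: $f"
fi
```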


The output is identical on a machine where the dm-snapshot module properly loads:

ProxMox22:root@lgcldr53:~# lsmod | grep dm_
dm_snapshot 31649 0

ProxMox22:root@lgcldr53:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1


Any ideas? As of now, my plan is to shut down the VM, migrate it to a different machine, and start it up. Then I will reboot this Proxmox server and make sure that it can load the dm-snapshot module (the - and _ seem to be interchangeable in module names, which I didn't realize before). Once it's able to load that module, I'll reverse the steps and move the VM back, then test to make sure that a backup of the VM doesn't halt the VM.
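
The post-reboot verification could look like the sketch below (the privileged commands are shown as comments since they need root on the PVE host). The dash/underscore observation is correct: modprobe normalises `-` to `_`, so dm-snapshot and dm_snapshot name the same module.

```shell
#!/bin/sh
# After the reboot, on the PVE host (needs root):
#   modprobe dm_snapshot && lsmod | grep dm_snapshot
#   lvcreate -s -L 2G -n data-snapshot /dev/mapper/pve-data
#   lvremove /dev/pve/data-snapshot    # once the copy is done
#
# The kernel's canonical module name replaces '-' with '_':
canonical=$(printf '%s' "dm-snapshot" | tr '-' '_')
echo "$canonical"
```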
 
