LXC snapshots not available yet? Will they work on LVM?

I tried reinstalling my host server without LVM (just software md RAID and an ext4 partition), but LXC snapshots do not seem to work there either.
Is LVM thin mode the only way to get them to work?
My lsblk output:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
|-sda1 8:1 0 1004.5K 0 part
|-sda2 8:2 0 19.5G 0 part
| `-md2 9:2 0 19.5G 0 raid1 /
|-sda3 8:3 0 1023M 0 part [SWAP]
`-sda4 8:4 0 1.8T 0 part
`-md4 9:4 0 1.8T 0 raid1 /var/lib/vz
sdb 8:16 0 1.8T 0 disk
|-sdb1 8:17 0 1004.5K 0 part
|-sdb2 8:18 0 19.5G 0 part
| `-md2 9:2 0 19.5G 0 raid1 /
|-sdb3 8:19 0 1023M 0 part [SWAP]
`-sdb4 8:20 0 1.8T 0 part
`-md4 9:4 0 1.8T 0 raid1 /var/lib/vz
loop0 7:0 0 90G 0 loop
 
I got this working, if anyone is still interested. This is a test box that happened to have an unused disk, so I'm trying to use all of it. I'm not sure about juggling free space around to do some kind of in-place conversion for you other guys.

Code:
root@kvm-1:~# pvcreate /dev/mapper/sdb1_crypt
  Physical volume "/dev/mapper/sdb1_crypt" successfully created
root@kvm-1:~# vgcreate pve-test /dev/mapper/sdb1_crypt
  Volume group "pve-test" successfully created
root@kvm-1:~# lvcreate -l100%VG --thinpool thin-pool pve-test
  Insufficient suitable allocatable extents found for logical volume thin-pool.
root@kvm-1:~# lvcreate -l99%VG --thinpool thin-pool pve-test
  Logical volume "thin-pool" created.

You can see that lvcreate failed when I tried to use 100% of the VG; I don't know why. 99% worked. *shrug*
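In case it helps anyone, here is a guess at why 100% failed, sketched as a harmless dry run (the command is only echoed, never executed; the 95%/1G split is an assumption, not a tested value): the thin pool's hidden metadata LV also needs extents, so asking for every extent in the VG may leave none for it.

```shell
# Dry run (echo only, safe anywhere). Guess: -l 100%VG leaves no extents
# for the pool's metadata LV, which lvcreate creates alongside the data LV.
# Reserving metadata space explicitly sidesteps the guesswork.
# VG name taken from the post above; 95%/1G is an assumed split.
VG="pve-test"
CMD="lvcreate -l 95%VG --poolmetadatasize 1G --thinpool thin-pool ${VG}"
echo "$CMD"
```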

I added this to my storage.cfg:
Code:
lvmthin: thin-pool
    vgname pve-test
    thinpool thin-pool

When I switched back to Firefox, the GUI had already updated and things looked right, but creates failed because I was missing a package called thin-provisioning-tools. I'm not sure if it's on the ISO, but the instructions for installing Proxmox on top of Jessie did not install it.

Anyway, I can create containers on my thin pool. I took a snapshot, changed some things, and then rolled back, and it seemed to work. But I haven't really tested anything, and I'm not sure if I'm ultimately going to use this or ZFS for containers (or whether I even like LXC containers).
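For anyone following along, the snapshot-and-rollback cycle can be sketched with pct. Shown as a dry run (commands are only echoed); the snapshot name 'before-change' is made up:

```shell
# Dry run (echo only): snapshot a container, then roll back to it.
# CT ID 105 matches the test container used later in this thread;
# the snapshot name 'before-change' is hypothetical.
CTID=105
echo "pct snapshot ${CTID} before-change"
echo "pct rollback ${CTID} before-change"
```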
 

Does restoring containers from backups to lvm-thin work? The storage did show up for me as well, but both create and restore gave an error.
 
Sorry, I guess I skimmed the thread. I thought you guys were getting stuck just creating the thin pool. But yes, restores seem to work, too.

I used the GUI to create a backup (snapshot mode) of my thin-test container from yesterday and then ran through this:
Code:
root@kvm-1:~# pct destroy 105
unable to destroy CT 105 - container is running
root@kvm-1:~# pct stop 105
root@kvm-1:~# pct destroy 105
  Logical volume "vm-105-disk-1" successfully removed
root@kvm-1:~# pct restore 105 /mnt/pve/mdnas-1/dump/vzdump-lxc-105-2016_02_25-05_56_24.tar.lzo -storage thin-pool
  Logical volume "vm-105-disk-1" created.
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 120a7744-de69-490a-b11d-3c4ce97e772e
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/mnt/pve/mdnas-1/dump/vzdump-lxc-105-2016_02_25-05_56_24.tar.lzo'
Total bytes read: 558991360 (534MiB, 204MiB/s)
Detected container architecture: amd64
root@kvm-1:~# pct start 105
root@kvm-1:~# pct config 105 | grep root
rootfs: thin-pool:vm-105-disk-1,size=8G

I had a simple ping to the container running in another terminal. Traffic was impacted at stop (obviously) and restored at start.

Does your error look something like this?
Code:
/usr/sbin/thin_check: execvp failed: No such file or directory
WARNING: Integrity check of metadata for pool pve-test/thin-pool failed.
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of pool pve-test/thin-pool failed (status:2). Manual repair required!
TASK ERROR: lvcreate 'pve-test/vm-105-disk-1' error: Failed to activate thin pool pve-test/thin-pool.

/usr/sbin/thin_check is from the thin-provisioning-tools package. After I installed it things just worked for me.
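A quick pre-flight check for that missing piece, as a small sketch (the helper name is mine; the package name comes from this thread):

```shell
# Report whether thin_check (shipped by thin-provisioning-tools, and
# invoked by LVM when activating a thin pool) is on the PATH.
check_thin_tools() {
  if command -v thin_check >/dev/null 2>&1; then
    echo "thin_check: installed"
  else
    echo "thin_check: missing - run: apt-get install thin-provisioning-tools"
  fi
}
check_thin_tools
```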
 
Hello,
after installing the thin-provisioning-tools package, snapshots could be created!

I tried an offline migration of an LXC container on non-shared storage. Unfortunately it did not work, complaining "source type 'lvmthin' not implemented":
Code:
Apr 11 10:12:11 starting migration of CT 103 to node 'internox01' (10.0.1.31)
Apr 11 10:12:11 copy mountpoint 'rootfs' (linux_containers:vm-103-disk-1) to node ' internox01'
Apr 11 10:12:11 ERROR: unable to migrate 'linux_containers:vm-103-disk-1' to 'linux_containers:vm-103-disk-1' on host '10.0.1.31' - source type 'lvmthin' not implemented
Apr 11 10:12:11 aborting phase 1 - cleanup resources
Apr 11 10:12:11 start final cleanup
Apr 11 10:12:11 ERROR: migration aborted (duration 00:00:01): unable to migrate 'linux_containers:vm-103-disk-1' to 'linux_containers:vm-103-disk-1' on host '10.0.1.31' - source type 'lvmthin' not implemented
TASK ERROR: migration aborted

The behaviour is the same whether or not the container has snapshots.

pveversion -v
Code:
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-39
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-9
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-24
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: not correctly installed

Best regards
yarick123
 
Hello,

I have upgraded my systems to PVE 4.2, but containers created on the manually created lvmthin storage still refuse to migrate. The error message is the same:

May 06 13:58:27 ERROR: unable to migrate 'linux_containers:vm-104-disk-1' to 'linux_containers:vm-104-disk-1' on host '10.0.1.32' - source type 'lvmthin' not implemented


Should I delete the storage and recreate it from the Proxmox UI? Or maybe offline migration for the lvmthin storage type just hasn't been implemented in PVE 4.2?

Regards
yarick123

proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: not correctly installed
 
Hi, I have installed Proxmox 4.2.

root@proxmox-web16:/var/lib/vz# pveversion
pve-manager/4.2-2/725d76f0 (running kernel: 4.4.6-1-pve)

I have created a Debian 8 LXC container, and I can create snapshots for it. But I cannot create snapshots for LXC containers migrated from OpenVZ:

http://imgur.com/a/8JhVV

And the LVM-thin storage is OK and enabled:
Code:
root@proxmox-web16:/var/lib/vz# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 245.24g 5.80 3.24
root pve -wi-ao---- 10.00g
swap pve -wi-ao---- 8.00g
vm-816-disk-1 pve Vwi-aotz-- 200.00g data 6.72
vm-888-disk-1 pve Vwi-a-tz-- 8.00g data 9.79

Why can't I create snapshots for LXC containers migrated from OpenVZ?

Thanks in advance.
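One thing worth checking (a guess on my part, since the container config isn't shown): whether the migrated containers' rootfs actually lives on the thin storage at all. A sketch with an invented rootfs line:

```shell
# Parse a sample 'pct config' rootfs line (the sample value is invented)
# to see which storage backs the container. Snapshots need the rootfs on
# the lvm-thin storage, not on directory storage under /var/lib/vz.
rootfs_line="rootfs: local:816/vm-816-disk-1.raw,size=200G"
storage=$(echo "$rootfs_line" | awk '{print $2}' | cut -d: -f1)
echo "$storage"
```

If the storage turns out to be a directory storage like 'local', that would explain why snapshots are refused.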
 
I have the latest version. Four days ago I ran:

# apt-get update; apt-get dist-upgrade

# reboot

And:

........

Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@proxmox-web16:~# pveversion
pve-manager/4.2-2/725d76f0 (running kernel: 4.4.6-1-pve)

I love snapshots and I need this feature with lvm-thin....
 
that is not a current version. please check your repository configuration, actually upgrade to the current version and then try again ;)
 
I don't have the Proxmox subscription repository. But the latest ISO is 4.2-725d76f0-28.

I have this version.
 
> Please update to latest version and try again.

Hello Dietmar, thank you!

I have just upgraded the cluster to the latest version
Code:
proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)

Container (with snapshots on lvm-thin) migration works well!

Best regards
yarick123
 
Unfortunately, in the new version (proxmox-ve: 4.2-56), migration of volumes with snapshots is broken again:

Jul 15 23:31:24 starting migration of CT 103 to node 'internox01' (10.0.1.31)
Jul 15 23:31:24 copy mountpoint 'rootfs' (linux_containers:vm-103-disk-1) to node ' internox01'
Jul 15 23:31:24 ERROR: can't migrate snapshot of local volume 'linux_containers:vm-103-disk-1'
Jul 15 23:31:24 aborting phase 1 - cleanup resources
Jul 15 23:31:24 start final cleanup
Jul 15 23:31:24 ERROR: migration aborted (duration 00:00:01): can't migrate snapshot of local volume 'linux_containers:vm-103-disk-1'
TASK ERROR: migration aborted

proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: not correctly installed

Regards
yarick123
 
migrating snapshots of local disks is only possible with zfs (because with zfs send/receive snapshots are also transferred).
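For comparison, a dry-run sketch (commands are only echoed; the dataset name is made up, and the target IP is taken from the logs above) of what zfs send -R does that LVM-thin has no counterpart for:

```shell
# Dry run (echo only): zfs send -R replicates a dataset together with
# all of its snapshots, which is why PVE can migrate snapshots on ZFS.
SRC="rpool/data/subvol-103-disk-1"
DST="root@10.0.1.31"
echo "zfs snapshot ${SRC}@migrate"
echo "zfs send -R ${SRC}@migrate | ssh ${DST} zfs receive -F ${SRC}"
```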
 
Hi Fabian, thank you for the answer!

> migrating snapshots of local disks is only possible with zfs (because with zfs send/receive snapshots are also transferred).

It is a pity! I suppose it worked in proxmox-ve 4.2-54.

Is it really not possible to have this functionality with lvm-thin?

If it is impossible, then the snapshot feature on lvm-thin doesn't seem very logical to me: once a snapshot is created, the container can no longer be migrated until the snapshot is deleted :(

Regards
yarick123
 
> It is a pity! I suppose it worked in proxmox-ve 4.2-54.

no, that never worked

> Is it really not possible to have this functionality with lvm-thin?

until someone implements something like zfs send/receive for LVM (thin)

> If it is impossible, then the snapshot feature on lvm-thin doesn't seem very logical to me: once a snapshot is created, the container can no longer be migrated until the snapshot is deleted :(

if you need both, use a shared storage with snapshotting capability, or zfs.
 
> that is not a current version. please check your repository configuration, actually upgrade to the current version and then try again ;)
Hi. I'm testing an upgrade from 3.x to 4.x in a VM. I've migrated the containers, and now the only problem is that snapshot mode in backups is not working; it falls back to suspend. I'm using a single node, and these are the current versions:

proxmox-ve: 4.2-60 (running kernel: 4.4.15-1-pve)
pve-manager: 4.2-17 (running version: 4.2-17/e1400248)
pve-kernel-4.4.15-1-pve: 4.4.15-60
lvm2: 2.02.116-pve2

BTW, I'm using Debian Jessie on the host with the no-subscription repos and 64-bit arch. I've tested both LVM and LVM-thin for the rootfs, and vzdump doesn't seem to support snapshots on LVM (unlike in 3.x). I've created a simple script to mimic at least the "data stage" backup, with a custom prefix instead of "openvz" just to tell them apart. Here's the bash script:


#!/bin/bash
PREFIX="ozback"
BACKUPDIR="/var/lib/vz/backups/dump"
for lxc in "$@"; do
    if config=$(pct config "$lxc"); then
        rootfs=$(echo "$config" | grep ^rootfs | awk '{ print $2 }' | cut -d, -f1)
        echo "lxc ($lxc), rootfs ($rootfs)"
        DATETIME=$(date +"%Y%m%d%H%M%S")
        BASE="${PREFIX}-${lxc}-${DATETIME}"
        LOGFILE="${BACKUPDIR}/${BASE}.log"
        TEMPDIR="${BACKUPDIR}/${BASE}.tmp"
        # remove a stale snapshot left over from a previous run
        if lvm lvs /dev/pve/snap > "$LOGFILE" 2>&1; then
            lvm lvremove -f /dev/pve/snap
        fi
        lvm lvcreate -s -L 5G -n snap "$rootfs" >> "$LOGFILE"
        mkdir "$TEMPDIR"
        mount /dev/pve/snap "$TEMPDIR" >> "$LOGFILE"
        pushd "$TEMPDIR" >> "$LOGFILE"
        time tar zcf "../${BASE}.tar.gz" . >> "$LOGFILE"
        popd >> "$LOGFILE"
        umount /dev/pve/snap
        lvm lvremove -f /dev/pve/snap
        rmdir "$TEMPDIR"
    else
        echo "Lxc $lxc does not exist"
    fi
done

Of course, this script (I'm not very good at Perl) is just a test to show that it can be done, and it can certainly be improved. I still have no idea why something that had always worked no longer does. For the moment, I'll have to use this script as the base for my system backups.

Any idea or suggestion is welcome.

Regards.
 
