[SOLVED] LXC Snapshot greyed out on LVM-Thin

CharlesErickT

Member
Mar 15, 2017
I have a host version:

Kernel Version
Linux 4.4.44-1-pve #1 SMP PVE 4.4.44-84 (Thu, 9 Mar 2017 12:06:34 +0100)

Where the snapshot feature is greyed out. The containers are on an LVM-thin pool, so snapshots should work.

The only difference compared to the working hosts is the kernel version (4.4.19 vs. 4.4.44) and the fact that this host uses SSDs (if that makes a difference). The configs are the same and they are all part of the same cluster.

Is there something that could prevent snapshots on this one host?

Thanks
 

Please post the respective container and storage configurations and the complete "pveversion -v" output.
 
Here's the pveversion output

Code:
proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-96
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80

Turns out I found the issue. It seems to be related to the fact that I have a bind mount defined for my container using the following line in its config:
Code:
mp0: /mnt/dev-nas-1,mp=/mnt/Backups

Is there a way to make a snapshot but exclude the folder used in the bind mount?
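One possible workaround, as a sketch only: since the bind mount itself holds no container data, you could detach it, take the snapshot, and re-attach it. The container ID 101 and snapshot name are assumptions; adjust them and the mount options to your setup.

```shell
# Hypothetical workaround: detach the bind mount, snapshot, re-attach.
# CT ID 101 and snapshot name "presnap" are placeholders, not from this thread.
pct set 101 --delete mp0                            # remove the bind mount entry
pct snapshot 101 presnap                            # take the snapshot
pct set 101 --mp0 /mnt/dev-nas-1,mp=/mnt/Backups    # re-add the bind mount
```

Note the container config is briefly missing the mount point between the first and last command, so any backup or migration started in that window would not see it.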

Thanks
 
Hi,

I'm having problems trying to snapshot my newly created Ubuntu LXC container. It was installed using the 17.04 template and then upgraded to 17.10.

I don't have bind mounts that I know of:

Code:
root@ns2xxxx:~# pct config 110
arch: amd64
cores: 4
hostname: ubuntu2018
memory: 8192
net0: name=eth0,bridge=vmbr0,gw=xx.yy.zz.aa,hwaddr=06:11:22:33:44:55,ip=xx.yy.zz.bb/32,type=veth
ostype: ubuntu
rootfs: local:110/vm-110-disk-1.raw,size=50G
swap: 512

Code:
root@ns2xxxx:~# pveversion --verbose
proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

Anything else I should check or test? Thanks!
 
Your container uses a raw image on a directory storage as its volume, which has no snapshot support. You need to use a storage that supports snapshots for all volumes (e.g., LVM-thin, ZFS, Ceph, ...).
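As a hedged sketch (not from this thread): one way to move an existing container from directory storage onto a snapshot-capable storage is a backup-and-restore cycle. The storage name "local-lvm" and the backup file path are assumptions; substitute whichever LVM-thin/ZFS storage and dump file you actually have.

```shell
# Hypothetical sketch: move CT 110 onto LVM-thin storage via backup + restore.
# "local-lvm" and the tarball path are placeholders; check your dump directory
# for the exact filename vzdump produced.
vzdump 110 --storage backup --mode stop                     # full backup while stopped
pct restore 110 /vz/backup/dump/vzdump-lxc-110.tar --storage local-lvm --force
```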
 

Thanks for your reply. I just installed Proxmox 5.0 and then upgraded to 5.1, so I should be using LVM-thin by default, right?

How can I check whether I'm using LVM-thin, and if so, how can I create containers that use it?

Thanks!
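A few ways to check, as a sketch (these are standard PVE/LVM commands; the output will vary per system):

```shell
# Check which storages are defined and whether any of them is LVM-thin:
pvesm status                                 # lists defined storages with their types
grep -A3 '^lvmthin' /etc/pve/storage.cfg     # LVM-thin storages appear as "lvmthin:" blocks
lvs -o lv_name,vg_name,lv_attr               # thin pools show an attribute string starting with "t"
```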
 
Code:
root@nsxxxxxx:~# pvesm lvmthinscan pve
root@nsxxxxxx:~#

Does this mean that I don't have LVM-thin? I installed Proxmox 5 from OVH templates (not the ZFS beta template, but the standard one). I thought the default was LVM-thin...
 
Defaults in an OVH template are not necessarily PVE defaults. You can see in your /etc/pve/storage.cfg which storages are currently defined.
 
Code:
root@nsxxxxxx:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,images,iso,vztmpl
        maxfiles 0

dir: backup
        path /vz/backup
        content backup
        maxfiles 2
        shared 0

dir: backupDiarios
        path /vz/backupDiarios
        content backup
        maxfiles 7
        shared 0

This is not LVM-thin, right? Any way to "convert" to LVM-thin without reinstalling?
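Whether a conversion is possible depends on free space in the volume group. A hedged sketch, assuming a VG named "pve" with unallocated space (the VG name, pool name, size, and storage ID are all assumptions, and this fails if the disk is fully consumed by the root filesystem):

```shell
# Check for unallocated space in the volume group first:
vgs                                             # the "VFree" column shows free space
# If there is room, carve out a thin pool and register it as PVE storage:
lvcreate -L 100G -T pve/data                    # create a 100G thin pool named "data"
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content rootdir,images
```

New containers created on the "local-lvm" storage would then support snapshots; existing ones would still need to be moved onto it.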
 
Depends on how the disks are partitioned, but you will most likely be faster if you re-install.
 
