LXC snapshot not available yet? Will it work on LVM?

Plain LVM snapshots are not supported in 4.x (they have a lot of issues). LVM-thin supports snapshots (for backups and otherwise). Could you post the config of the VM/CT in question and the output of pveversion -v?
 
Sorry for the delay. Looking at your answer, I found one thing I hadn't considered as part of the problem: the config. As rootfs I was using /dev/pve/vm-XXX, which worked for running the container, but vzdump seems to expect <storage>:<disk> notation (and <disk> can't be "vm-XXX", it has to be "vm-XXX-<something>"), and with that it works. Of course, I had to lvrename all the disks and change the config to match the new names. BTW, it would be great to have the option in the UI to set <something> to something other than "disk-1". In my case, I've chosen "rootfs", because I usually have mount points and want the system (rootfs) separated from the data stored in each container, in order to reduce backup time and file size.
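
For illustration, the rename plus config change might look like this (a rough sketch, not the exact commands from my history; it assumes container ID 122, volume group "pve", and an lvmthin storage named "thin"):
Code:
lvrename pve/vm-122 pve/vm-122-rootfs
# then in /etc/pve/lxc/122.conf replace
#   rootfs: /dev/pve/vm-122
# with
#   rootfs: thin:vm-122-rootfs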

Hope this helps. Thanks a lot.
 
Hi again. I'm back, not because of snapshot mode in backups, which works, but because of the "take a snapshot" feature used to roll back to a previous state. While all containers are stored in an LVM-thin pool and backup snapshots are working, the "take a snapshot" feature only works for a few containers. For most of them I get a "snapshot feature is not available" error, and I can't figure out why.

What are the requirements for the "take a snapshot" feature to work? TIA
 
You need a storage which can do snapshots, e.g. lvm-thin, zfs, ceph, etc.
 
please post "pveversion -v", your storage configuration ("/etc/pve/storage.cfg") and the configuration of one of the failing containers ("pct config XXX" where XXX is the ID)
 
OK, here is what you requested:

# pveversion -v
proxmox-ve: 4.2-64 (running kernel: 4.4.15-1-pve)
pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.16-1-pve: 4.4.16-64
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-44
qemu-server: 4.0-86
pve-firmware: 1.1-9
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-57
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-2
pve-container: 1.0-73
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: not correctly installed
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8

# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
maxfiles 0
content iso,images,rootdir,vztmpl

dir: backups
path /var/lib/vz/vzdump
maxfiles 3
content backup
shared

lvmthin: thin
thinpool thin
vgname pve
content rootdir,images

One of the many non-working containers for snapshot:
# pct config 122
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: pads
memory: 4096
mp0: /var/lib/vz/svn/122,mp=/var/backups/svn
net0: name=eth0,bridge=vmbr0,gw=10.38.52.1,hwaddr=6A:F0:3C:66:70:4B,ip=10.38.52.122/24,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=76:B7:FC:FA:4B:01,ip=192.168.52.122/24,type=veth
ostype: debian
rootfs: thin:vm-122-rootfs
swap: 2048

Anyway, it would be nice to have better documentation of all the requirements (i.e., what "pct snapshot" checks) that lead to the "snapshot feature is not available" error.

Regards.
 
"pct snapshot" simply checks if all the configured mountpoints (including rootfs) support snapshots. "supports snapshots" means that it is configured with a storage plugin that has the snapshot feature available in proxmox.

in this case, "mp0: /var/lib/vz/svn/122,mp=/var/backups/svn" is a bind mount, which cannot support snapshots (it's a bind mount, so we don't interact with the storage layer itself, there is no storage plugin that is used). there is an open enhancement request in our bug tracker to allow excluding bind mounts from snapshots (https://bugzilla.proxmox.com/show_bug.cgi?id=1007), but this has not yet been implemented.
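
for illustration, the failing check from the original post would surface like this on the CLI (a hypothetical run; the exact error wording can differ between versions):
Code:
root@pve:~# pct snapshot 122 test
snapshot feature is not available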
 
Ok, then the workaround is easy: back up the conf file, comment out the mount points (mp) in the original, and that's all. Then, once the snapshot is no longer needed, restore the mps or the backup file. I've just tested it and it works.
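
An equivalent, slightly more scriptable variant uses pct set instead of hand-editing the file (a sketch, assuming container 122 and the mp0 line from the config above; note the current mp0 value with "pct config 122" before deleting it):
Code:
pct set 122 --delete mp0
pct snapshot 122 mysnap
# re-add the bind mount afterwards; the bind-mounted data is not part of the snapshot
pct set 122 --mp0 /var/lib/vz/svn/122,mp=/var/backups/svn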
 
This statement is not correct!
If you just import an existing ZFS pool with some datasets, pve is unable to take a snapshot, even though ZFS has snapshot support.
 
Please give a practical example, since this has been working exactly the same way here for years, and I have already imported lots of zpools.
 
Ok,
let's assume we just did an import of an existing dataset:

Code:
root@pve:~# zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
dataPool                      1.73T  5.39T      128K  none
dataPool/local                 205G  5.39T      139K  none
dataPool/local/mma             205G  5.39T      205G  /zPool/mma
dataPool/local/pve             128K  5.39T      128K  /pvePool
dataPool/local/users           420M  5.39T      139K  /zPool/users
dataPool/local/users/henning  29.0M  5.39T     29.0M  /zPool/users/henning
dataPool/local/users/moni      391M  5.39T      391M  /zPool/users/moni
dataPool/remote               1.53T  5.39T      181K  /rPool
dataPool/remote/Backups        198G  5.39T      198G  /rPool/Backups
dataPool/remote/Daten-FAM     2.07G  5.39T     2.07G  /rPool/Daten-FAM
dataPool/remote/Daten-SEC     11.2G  5.39T     11.2G  /rPool/Daten-SEC
dataPool/remote/Medien        1.19T  5.39T     1.19T  /rPool/Medien
dataPool/remote/Projekte      16.8G  5.39T     16.8G  /rPool/Projekte
dataPool/remote/Software       120G  5.39T      120G  /rPool/Software
dataPool/remote/Tmp           1.91G  5.39T     1.91G  /rPool/Tmp

To make it visible inside the pve GUI, create some entries in storage.cfg:
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content vztmpl,iso,backup

btrfs: local-btrfs
        path /var/lib/pve/local-btrfs
        content images,rootdir,backup,vztmpl,iso

zfspool: pvePool
        pool dataPool/local/pve
        content rootdir,images
        sparse 1

zfspool: zPool
        pool dataPool/local
        sparse 1

Now let's try to create a new mountpoint/dataset for an existing unprivileged LXC container.
According to the wiki, setting defaults on the parent dataset is optional, not required:
Code:
Storage Features
... so you can simply set defaults on the parent dataset
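
Note that in the zfs list output above, dataPool/local has no mountpoint set (MOUNTPOINT is "none"), so child datasets do not inherit a usable one. Setting such a default on the parent, as the wiki suggests, would be something like (a sketch; the target path is an assumption):
Code:
zfs set mountpoint=/zPool dataPool/local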

But if you try to create a mountpoint via the GUI, you will get the following error:
Code:
mp1: unable to hotplug mp1: zfs error: cannot mount 'dataPool/local/subvol-205-disk-0' no mountpoint set

And of course, my intention was not to create a subvolume or a raw image, but only a simple dataset.

So let's create it by hand and set the mountpoint
Code:
zfs create dataPool/local/media
zfs set mountpoint=/zPool/media dataPool/local/media

Now we need to add the corresponding mount entry to the container
Code:
root@pve:~# cat /etc/pve/lxc/205.conf |grep mp0
mp0: zPool:media,mp=/mymedia,acl=1,replicate=0,size=8G

But oops, the container would not boot anymore:
Code:
TASK ERROR: unable to parse zfs volume name 'media'

Ok, according to the wiki, pve only supports "rootdir,images" as content types. No datasets!
So there is no way to use a native zfs dataset as a mountpoint, except as a bind mount.
Code:
root@pve:~# cat /etc/pve/lxc/205.conf |grep mp0
mp0: /zPool/media,mp=/mymedia,replicate=0

But when doing this, there is no way to snapshot the LXC container anymore, regardless of whether the underlying file system (like btrfs or zfs) is able to do snapshots or not.

Conclusion:
PVE does not support snapshotting of zfs datasets, since "dataset" is not a supported type.

That wouldn't be so bad if it were possible to ignore bind mounts for snapshotting. But there has been no solution for more than 5 years.

So I had to remove the mountpoint, take a snapshot, and then restore the mountpoint in the container config again :(

Henning
 
Something must be different on your system. I've been doing it that way for years, also at customer installations.

My dataset:
Code:
v-machines/backuppc                        532G   168G      532G  /v-machines/backuppc

My vm config:
Code:
arch: amd64
cores: 8
cpuunits: 900
description: ### Backuppc%0AOS%3A Ubuntu 20.04
hostname: backuppc.tux.lan
memory: 1024
mp0: /v-machines/backuppc,mp=/srv/backuppc,replicate=0
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=EE:AE:22:34:DA:16,ip=dhcp,ip6=auto,type=veth
onboot: 1
ostype: ubuntu
rootfs: SSD-secure:subvol-121-disk-0,size=8G
swap: 512
It runs normally, and I can take snapshots via
Code:
zfs snapshot v-machines/backuppc@blabla
 
You are talking about fiddling with the system. An error-prone way. :(

Of course it's possible to take zfs snapshots this way. But these snapshots are not in sync with other snapshots, nor with the GUI.

You have to shut down the container. If there is, e.g., both a btrfs and a zfs mountpoint, you have to take both snapshots by hand.
Then restart the container.
If you want to roll back, you have to do this by hand again.

We would like to use the GUI, or at least the Proxmox commands...
Code:
pct snapshot <vmid> <snapname>
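
For comparison, the manual per-filesystem route described above would be roughly (illustrative only; the dataset and subvolume paths are assumptions based on the earlier posts):
Code:
pct shutdown 205
zfs snapshot dataPool/local/media@mysnap
btrfs subvolume snapshot -r /var/lib/pve/local-btrfs/images/205 /var/lib/pve/local-btrfs/snap-205
pct start 205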
 
This is the way it works. Short ZFS commands, easy to use. If I have such mountpoints, I do all the snapshots via the command line. Here is my documentation about it: https://deepdoc.at/dokuwiki/doku.ph...:linux_zfs#zfs_snapshots_und_deren_verwaltung
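
The snapshot lifecycle referred to there boils down to a handful of standard ZFS commands (using the example dataset from the post above):
Code:
zfs snapshot v-machines/backuppc@blabla        # create
zfs list -t snapshot -r v-machines/backuppc    # list
zfs rollback v-machines/backuppc@blabla        # roll back to it
zfs destroy v-machines/backuppc@blabla         # delete it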
 
I've been using similar functions on my Manjaro machines for years now.

But again: the pve GUI gives the impression that snapshots are supported out of the box, but that's only true for very special use cases.
Since pve is not only used by professionals, I do not see why there has been no simple switch to exclude mountpoints from snapshotting for years now.
In fact this option would be the same as stopping a container, commenting out the mountpoints, taking a snapshot with "pct snapshot", and removing the comments afterwards.

And the statement about zfs snapshots is only correct if you specify the conditions under which it holds.
Without those conditions the statement is simply wrong, since we are talking about pve in a pve forum ...
 
PVE can only (safely) snapshot volumes it manages (and obviously only where the underlying storage supports taking snapshots). If you pass in arbitrary file system paths as bind mounts, those are not managed by PVE, so they also cannot be snapshotted (or modified in other ways that require knowledge of and complete control over the underlying storage).
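
to make the distinction concrete, compare the two kinds of mount point entries seen in this thread (the volume name in the first line is a hypothetical example):
Code:
# managed volume on a PVE storage -> snapshot-capable if the storage supports it
mp0: thin:vm-122-disk-1,mp=/data
# bind mount of an arbitrary host path -> not managed by PVE, blocks snapshots
mp0: /zPool/media,mp=/mymedia,replicate=0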
 
@fabian
I know. I only said that the statement "You need a storage which can do snapshots, e.g. lvm-thin, zfs, ceph, etc." is not correct without specifying the conditions.

And of course I find it sad that there has been an open feature request for about 6 years.
Paraphrased: "Add a parameter to exclude mountpoints from snapshotting (regardless of file system type), instead of fully preventing pve from doing a snapshot, e.g. with bind mounts."
 
