This statement is not correct! You need a storage which can do snapshots, e.g. lvm-thin, ZFS, Ceph, etc.
Please give a practical example, since this has been working exactly the same way here for years, and I have already imported lots of zpools.

This statement is not correct!
If you just import an existing ZFS pool with some datasets, PVE is unable to take a snapshot, even though ZFS itself has snapshot support.
Code:
root@pve:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
dataPool                      1.73T  5.39T   128K  none
dataPool/local                 205G  5.39T   139K  none
dataPool/local/mma             205G  5.39T   205G  /zPool/mma
dataPool/local/pve             128K  5.39T   128K  /pvePool
dataPool/local/users           420M  5.39T   139K  /zPool/users
dataPool/local/users/henning  29.0M  5.39T  29.0M  /zPool/users/henning
dataPool/local/users/moni      391M  5.39T   391M  /zPool/users/moni
dataPool/remote               1.53T  5.39T   181K  /rPool
dataPool/remote/Backups        198G  5.39T   198G  /rPool/Backups
dataPool/remote/Daten-FAM     2.07G  5.39T  2.07G  /rPool/Daten-FAM
dataPool/remote/Daten-SEC     11.2G  5.39T  11.2G  /rPool/Daten-SEC
dataPool/remote/Medien        1.19T  5.39T  1.19T  /rPool/Medien
dataPool/remote/Projekte      16.8G  5.39T  16.8G  /rPool/Projekte
dataPool/remote/Software       120G  5.39T   120G  /rPool/Software
dataPool/remote/Tmp           1.91G  5.39T  1.91G  /rPool/Tmp
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
	disable
	path /var/lib/vz
	content vztmpl,iso,backup

btrfs: local-btrfs
	path /var/lib/pve/local-btrfs
	content images,rootdir,backup,vztmpl,iso

zfspool: pvePool
	pool dataPool/local/pve
	content rootdir,images
	sparse 1

zfspool: zPool
	pool dataPool/local
	sparse 1
From the "Storage Features" section of the documentation: "... so you can simply set defaults on the parent dataset"
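For example, setting defaults on the parent dataset means the subvolumes PVE creates below it inherit those properties automatically (a sketch; the dataset name is the one from the pool above, the property values are just examples):

```shell
# Set defaults on the parent dataset; every subvol-*-disk-* dataset
# PVE creates below dataPool/local inherits them automatically.
zfs set compression=zstd dataPool/local
zfs set atime=off dataPool/local
# Verify: the SOURCE column shows "inherited from dataPool/local"
zfs get -r compression dataPool/local
```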
Code:
mp1: unable to hotplug mp1: zfs error: cannot mount 'dataPool/local/subvol-205-disk-0': no mountpoint set
Code:
zfs create dataPool/local/media
zfs set mountpoint=/zPool/media dataPool/local/media
Code:
root@pve:~# cat /etc/pve/lxc/205.conf | grep mp0
mp0: zPool:media,mp=/mymedia,acl=1,replicate=0,size=8G

TASK ERROR: unable to parse zfs volume name 'media'
Code:
root@pve:~# cat /etc/pve/lxc/205.conf | grep mp0
mp0: /zPool/media,mp=/mymedia,replicate=0
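The same bind mount can also be added with `pct set` instead of editing the config file by hand (a sketch; the VMID and paths are the ones from this thread):

```shell
# An absolute host path (instead of storage:volume) makes this a bind
# mount: PVE does not manage it, so it gets no size and no snapshots.
pct set 205 -mp0 /zPool/media,mp=/mymedia,replicate=0
```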
Code:
v-machines/backuppc  532G  168G  532G  /v-machines/backuppc
Code:
arch: amd64
cores: 8
cpuunits: 900
description: ### Backuppc%0AOS%3A Ubuntu 20.04
hostname: backuppc.tux.lan
memory: 1024
mp0: /v-machines/backuppc,mp=/srv/backuppc,replicate=0
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=EE:AE:22:34:DA:16,ip=dhcp,ip6=auto,type=veth
onboot: 1
ostype: ubuntu
rootfs: SSD-secure:subvol-121-disk-0,size=8G
swap: 512
Code:
zfs snapshot v-machines/backuppc@blabla
pct snapshot <vmid> <snapname>
This is the way it works: short ZFS commands, easy to use. If I have such mountpoints, I do all the snapshots via the CLI. Here is my documentation about it: https://deepdoc.at/dokuwiki/doku.ph...:linux_zfs#zfs_snapshots_und_deren_verwaltung

You are talking about fiddling with the system. An error-prone way.
Of course it's possible to take ZFS snapshots this way. But these snapshots are not in sync with other snapshots, nor with the GUI.
You have to shut down the container. If there is, e.g., a btrfs and a ZFS mountpoint, you have to take both snapshots by hand.
Then restart the container.
If you want to roll back, you have to do all of this by hand again.
We would like to use the GUI, or at least the Proxmox commands:
Code:
pct snapshot <vmid> <snapname>
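What that manual procedure looks like in practice (a sketch; the container ID, dataset, and snapshot names are illustrative examples, not taken from the thread):

```shell
# Manual, out-of-band snapshot of a bind-mounted ZFS dataset:
pct stop 205                                   # container must be down
zfs snapshot dataPool/local/media@pre-upgrade  # snapshot the dataset by hand
# ...snapshot any btrfs mountpoints with btrfs tools, also by hand...
pct start 205

# Rollback is just as manual:
pct stop 205
zfs rollback dataPool/local/media@pre-upgrade
pct start 205
```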
PVE can only (safely) snapshot volumes it manages (and obviously only where the underlying storage supports taking snapshots). If you pass in arbitrary file system paths as bind mounts, those are not managed by PVE, so they cannot be snapshotted (or modified in other ways that require knowledge of, and complete control over, the underlying storage).

But again: the PVE GUI gives the impression that snapshots are supported out of the box. But that's true only for very special use cases.
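One way to stay inside what PVE manages is to let it allocate the mountpoint volume on a snapshot-capable storage instead of bind-mounting a path (a sketch reusing the `pvePool` zfspool storage and container 205 from above; `storage:size` tells `pct` to allocate a new volume of that many GiB):

```shell
# PVE allocates a managed subvolume (e.g. subvol-205-disk-1) on the
# zfspool storage "pvePool"; a managed volume like this can be
# snapshotted from the GUI or via pct.
pct set 205 -mp0 pvePool:8,mp=/mymedia
pct snapshot 205 test1
```

The trade-off is that the data then has to live inside the PVE-allocated subvolume rather than in a pre-existing dataset.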