Proxmox Beta 9 - LVM Backend / Snapshots / Clones

Will the new 'Snapshots' feature on the LVM Backend also allow 'Clones' (specifically linked-clones)?
The current implementation would allow that in principle. The snapshot volume is a kind of linked clone already: a snapshot volume uses the old volume as a backing device, so any blocks that are not in the outermost snapshot are looked up in the chain of backing devices.
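For illustration (not necessarily what PVE does internally, and with made-up file names), this is the same backing-chain idea expressed with plain qemu-img commands:

qemu-img create -f qcow2 base.qcow2 10G
# the snapshot volume only stores new writes; unmodified blocks are read from base.qcow2
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 snap1.qcow2
# shows the whole chain of backing devices
qemu-img info --backing-chain snap1.qcow2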
 
Hi, I think that the option is not yet exposed in the GUI,

but you need to add "external-snapshots 1" to the LVM storage options in /etc/pve/storage.cfg.
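For reference, a minimal sketch of what such a storage.cfg entry could look like (the storage name and volume group are made-up examples; whether you want shared 1 depends on your setup):

lvm: lvmstorage
        vgname vg0
        content images
        shared 1
        external-snapshots 1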


Then, create an LVM disk with qcow2 format (not sure whether the format is exposed in the GUI yet either):

qm set <vmid> -scsi0 lvmstorage:<size>,format=qcow2
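If it worked, the disk should show up with format=qcow2 and a correspondingly named LV; a quick way to verify (VM ID 100 and volume group vg0 are just examples):

qm config 100 | grep scsi0      # should list the disk with format=qcow2
lvs vg0                         # the underlying LVs carry qcow2-style names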
 
I think that the option is not yet exposed in the gui,
It is exposed as an advanced option, like for LVM, but for directory storages one can only set it when adding a new storage, not change it for existing ones. IIRC this was done to avoid potential confusion with existing qcow2 volumes, which cannot happen on LVM because LVM did not support qcow2 before.

but you need to add "external-snapshots 1" in lvm storage option in /etc/pve/storage.cfg
FYI: the option got renamed to snapshot-as-volume-chain 1 now; not perfect, but IMO a bit closer to what it actually does.
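So on current builds the hypothetical storage.cfg entry sketched above would presumably look like this instead:

lvm: lvmstorage
        vgname vg0
        content images
        shared 1
        snapshot-as-volume-chain 1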
 
Thanks for implementing this feature (snapshots on shared thick LVM), Proxmox devs! I know I've had to explain to my colleagues in the past why, if we wanted to experiment with Proxmox to replace our VMWare infrastructure, we'd have to either also experiment with Ceph at the same time (to be able to get live migration and snapshots) or start experimenting with it on shared LUNs on the SAN we already own (but not be able to snapshot VMs).

I tried this today in a small, brand-new 9-BETA cluster, and I did get it to work; I can see the multiple generations of qcow2-named LVs as I take one snap after another. However, when I tried to roll back to a much earlier snap, it refused: it seems that it will only roll back to the last snapshot that was created, or at least the last one that still exists. Is this true, and is it a permanent restriction? Does the entire process only work on a linear chain of snapshots, with the only rollback allowed being one step backward, to the final link in the chain? And if I want to roll back several links, must I delete all the later snaps until the one I want to restore to is the final one?

Obviously one very common use of VM snaps is to provide just-in-case reverting during a patch window, like the following (a rough CLI sketch follows the list):
  1. take a snap
  2. try to patch item A
  3. if it goes badly, restore to "1" and try to do A again or differently
  4. take another snap
  5. try to patch item B
  6. again, if it blows up, restore to "4"
  7. take another snap
  8. try to patch C
  9. if it blows up, restore to "7"
  10. success, patching all items is done! so now delete all the snaps
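In PVE CLI terms, that linear workflow would look roughly like this (the VM ID and snapshot names are just examples):

qm snapshot 100 pre-A       # 1. take a snap
qm rollback 100 pre-A       # 3. patching A went badly, revert and try again
qm snapshot 100 pre-B       # 4. take another snap
qm rollback 100 pre-B       # 6. revert if B blows up
qm snapshot 100 pre-C       # 7. take another snap
qm rollback 100 pre-C       # 9. revert if C blows up
qm delsnapshot 100 pre-C    # 10. all done, delete the snaps again
qm delsnapshot 100 pre-B
qm delsnapshot 100 pre-A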
So PVE's new capability will support this workflow. But sometimes the snapshots are not all in one linear flow; sometimes I'm trying to troubleshoot some very complex set of interactions, and I want to be able to take snapshots 1, 2, 3, 4, and then temporarily roll back to "1" to try something else while keeping the 2,3,4 ones around, then forget that little distraction and again roll back (roll forward? restore?) to "4" and keep going. Or roll back to "2" and then start another branch from there, making 5, 6, 7 that are descended from "2" while 3,4 continue to be descended from "2". This sort of branched workflow is definitely supported in the VMWare world, but I've never really tried to do it in the PVE world, so I wonder if it's not supported at all? Or currently only supported on the file-based storages (like QCOW2-stored images on NFS)? Or supported in shared Ceph (either Ceph RBD or CephFS)?

Thanks very much for your time. I know end users are never satisfied, so no matter what brilliant new things you create, there will be someone complaining about them. Just know that I think y'all are amazing, you've made an incredible product, and I have been trying for years to convince my colleagues that we should move off VMWare, and I'm still trying.
 
Theoretically snapshot "trees" would be possible, but the handling becomes much more complex; in particular, the failure scenarios where an operation cannot be fully completed are messy. That's why we kept it simple for now. This might be revisited at a later point, but no promises.
 
Theoretically snapshot "trees" would be possible, but the handling becomes much more complex; in particular, the failure scenarios where an operation cannot be fully completed are messy. That's why we kept it simple for now. This might be revisited at a later point, but no promises.
Understood, thank you, Fabian. Pardon my ignorance, but do snapshot trees currently not work in thick LVM, but already work for other storage models, like Ceph RBD, CephFS, or directories over NFS? Or is this a general constraint in Proxmox more broadly, chains-but-not-branched-trees, and you're considering whether to tackle it in the snapshotting logic overall? Also I suppose the answer could be different for LXC containers versus KVM VMs, but I'm only concerned about full VMs here.
 
It depends on the storage plugin: thick LVM without qcow2 has no snapshot support at all in PVE. ZFS has the same limitation, with rollbacks only working for the most recent snapshot. LVM-thin, "internal" qcow2 on directory-based storages, and IIRC Ceph all don't have that restriction.
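For comparison, plain ZFS shows the same restriction: rolling back to anything but the most recent snapshot requires destroying the newer snapshots first, e.g. (dataset and snapshot names are made up):

zfs list -t snapshot rpool/data/vm-100-disk-0       # list snapshots, newest last
zfs rollback rpool/data/vm-100-disk-0@older         # fails while newer snapshots exist
zfs rollback -r rpool/data/vm-100-disk-0@older      # -r destroys the newer snapshots first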
 