ZFS does not allow non-linear snapshot restoration (rolling back to an older snapshot destroys every snapshot taken after it), which is a bummer. qcow2 is slower than ZFS and harder on SSDs, which is also a bummer.
I am considering working around this by converting any VM I think is likely to need non-linear snapshots to qcow2, doing the work, then backing up the box, immediately stopping it, and restoring the backup with my ZFS pool selected as the "Storage" option.
Is there a better idea? Is the performance gained by using zfs volumes for the disks really worth not just using qcow2 all the time?
Usually I would want this under fairly predictable conditions: if I were trying something totally new, if something seemed hopelessly broken, or if I'd already used my one ZFS snapshot and realized it wasn't going to be enough (and knew what I'd done since then, so wouldn't mind rewinding and starting over). In all cases, VM downtime is not much of an issue.
What I have now is a proxmox-multiuse directory on my zfs (called BigDrive here) which I made like this (thanks to some forum help: https://forum.proxmox.com/threads/add-backup-ability-to-main-storage-drive-newbie-question.145598/):
zp=BigDrive
myds=proxmox-multiuse
zfs create \
  -o atime=off -o compression=lz4 -o recordsize=1024k \
  "$zp/$myds" || exit 99
(Then: Datacenter -> Storage -> Add -> Directory)
This lets me put .qcow2, .raw, or .vmdk files into BigDrive/proxmox-multiuse (BigDrive is otherwise just ZFS to me).
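For reference, the Directory storage that step creates ends up as an entry in /etc/pve/storage.cfg roughly like this (a sketch using the names from this post, not copied from a real config; your content types may differ):

```
dir: proxmox-multiuse
        path /BigDrive/proxmox-multiuse
        content images,backup
```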
On determining that a box needed heavy enough lifting to require non-linear snapshot restoration, I would run:
mkdir /BigDrive/proxmox-multiuse/images/100
qemu-img convert -O qcow2 -f raw /dev/zvol/BigDrive/vm-100-disk-0 /BigDrive/proxmox-multiuse/images/100/vm-100-disk-0.qcow2
qm disk rescan --vmid 100
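The three commands above can be parameterized so the VMID and disk number aren't hard-coded. A minimal sketch, assuming the pool and dataset names used here; it only prints the commands for review, so nothing actually runs until you pipe the output to sh on the Proxmox host:

```shell
#!/bin/sh
# Print the zvol -> qcow2 conversion commands for a given VMID and disk number.
# Pool/dataset names are the ones from this post; adjust to taste.
zp=BigDrive
myds=proxmox-multiuse

convert_cmds() {
    vmid=$1
    disk=${2:-0}
    dstdir=/$zp/$myds/images/$vmid
    printf '%s\n' \
        "mkdir -p $dstdir" \
        "qemu-img convert -f raw -O qcow2 /dev/zvol/$zp/vm-$vmid-disk-$disk $dstdir/vm-$vmid-disk-$disk.qcow2" \
        "qm disk rescan --vmid $vmid"
}

convert_cmds 100   # review the output, then: convert_cmds 100 | sh
```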
I would probably use the GUI to then stop the VM, detach the original disk, attach the new one, check the boot order, and start the VM again, but I guess this would do it too:
qm config 100 | grep size
(to determine the drive type and number; scsi0 assumed below)
qm stop 100
qm set 100 --scsi0 proxmox-multiuse:100/vm-100-disk-0.qcow2
qm set 100 --boot order=scsi0
qm start 100
Once I'm done taking snapshots and rolling back indiscriminately, I would then back up the VM:
The VM -> Backup -> Backup Now (check "Protected") -> Backup
Then stop the VM, "Restore" the backup with the original BigDrive (not proxmox-multiuse) selected for "Storage", and start the VM.
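That backup-and-restore round trip can also be scripted. A sketch in the same print-for-review style, assuming vzdump writes to the proxmox-multiuse directory storage and that your PVE version's vzdump/qmrestore accept these flags (worth checking against man vzdump first); the <archive> placeholder stands for whatever file vzdump actually writes:

```shell
#!/bin/sh
# Print the backup-then-restore-to-ZFS commands for a given VMID.
# Storage names are the ones from this post; <archive> is a placeholder
# you would replace with the dump file vzdump reports it created.
backup_restore_cmds() {
    vmid=$1
    printf '%s\n' \
        "vzdump $vmid --storage proxmox-multiuse --mode stop --protected 1" \
        "qm stop $vmid" \
        "qmrestore /BigDrive/proxmox-multiuse/dump/<archive> $vmid --storage BigDrive --force 1" \
        "qm start $vmid"
}

backup_restore_cmds 100   # review, fill in the <archive> placeholder, then run by hand
```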
It's cumbersome and it's not perfect, since I'm now starting with a box that's been rebooted twice since I started dealing with whatever the problem is (which in certain cases might itself be an issue), but that's the best idea I have so far.
I just wonder if there's a better way?
Thank you.
RJD