Hi,
This is an idea that makes a lot of sense to me, yet I can't find people talking about it, so it's probably a bad idea. Still, I would love to know why it's bad. Before anyone asks: no, I'm not planning on using this in production.
According to the docs, PVE supports shared LVM storage when connected over iSCSI or FC, which is great. But there is no snapshot support on it, and that makes sense since clustered LVM does not support snapshots. So far so good, but... what if we use an LV as a qcow2-formatted image and let the format handle the snapshots? AFAIK XenServer/XCP-ng does something similar with the VHD format.
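Just to be explicit about the target setup: what I have in mind is a plain LVM storage entry in /etc/pve/storage.cfg sitting on top of the iSCSI/FC LUN, something roughly like this (san-lvm and vg1 are made-up names, and my lab uses the same entry minus the shared flag):

lvm: san-lvm
        vgname vg1
        content images
        shared 1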
So I decided to do some testing in the lab with a single PVE node and a local (not shared) LVM volume group, and to my surprise I was able to do the following (a rough sketch of the in-guest commands follows the list):
- Boot the VM from the manually prepared LV+qcow2 disk (the preparation steps are listed further down)
- Partition the disk into 2 partitions
- mkfs.ext4 each partition
- Write data to each with multiple files and directories
- Take a snapshot
- Add new files and dirs
- Change existing file content
- Revert to the snapshot previously taken
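The in-guest part was nothing fancy, roughly this (device and file names are just examples, the exact data doesn't matter):

# inside the guest
parted -s /dev/sda mklabel gpt mkpart p1 ext4 1MiB 50% mkpart p2 ext4 50% 100%
mkfs.ext4 /dev/sda1
mkfs.ext4 /dev/sda2
mount /dev/sda1 /mnt
mkdir -p /mnt/dir1
echo "before the snapshot" > /mnt/dir1/file1
# <- VM shut down here, snapshot taken on the host, VM booted again
echo "changed after the snapshot" > /mnt/dir1/file1
touch /mnt/dir1/new-file
# after reverting the snapshot and booting again, file1 is back to
# "before the snapshot" and new-file is gone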
To get the VM to boot from the LV+qcow2 disk in the first place, I prepared it like this (a condensed command sequence is sketched after the list):
- Create a VM with a disk on the LVM storage, without starting it
- Activate the LV if needed with lvchange -ay /dev/vg1/vm-100-disk-0
- Put a qcow2 image directly on the LV with qemu-img create -f qcow2 /dev/vg1/vm-100-disk-0 32G
- Use qm showcmd 100 to see the command Proxmox would use to start the VM
- Change format=raw to format=qcow2 in the -drive parameter for that disk
- Start the VM manually with the modified command
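Condensed, the host-side preparation looks more or less like this (vg1 and VMID 100 are just my lab values, and the -drive line is a trimmed example of what qm showcmd prints, not the complete command):

# on the PVE host
lvchange -ay /dev/vg1/vm-100-disk-0                   # make sure the LV is active
qemu-img create -f qcow2 /dev/vg1/vm-100-disk-0 32G   # qcow2 directly on the block device
qm showcmd 100                                        # print the kvm command PVE would run
# in that command, change the disk's -drive option from format=raw to format=qcow2,
# e.g. something like:
#   -drive file=/dev/vg1/vm-100-disk-0,if=none,id=drive-scsi0,format=qcow2,cache=none
# and then paste and run the modified kvm command by hand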
To manage the snapshots I used:
- Create a snapshot: qemu-img snapshot -c snap1 /dev/vg1/vm-100-disk-0
- List the snapshots: qemu-img snapshot -l /dev/vg1/vm-100-disk-0
- Revert to a snapshot: qemu-img snapshot -a snap1 /dev/vg1/vm-100-disk-0
- Delete a snapshot: qemu-img snapshot -d snap1 /dev/vg1/vm-100-disk-0
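Putting it together, a full snapshot cycle looks like this. One caveat: qemu-img should only touch the image while the VM is powered off, and newer QEMU versions will refuse with a write-lock error if you try it against a running VM:

qm status 100                                        # should report: status: stopped
qemu-img snapshot -c snap1 /dev/vg1/vm-100-disk-0    # create
qemu-img snapshot -l /dev/vg1/vm-100-disk-0          # list (ID, TAG, VM SIZE, DATE, VM CLOCK)
qemu-img snapshot -a snap1 /dev/vg1/vm-100-disk-0    # revert
qemu-img snapshot -d snap1 /dev/vg1/vm-100-disk-0    # delete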