[SOLVED] Storage configuration for Snapshots and RAW files.

ProxHex01

New Member
Jan 26, 2025
Hello,

Is it possible to use a different location to store raw files and snapshot files?
I mean, I have two SSDs: 2 TB and 1 TB.
I would like to use the 2 TB to store all my VMs (LVM, raw files) and use the 1 TB only for snapshot files and backup files.

Is it possible, given that my 2 TB uses the raw file format and my 1 TB uses the qcow2 file format?

Thanks,
 
Actually, as far as I know, you can only do snapshots stored on the same volume (LVM, ZFS, Btrfs) or in the same file (qcow2, with internal snapshots).
There is work in progress to implement external snapshot support (see the pve-devel ML), but it is limited (for example, it doesn't support raw files as a base), and I haven't checked in depth whether it supports storing external snapshots on a different storage.
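In practice that means snapshot-capable VM storage has to be one of those types. A sketch of what /etc/pve/storage.cfg could look like for this kind of split (storage names, paths, and the prune setting here are hypothetical; LVM-thin keeps snapshots on the same volume group as the VM disks, while a directory storage on the second SSD holds backups and ISOs):

```
lvmthin: vmdata
        thinpool data
        vgname vmdata
        content images,rootdir

dir: backup1tb
        path /mnt/backup1tb
        content backup,iso
        prune-backups keep-last=3
```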
 
So I must erase the 2 TB SSD and choose a ZFS volume?
It's not a problem if I have to recreate all my VMs.

I wanted to use my biggest SSD (2 TB) to store VM files, my 1 TB to store snapshots and backups, and my second 1 TB to store Proxmox itself (and ISO files).
 
Maybe I didn't explain it well, but as far as I know snapshots are only supported on the same volume for the storage types listed, so you couldn't create a volume on the 2 TB disk for the VM disks but have the snapshots on another volume on another disk.
 
I would not say it's impossible, but if the 2 TB dies, your snapshots on the 1 TB are useless too; and furthermore, if a PBS VM on the 2 TB is gone as well, your backups on the 1 TB aren't even usable for a restore. Careful planning before you set anything up will pay off later in the event of any hardware failure (or software failure, like a broken or buggy update). Good luck.
 
So,

I transferred all my VMs to one of my SSDs, formatted my second SSD, and then transferred them back.
Now I can use the snapshot function.

Thank you guys for your help.
 
Just an example of how to split the snapshot storage destination from the primary pool with LVM:

To set up a snapback process, you need a local LV with a snapshot whose contents have been sent to a remote server, perhaps something like this:
lvcreate --snapshot -L10G -n somevm-snapback vmsrv1/somevm
dd if=/dev/vmsrv1/somevm-snapback bs=1M | pv -ptrb | \
ssh root@vmsrv2 dd of=/dev/vmsrv2/somevm

Now, you can run something like the following periodically (say, out of cron each hour):
lvcreate --snapshot -L10G -n somevm-snapback-new vmsrv1/somevm
lvmsync /dev/vmsrv1/somevm-snapback vmsrv2:/dev/vmsrv2/somevm --snapback \
/var/snapbacks/somevm.$(date +%Y%m%d-%H%M)
lvremove -f vmsrv1/somevm-snapback
lvrename vmsrv1/somevm-snapback-new somevm-snapback
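The date-stamped snapback files under /var/snapbacks will pile up over time, so the same cron job can prune the oldest ones. A minimal sketch (the function name and the retention count are my own invention, and the negative `head -n -N` form is GNU coreutils):

```shell
# prune_snapbacks DIR KEEP -- delete all but the newest KEEP files in DIR.
# Works because names like somevm.20250101-0100 sort chronologically.
prune_snapbacks() {
    dir=$1; keep=$2
    ls -1 "$dir" | sort | head -n -"$keep" | while read -r f; do
        rm -- "$dir/$f"
    done
}
```

Called after the lvrename step, e.g. `prune_snapbacks /var/snapbacks 24` to keep roughly a day's worth of hourly snapbacks.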

But wait, there's more! lvmsync also has the ability to dump out the snapshot data to disk, rather than immediately applying it to another block device.
For example, if you just wanted to take a copy of the contents of a snapshot, you could do something like this:
lvmsync --stdout /dev/somevg/somelv-snapshot >~/somechanges

At a later date, if you wanted to apply those writes to a block device, you'd do it like this:
lvmsync --apply ~/somechanges /dev/somevg/someotherlv
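Under the hood this dump-and-apply cycle is just "record which regions changed, replay them later". A toy pure-shell illustration of the same idea using cmp and dd instead of lvmsync (all file names are hypothetical, and the byte-at-a-time dd is purely for demonstration):

```shell
# Toy dump-and-apply: find the bytes that differ between an old and a
# new image with cmp, then replay only those bytes onto a copy of the
# old image, reconstructing the new one.
tdir=$(mktemp -d)
old=$tdir/old.img; new=$tdir/new.img; dest=$tdir/dest.img
printf 'AAAAAAAA' > "$old"
printf 'AABAAACA' > "$new"
cp "$old" "$dest"
# cmp -l prints: <1-based offset> <old byte, octal> <new byte, octal>
cmp -l "$old" "$new" | while read -r off _ newoct; do
    # turn the octal value back into a byte and write it in place
    printf "$(printf '\\%s' "$newoct")" |
        dd of="$dest" bs=1 seek=$((off - 1)) conv=notrunc status=none
done
cmp -s "$new" "$dest" && echo "dest now matches new"
```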

You can also do things like run lvmsync from the destination -- this is useful if (for example) you can SSH from the destination to the source machine, but not the other way around (firewalls, how do they work?). You could do this by running something like the following on the destination machine:
ssh srcmachine lvmsync --stdout /dev/srcvg/srclv-snap | lvmsync --apply - /dev/destvg/destlv
 