On my old Proxmox instance I didn't use ZFS but directory storage (.vmdk / .qcow2 files). I ran VMs on which I excluded some drives from backup. When I restored a VM, the excluded disk was marked as unused but was still available to remount on that VM (after running qm rescan). I also had trouble with that, so I asked for help here: https://forum.proxmox.com/threads/adding-existing-disk-from-storage-to-vm.108645/
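For reference, the old VM workflow looked roughly like this. This is only a sketch; the VMID 100, the storage name local, and the disk filename are placeholders, not values from my setup:

```shell
# Rescan storage so Proxmox picks up disk images that exist on disk
# but are not referenced in the VM config; they show up as unusedN.
qm rescan --vmid 100

# Inspect the config: the excluded disk survives the restore and
# appears as an unused entry, e.g.
#   unused0: local:100/vm-100-disk-1.qcow2
qm config 100

# Reattach the unused disk to a free bus slot (here scsi1).
qm set 100 --scsi1 local:100/vm-100-disk-1.qcow2
```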
On my new Proxmox instance I use ZFS with containers, so my virtual hard drives are no longer files but ZFS dataset volumes. When I now exclude a disk/volume from backup and then restore a backup, my excluded volume is destroyed by Proxmox. See https://forum.proxmox.com/threads/z...ore-after-restoing-backup.141165/#post-632176
Trusting that Proxmox's architecture/behaviour is the same for ZFS volumes with containers as for directory storage with VMs has now cost me three days of work and a loss of data. I miss the behaviour described above for containers on ZFS.
So I request:
- either a checkbox / yes-no dialog ("Remove unused drives?" / "Remove drives excluded from backup?") on restore, or
- never deleting unused drives, but showing them as unused instead.