[SOLVED] Restore snapshot to another VM

Lucas Rey

Good morning, is it possible to restore a snapshot taken from one VM to another VM?
This is my situation:

Code:
zfs list -t snapshot
NAME                                      USED  AVAIL     REFER  MOUNTPOINT
rpool/data/vm-170-disk-0@SNAP_OK         1.64G      -     19.6G  -
rpool/data/vm-170-disk-0@SNAP_PRE_UPGR   1.19G      -     19.6G  -
rpool/data/vm-170-disk-0@SNAP_POST_UPGR  34.5M      -     20.1G  -

I would like to restore one of these old VM 170 snapshots to VM 180, but I got an error:

Code:
# qm rollback 180 rpool/data/vm-170-disk-0@SNAP_PRE_UPGR
400 Parameter verification failed.
snapname: invalid format - invalid configuration ID 'rpool/data/vm-170-disk-0@SNAP_PRE_UPGR'
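
For what it's worth, qm rollback seems to expect just a snapshot name rather than a ZFS dataset path, and it always targets the given VM itself, e.g.:

Code:
# this would roll back VM 170 itself, which is not what I want
qm rollback 170 SNAP_PRE_UPGR

So a plain rollback won't get the old data onto VM 180.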

Basically, I took a snapshot before upgrading the system (rpool/data/vm-170-disk-0@SNAP_PRE_UPGR) and another one (the current one) after the system was upgraded (rpool/data/vm-170-disk-0@SNAP_POST_UPGR).

I need to temporarily access a previous snapshot to retrieve some data, while keeping the system in its current state. So my idea was to restore the snapshot to another, temporary VM, access the data there, and then delete the temporary VM and continue working with the upgraded VM as usual.

Is this feasible? Is there any other way?

Thank you
Lucas
 
Hi,
you should be able to clone from the snapshot. Select your guest > More > Clone in the UI or use the --snapname option for qm clone.
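
For example, something like the following (the new VM ID 180 and the name are just placeholders):

Code:
# create a new VM 180 from VM 170's SNAP_PRE_UPGR snapshot
qm clone 170 180 --snapname SNAP_PRE_UPGR --name temp-restore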
 
Hi Fabian, thank you for the reply. Unfortunately, when I tried to clone from an old snapshot (other than "current"), I got:
Code:
Full clone feature is not supported for drive 'scsi0' (500)

Code:
# cat /etc/pve/qemu-server/170.conf
boot: order=scsi0
cores: 2
memory: 1024
name: AdG
net0: virtio=FE:27:13:B4:28:B9,bridge=vmbr5
numa: 0
onboot: 1
ostype: l26
parent: SNAP_POST_UPGR
scsi0: local-zfs:vm-170-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=14255f2d-29e9-4bed-9879-f70f2d0395cc
sockets: 1
startup: order=1
vmgenid: 12efa4a0-6497-490f-8a35-67b251d92de0



Code:
# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-4
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1
 
Yes, unfortunately this is not implemented in our storage layer for ZFS yet. You can try to copy it yourself: create a dummy VM (I'm using ID 180 below, like you did) without disks, then copy the snapshot's data and attach it (the -p flag preserves the dataset's properties, e.g. the volume size):
Code:
# copy the snapshot's data into a new dataset for VM 180
zfs send -p rpool/data/vm-170-disk-0@SNAP_PRE_UPGR | zfs receive rpool/data/vm-180-disk-0
# attach the new dataset to VM 180 as its scsi0 disk
qm set 180 -scsi0 local-zfs:vm-180-disk-0
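
Once you have retrieved the data, the temporary VM can be removed again; note that qm destroy also deletes the disks owned by the VM:

Code:
# remove the temporary VM 180 together with its copied disk
qm destroy 180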
 
Thank you very much, it seems to be working fine!
 
