Cloning a VM from snapshot - not working

Ivan Dimitrov

Hello, I am getting the error "Full clone feature is not available at /usr/share/perl5/PVE/API2/Qemu.pm line 2441. (500)" when trying to clone a VM while selecting a different snapshot than the current one as the base for the new VM.
Cloning works as expected when I choose "current" as the source snapshot.
I am running the latest Proxmox 5 beta. Unfortunately I am not sure whether this worked in 4.x or whether it is something that came with the 5.0 code.
Attachment: media-20170414.jpg
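For reference, the same operation can be triggered on the CLI with qm clone. A rough equivalent of the GUI action (VM ID 122 as asked about below; the new VM ID and the snapshot name "snap1" are placeholders, not taken from the screenshot):
Code:
# full clone from a named snapshot instead of the current state
qm clone 122 9122 --full 1 --snapname snap1
# on a storage plugin without snapshot-copy support this aborts with:
#   Full clone feature is not available at /usr/share/perl5/PVE/API2/Qemu.pm line 2441.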
 
What is the config of VM 122, and how does the storage config look?
 
Cloning from a snapshot is not available on all storage types.

The storage plugin needs to have snap => 1 in its copy feature map.
(Maybe you are using ZFS?)

grep -r copy /usr/share/perl5/PVE/Storage/
/usr/share/perl5/PVE/Storage/RBDPlugin.pm: copy => { base => 1, current => 1, snap => 1},
/usr/share/perl5/PVE/Storage/ZFSPlugin.pm: copy => { base => 1, current => 1},
/usr/share/perl5/PVE/Storage/DRBDPlugin.pm: copy => { base => 1, current => 1},
/usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm: copy => { base => 1, current => 1},
/usr/share/perl5/PVE/Storage/ISCSIDirectPlugin.pm: copy => { current => 1},
/usr/share/perl5/PVE/Storage/LunCmd/Iet.pm:# 2) ssh-copy-id <ip_of_iscsi_storage>
/usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm:# 6. On one of the proxmox nodes login as root and run: ssh-copy-id ip_freebsd_host
/usr/share/perl5/PVE/Storage/SheepdogPlugin.pm: copy => { base => 1, current => 1, snap => 1},
/usr/share/perl5/PVE/Storage/Plugin.pm: copy => { base => {qcow2 => 1, raw => 1, vmdk => 1},
/usr/share/perl5/PVE/Storage/LvmThinPlugin.pm: copy => { base => 1, current => 1, snap => 1},
/usr/share/perl5/PVE/Storage/Custom/NetappPlugin.pm: copy => { base => 1, current => 1, snap => 1},
/usr/share/perl5/PVE/Storage/LVMPlugin.pm: copy => { base => 1, current => 1},
/usr/share/perl5/PVE/Storage/ISCSIPlugin.pm: copy => { current => 1},
 
Hi.
Why can't I create a VM from a non-current snapshot with ZFS?
Is this a ZFS problem? With LVM everything is fine and I can clone any snapshot to a new VM.
Thanks.
Attachment: zfs_snapshot_problem.png
 
Again, if I could see the VM and storage config, maybe I could tell where the problem is...
 
Hi.

Latest & updated Proxmox VE.


VM:
agent: 1
boot: cd
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
memory: 4096
name: deb
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=xxxx
sockets: 1
startup: order=5,up=120,down=180
virtio0: local-zfs:vm-xxx-disk-1,size=120G
#qmdump#map:virtio0:drive-virtio0:local-zfs::

Storage:

#zfs get all rpool/data
NAME PROPERTY VALUE SOURCE
rpool/data type filesystem -
rpool/data creation Thu Aug 3 13:23 2017 -
rpool/data used 249G -
rpool/data available 1.46T -
rpool/data referenced 96K -
rpool/data compressratio 1.49x -
rpool/data mounted yes -
rpool/data quota none default
rpool/data reservation none default
rpool/data recordsize 128K default
rpool/data mountpoint /rpool/data default
rpool/data sharenfs off default
rpool/data checksum on default
rpool/data compression lz4 inherited from rpool
rpool/data atime off inherited from rpool
rpool/data devices on default
rpool/data exec on default
rpool/data setuid on default
rpool/data readonly off default
rpool/data zoned off default
rpool/data snapdir hidden default
rpool/data aclinherit restricted default
rpool/data createtxg 6 -
rpool/data canmount on default
rpool/data xattr on default
rpool/data copies 1 default
rpool/data version 5 -
rpool/data utf8only off -
rpool/data normalization none -
rpool/data casesensitivity sensitive -
rpool/data vscan off default
rpool/data nbmand off default
rpool/data sharesmb off default
rpool/data refquota none default
rpool/data refreservation none default
rpool/data guid 14737678990354715093 -
rpool/data primarycache all default
rpool/data secondarycache all default
rpool/data usedbysnapshots 0B -
rpool/data usedbydataset 96K -
rpool/data usedbychildren 249G -
rpool/data usedbyrefreservation 0B -
rpool/data logbias latency default
rpool/data dedup off default
rpool/data mlslabel none default
rpool/data sync standard inherited from rpool
rpool/data dnodesize legacy default
rpool/data refcompressratio 1.00x -
rpool/data written 96K -
rpool/data logicalused 367G -
rpool/data logicalreferenced 40K -
rpool/data volmode default default
rpool/data filesystem_limit none default
rpool/data snapshot_limit none default
rpool/data filesystem_count none default
rpool/data snapshot_count none default
rpool/data snapdev hidden default
rpool/data acltype off default
rpool/data context none default
rpool/data fscontext none default
rpool/data defcontext none default
rpool/data rootcontext none default
rpool/data relatime off default
rpool/data redundant_metadata all default
rpool/data overlay off default
 

Attachment: 2018-04-06 11_56_25.png
So does it work now? Your VM config does not show any snapshots, and there is no 'scsi0' drive, which the error screenshot shows.
 
Hi.
With SCSI or VirtIO it is the same :(

Kernel Version Linux 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200)
PVE Manager Version pve-manager/5.1-49/1e427a54
 

Attachments: vm.png, vm2.png, vm3.png
This only works on ZFS when the VM is converted to a template, because if you clone from a snapshot on ZFS, you cannot delete the snapshot as long as the clone exists.
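The dependency is easy to see on the command line. A minimal sketch, assuming a zvol rpool/data/vm-100-disk-0 with a snapshot @snap1 (names are placeholders):
Code:
# a ZFS clone keeps a permanent reference to its origin snapshot
zfs clone rpool/data/vm-100-disk-0@snap1 rpool/data/vm-9100-disk-0
zfs destroy rpool/data/vm-100-disk-0@snap1
# cannot destroy 'rpool/data/vm-100-disk-0@snap1': snapshot has dependent clones
# use '-R' to destroy the following datasets:
# rpool/data/vm-9100-disk-0

# the snapshot can only be removed after the clone is destroyed, or after
# "zfs promote rpool/data/vm-9100-disk-0" moves the origin snapshot to the clone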
 
:mad:
But WHY? A clone is not a "soft" or "hard" link, it is a new, independent VM. Can you fix this?
 
The general problem is that in order to clone a disk we need to access it, and in the case of ZFS zvols you cannot directly access a snapshot as a block device.
We would have to activate the block devices for a disk's snapshots first, and that is not implemented yet, but please file an enhancement request here: https://bugzilla.proxmox.com/
 
This is not as easy to implement as one would think, since we cannot expose a single snapshot as a block device (only all snapshots of a whole disk, which we then have the problem of deactivating again).
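For what it is worth, ZFS does have a switch for this, the snapdev property, but it only works per zvol and not per snapshot, which matches the limitation described above. A sketch, assuming a zvol rpool/data/vm-123-disk-1 with a snapshot @snap1:
Code:
# by default, snapshots of a zvol get no device node
zfs get snapdev rpool/data/vm-123-disk-1

# making them visible exposes ALL snapshots of that zvol at once ...
zfs set snapdev=visible rpool/data/vm-123-disk-1
ls /dev/zvol/rpool/data/vm-123-disk-1@*
# /dev/zvol/rpool/data/vm-123-disk-1@snap1

# ... and has to be reverted again afterwards
zfs inherit snapdev rpool/data/vm-123-disk-1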
 
The general problem is that in order to clone a disk we need to access it, and in the case of ZFS zvols you cannot directly access a snapshot as a block device.
We would have to activate the block devices for a disk's snapshots first, and that is not implemented yet, but please file an enhancement request here: https://bugzilla.proxmox.com/
But you could simply make a zfs clone of that snapshot, copy the data from the clone to a new zvol (as is already done when cloning with the "current" option), and then destroy the cloned zvol afterwards.
Is that a problem?
Code:
root@pve:~# zfs clone rpool/vm-123-disk-1@autodaily180531050002 rpool/vm-123-disk-1-clone
root@pve:~# ls /dev/zvol/rpool/vm-123*
vm-123-disk-1
vm-123-disk-1-clone
# the destination zvol vm-124-disk-1 must already exist and be at least as large as the source
root@pve:~# dd if=/dev/zvol/rpool/vm-123-disk-1-clone of=/dev/zvol/rpool/vm-124-disk-1 bs=4096
root@pve:~# zfs destroy rpool/vm-123-disk-1-clone
 
Please implement this. The lack of this feature makes ZFS storage very inconvenient to use.
 
