Cloning a VM from snapshot - not working

Discussion in 'Proxmox VE: Installation and configuration' started by Ivan Dimitrov, Apr 20, 2017.

  1. Ivan Dimitrov

    Ivan Dimitrov New Member

    Joined:
    Jul 14, 2016
    Messages:
    15
    Likes Received:
    1
     Hello, I am getting the error "Full clone feature is not available at /usr/share/perl5/PVE/API2/Qemu.pm line 2441. (500)" when trying to clone a VM using a snapshot other than "current" as the base for the new VM.
     Cloning works as expected when I choose "current" as the source snapshot.
     I am running the latest Proxmox 5 beta. Unfortunately I am not sure whether this worked in 4.x or whether it is something that came with the 5.0 code.
    media-20170414.jpg
     
  2. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     What is the config of VM 122, and what does the storage config look like?
     
  3. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,302
    Likes Received:
    131
     Cloning from a snapshot is not available on all storage types.
     The storage plugin needs to declare snap => 1 in its 'copy' feature.
     (Maybe you are using ZFS?)

    grep -r copy /usr/share/perl5/PVE/Storage/
    /usr/share/perl5/PVE/Storage/RBDPlugin.pm: copy => { base => 1, current => 1, snap => 1},
    /usr/share/perl5/PVE/Storage/ZFSPlugin.pm: copy => { base => 1, current => 1},
    /usr/share/perl5/PVE/Storage/DRBDPlugin.pm: copy => { base => 1, current => 1},
    /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm: copy => { base => 1, current => 1},
    /usr/share/perl5/PVE/Storage/ISCSIDirectPlugin.pm: copy => { current => 1},
    /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm:# 2) ssh-copy-id <ip_of_iscsi_storage>
    /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm:# 6. On one of the proxmox nodes login as root and run: ssh-copy-id ip_freebsd_host
    /usr/share/perl5/PVE/Storage/SheepdogPlugin.pm: copy => { base => 1, current => 1, snap => 1},
    /usr/share/perl5/PVE/Storage/Plugin.pm: copy => { base => {qcow2 => 1, raw => 1, vmdk => 1},
    /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm: copy => { base => 1, current => 1, snap => 1},
    /usr/share/perl5/PVE/Storage/Custom/NetappPlugin.pm: copy => { base => 1, current => 1, snap => 1},
    /usr/share/perl5/PVE/Storage/LVMPlugin.pm: copy => { base => 1, current => 1},
    /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm: copy => { current => 1},
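
     To check whether your own storage is affected, a quick sketch (assuming the default 'local-zfs' storage name used in this thread; adjust for your setup) is to look up the storage type in /etc/pve/storage.cfg and then see whether the matching plugin lists 'snap' under its 'copy' feature:

     Code:
     # which storage type backs the VM's disks?
     grep -B1 -A4 'local-zfs' /etc/pve/storage.cfg
     # does the matching plugin allow copying from snapshots?
     grep 'copy =>' /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm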
     
  4. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.
     Why can't I create a VM from a non-current snapshot with ZFS?
     Is this a ZFS problem? With LVM everything is OK and I can clone any snapshot to a new VM.
     Thx.
    zfs_snapshot_problem.png
     
    #4 werter, Apr 5, 2018
    Last edited: Apr 5, 2018
  5. Ivan Dimitrov

    Ivan Dimitrov New Member

    Joined:
    Jul 14, 2016
    Messages:
    15
    Likes Received:
    1
     Hi, I think I saw this in a different thread and didn't update this one. I think the feature is not supported with ZFS storage.
     
  6. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     Again, if I could see the VM and storage config, maybe I could tell where the problem is...
     
  7. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.

    Latest & updated Proxmox VE.


    VM:
    agent: 1
    boot: cd
    bootdisk: virtio0
    cores: 4
    ide2: none,media=cdrom
    memory: 4096
    name: deb
    net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
    numa: 0
    onboot: 1
    ostype: l26
    protection: 1
    scsihw: virtio-scsi-pci
    smbios1: uuid=xxxx
    sockets: 1
    startup: order=5,up=120,down=180
    virtio0: local-zfs:vm-xxx-disk-1,size=120G
    #qmdump#map:virtio0:drive-virtio0:local-zfs::

    Storage:

    #zfs get all rpool/data
    NAME PROPERTY VALUE SOURCE
    rpool/data type filesystem -
    rpool/data creation Thu Aug 3 13:23 2017 -
    rpool/data used 249G -
    rpool/data available 1.46T -
    rpool/data referenced 96K -
    rpool/data compressratio 1.49x -
    rpool/data mounted yes -
    rpool/data quota none default
    rpool/data reservation none default
    rpool/data recordsize 128K default
    rpool/data mountpoint /rpool/data default
    rpool/data sharenfs off default
    rpool/data checksum on default
    rpool/data compression lz4 inherited from rpool
    rpool/data atime off inherited from rpool
    rpool/data devices on default
    rpool/data exec on default
    rpool/data setuid on default
    rpool/data readonly off default
    rpool/data zoned off default
    rpool/data snapdir hidden default
    rpool/data aclinherit restricted default
    rpool/data createtxg 6 -
    rpool/data canmount on default
    rpool/data xattr on default
    rpool/data copies 1 default
    rpool/data version 5 -
    rpool/data utf8only off -
    rpool/data normalization none -
    rpool/data casesensitivity sensitive -
    rpool/data vscan off default
    rpool/data nbmand off default
    rpool/data sharesmb off default
    rpool/data refquota none default
    rpool/data refreservation none default
    rpool/data guid 14737678990354715093 -
    rpool/data primarycache all default
    rpool/data secondarycache all default
    rpool/data usedbysnapshots 0B -
    rpool/data usedbydataset 96K -
    rpool/data usedbychildren 249G -
    rpool/data usedbyrefreservation 0B -
    rpool/data logbias latency default
    rpool/data dedup off default
    rpool/data mlslabel none default
    rpool/data sync standard inherited from rpool
    rpool/data dnodesize legacy default
    rpool/data refcompressratio 1.00x -
    rpool/data written 96K -
    rpool/data logicalused 367G -
    rpool/data logicalreferenced 40K -
    rpool/data volmode default default
    rpool/data filesystem_limit none default
    rpool/data snapshot_limit none default
    rpool/data filesystem_count none default
    rpool/data snapshot_count none default
    rpool/data snapdev hidden default
    rpool/data acltype off default
    rpool/data context none default
    rpool/data fscontext none default
    rpool/data defcontext none default
    rpool/data rootcontext none default
    rpool/data relatime off default
    rpool/data redundant_metadata all default
    rpool/data overlay off default
     


    #7 werter, Apr 6, 2018
    Last edited: Apr 6, 2018
  8. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.
     Guys, can anyone confirm this? Is it a bug or a "feature" :(?
     
  9. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     So does it work now? Your VM config does not show any snapshots, nor the 'scsi0' drive that appears in the error screenshot.
     
  10. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.
     With SCSI or VirtIO it's the same :(

    Kernel Version Linux 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200)
    PVE Manager Version pve-manager/5.1-49/1e427a54
     

     Attached Files: vm.png, vm2.png, vm3.png
  11. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.
     Can anyone check this?
     
  12. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     This only works on ZFS when the VM is converted to a template,
     because if you clone from a snapshot in ZFS, you cannot delete that snapshot as long as the clone exists.
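
     A minimal illustration of that dependency at the ZFS level (dataset and snapshot names here are hypothetical):

     Code:
     zfs snapshot rpool/data/vm-100-disk-0@base
     zfs clone rpool/data/vm-100-disk-0@base rpool/data/vm-200-disk-0
     zfs destroy rpool/data/vm-100-disk-0@base
     # fails with "snapshot has dependent clones"; the snapshot can only be removed
     # once the clone is destroyed or promoted with 'zfs promote'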
     
  13. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    :mad:
     But WHY? A clone is not a "soft" or "hard" link; it's a new, independent VM. Can you fix it?
     
    #13 werter, Apr 19, 2018
    Last edited: Apr 19, 2018
  14. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     The general problem is that in order to clone a disk we need to access it, and in the case of ZFS zvols you cannot directly access a snapshot as a block device.
     We would first have to activate the block devices for a disk's snapshots, and that is not implemented yet, but please file an enhancement request here: https://bugzilla.proxmox.com/
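
     For reference, the block devices mentioned above are hidden by the zvol's 'snapdev' property; roughly, exposing them by hand looks like this (the zvol name is hypothetical):

     Code:
     zfs get snapdev rpool/data/vm-100-disk-0          # 'hidden' by default
     zfs set snapdev=visible rpool/data/vm-100-disk-0
     ls /dev/zvol/rpool/data/vm-100-disk-0@*           # snapshots now show up as block devices
     zfs set snapdev=hidden rpool/data/vm-100-disk-0   # revert; only possible per whole zvol, not per snapshot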
     
  15. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
  16. werter

    werter Member

    Joined:
    Dec 10, 2017
    Messages:
    37
    Likes Received:
    6
    Hi.
     Guys, can you answer? This is a much-desired feature.
     
  17. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,482
    Likes Received:
    317
     This is not as easy to implement as one would think, since we cannot expose a single snapshot as a block device on its own (we can only do it for a whole disk, which we then have trouble deactivating again).
     
  18. Davyd

    Davyd New Member

    Joined:
    Apr 8, 2016
    Messages:
    20
    Likes Received:
    2
     But you can simply make a ZFS clone of that snapshot and then copy the data from the clone into a new zvol (as is already done when cloning with the "current" option). After that, destroy the temporary ZFS clone.
     Is this a problem?
     Code:
     root@pve:~# zfs clone rpool/vm-123-disk-1@autodaily180531050002 rpool/vm-123-disk-1-clone
     root@pve:~# ls /dev/zvol/rpool/vm-123*
     vm-123-disk-1
     vm-123-disk-1-clone
     # the target zvol rpool/vm-124-disk-1 must already exist and be at least as large as the source
     root@pve:~# dd if=/dev/zvol/rpool/vm-123-disk-1-clone of=/dev/zvol/rpool/vm-124-disk-1 bs=4096
     root@pve:~# zfs destroy rpool/vm-123-disk-1-clone
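
     An alternative sketch with the same hypothetical names is to send/receive the snapshot straight into a new zvol instead of cloning and dd'ing; the new VM's config still has to be pointed at the new disk by hand afterwards:

     Code:
     # replicate the snapshot into an independent zvol (the target must not exist yet)
     zfs send rpool/vm-123-disk-1@autodaily180531050002 | zfs receive rpool/vm-124-disk-1
     # the received zvol carries the snapshot along; drop it so the new disk stands alone
     zfs destroy rpool/vm-124-disk-1@autodaily180531050002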
    
     
    ABaum and kirillkh like this.
  19. ABaum

    ABaum New Member
    Proxmox Subscriber

    Joined:
    Nov 2, 2018
    Messages:
    3
    Likes Received:
    0
    Please bump this.
    It would really help when recovering from a failed machine.
     
  20. kirillkh

    kirillkh New Member

    Joined:
    Dec 17, 2018
    Messages:
    1
    Likes Received:
    0
    Please implement this. The lack of this feature makes ZFS storage very inconvenient to use.
     