Full clone feature is not supported for drive 'efidisk0' (500)?

n1nj4888

Well-Known Member
Jan 13, 2019
Hi There,

When I try to clone an existing VM from a snapshot that is not the current/latest one, I get the following error (seemingly at random, it refers to either scsi0 or efidisk0). Is this expected, or a bug?

[Screenshots: clone task fails with "Full clone feature is not supported for drive 'scsi0' (500)" and "Full clone feature is not supported for drive 'efidisk0' (500)"]

The configuration of the source VM being cloned is as follows:

Code:
root@pve-host1:~# cat /etc/pve/qemu-server/111.conf 
agent: 1
bios: ovmf
boot: cdn
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-111-disk-1,size=1M
ide2: none,media=cdrom
memory: 8192
name: pve-vm-docker-1
net0: virtio=<MAC>,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: Snapshot3
scsi0: local-zfs:vm-111-disk-0,discard=on,size=20G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=<ID>
sockets: 1
vga: virtio
vmgenid: <ID>

[Snapshot1]
#Initial Install, set timezone, apt update, install qemu-guest-agent, install nfs-common, create NFS mounts, update /etc/fstab, extend root volume to 100%.
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-111-disk-1,size=1M
ide2: none,media=cdrom
memory: 8192
name: pve-vm-docker-ceph-1
net0: virtio=<MAC>,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-111-disk-0,discard=on,size=20G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=<ID>
snaptime: 1586835511
sockets: 1
vga: virtio
vmgenid: <ID>

[Snapshot2]
#Before reinstall to Ubuntu 20.04 LTS
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-111-disk-1,size=1M
ide2: none,media=cdrom
memory: 8192
name: pve-vm-docker-1
net0: virtio=<MAC>,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: Snapshot1
scsi0: local-zfs:vm-111-disk-0,discard=on,size=20G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=<ID>
snaptime: 1592295156
sockets: 1
vga: virtio
vmgenid: <ID>

[Snapshot3]
#Ubuntu 20.04 LTS: Initial Install, set timezone, apt update, install qemu-guest-agent, install nfs-common, create NFS / gluster mounts, update /etc/hosts, update /etc/fstab, install gluster-client
agent: 1
bios: ovmf
boot: dcn
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-111-disk-1,size=1M
ide2: none,media=cdrom
memory: 8192
name: pve-vm-docker-1
net0: virtio=<MAC>,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: Snapshot2
scsi0: local-zfs:vm-111-disk-0,discard=on,size=20G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=<ID>
snaptime: 1592298157
sockets: 1
vga: virtio
vmgenid: <ID>

Thanks!
 
Are you running the latest version?

Please post your:

> pveversion -v
 
Hi @tom ,

pveversion below... Thanks!

Code:
root@pve-host1:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
Looks like a bug with the efi disk on ZFS. Please file this one via https://bugzilla.proxmox.com - thanks!
Hi @tom

Just to chime in, I'm also seeing this same error on the most recent version of Proxmox.

Do we have a status update on this issue?

Our output is below:

""Cheers
G

Bash:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
See my comment on the bug for the background on why this is not trivial to implement.
Hey @fabian

Thanks for the comments and direction.

Maybe I'm missing something in my understanding of how this all layers together.

I'm aware that qcow2 images use copy-on-write in a similar way to how ZFS uses COW; please correct me if I'm off track.

Normal ZFS snapshots can be cloned, replicated, etc., so why is this different for KVM (VM) disks on ZFS?

Would you mind filling in the blanks for me so I understand the issue in more depth?

""Cheers
G
 
The problem is that a full clone requires access to the volume's contents. For regular filesystem datasets (used by containers) we can just mount an individual snapshot and transfer the contents. For zvol datasets (used by VMs) we need to modify the dataset so that ALL snapshots are exposed as block devices, then clone, then modify the dataset again to undo that change. If the volume has a lot of snapshots this is rather expensive (adding block devices triggers all sorts of stuff in the system), and if the clone crashes (or gets killed) half-way through, the change becomes permanent and can cause all sorts of issues.
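For illustration only, the manual equivalent of that dance is toggling the zvol's snapdev property; dataset and snapshot names below are just examples:

Code:
# expose ALL snapshots of the zvol as block devices under /dev/zvol/...
zfs set snapdev=visible rpool/data/vm-111-disk-0

# each snapshot now shows up as a readable block device, e.g.
ls -l /dev/zvol/rpool/data/vm-111-disk-0@Snapshot3

# hide the snapshot devices again once the copy is done
zfs set snapdev=hidden rpool/data/vm-111-disk-0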
 
Hey @fabian

That's a much better explanation, thanks for taking the time to explain.

May I ask why this isn't an issue when using a standard LVM volume or the file-based qcow2 image format?

There I'm able to clone from a snapshot.

thanks in advance :)

“”Cheers
G
 
Because for LVM-thin the snapshots are always exposed as block devices, and for qcow2 they are accessed via qemu-img, which allows directly accessing a specific snapshot.
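A rough sketch of the qcow2 case (file and snapshot names are examples, not from this thread):

Code:
# list the internal snapshots of a qcow2 image
qemu-img snapshot -l vm-disk.qcow2

# copy the contents of one specific internal snapshot into a new standalone image
qemu-img convert -f qcow2 -O qcow2 -l snapshot.name=Snapshot3 vm-disk.qcow2 clone-disk.qcow2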
 
Thanks @fabian

That makes a lot of sense :)

When using Ceph, is this using qcow2 disks or something different?
Does running a VM on Ceph storage allow cloning from a snapshot?

""Cheers
G
 
Ceph uses its own snapshotting mechanism, which IIRC allows direct full clones from snapshots.
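For reference, and assuming the classic protect/clone workflow (pool and image names are made-up examples), a full independent image can be derived from an RBD snapshot roughly like this:

Code:
# protect the snapshot, clone it, then flatten the clone so it no longer depends on the parent
rbd snap protect vm-pool/vm-111-disk-0@Snapshot3
rbd clone vm-pool/vm-111-disk-0@Snapshot3 vm-pool/vm-999-disk-0
rbd flatten vm-pool/vm-999-disk-0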
 


We use ZFS storage exclusively for all virtual machines, and there is no difficulty in doing this operation manually at the ZFS level. It is possible to create completely independent clones, or clones linked to the source image (to save space).

Suppose there is a snapshot named snapshot-01.

A linked clone is made with an ordinary zfs clone of the snapshot:

Code:
zfs clone zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01 zfs-pool/local/proxmox/vm-99099-disk-0

In this situation we will not be able to delete the snapshot as long as at least one clone of it exists. Nevertheless, it is very convenient when you need to create a large number of identical virtual machines, for example dozens or hundreds: such clones only take up space for new data.
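If you need to check which clones still depend on a snapshot before trying to delete it, the read-only clones property lists them (dataset name as in the example above):

Code:
# the snapshot can only be destroyed once this list is empty
zfs get clones zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01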

A completely independent clone can be created like this:


Code:
root@f099-hv00:~# zfs send -nP zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01
full    zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01      16728775512
size    16728775512

zfs send zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01 | pv --size 16728775512 | zfs receive zfs-pool/local/proxmox/vm-99099-disk-0

In this way you can clone a snapshot not only within one ZFS pool, but also to another ZFS pool, and even to a pool on another node (a sketch of that follows the destroy command below). The result is a completely independent dataset containing a snapshot that stores the original state. This snapshot can then be deleted:

Code:
zfs destroy zfs-pool/local/proxmox/vm-99099-disk-0@snapshot-01
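For the cross-node case mentioned above, a minimal sketch (the hostname and target pool are made-up examples):

Code:
# stream the snapshot to a different pool on another node over SSH
zfs send zfs-pool/local/proxmox/vm-99012-disk-0@snapshot-01 | ssh root@other-node zfs receive other-pool/proxmox/vm-99099-disk-0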

ZFS snapshots are an extremely convenient way to manage data, and Proxmox's ZFS integration is the best I've seen! The ability to clone virtual machines from a snapshot through the ZFS mechanism would be very useful; it's a pity that it doesn't work at the moment.

@igluko and @Wladimi have written a wonderful bash script for cloning virtual machines using ZFS:
https://github.com/Wladimir-N/pve-zfs-clone
 
Yeah, it's often possible to get more features by going directly to the storage layer, but these special features then do not work across different storage types. A full clone in PVE is pretty much storage agnostic (it uses qemu-img / QEMU's block-mirror under the hood). A linked clone requires storage support and is limited to a single storage (e.g. with ZFS the native 'clone' is used, qcow2 image files support using a backing file as base layer, etc. pp.).
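As an aside, the qcow2 backing-file mechanism mentioned here looks roughly like this (file names are examples):

Code:
# create a new image whose unmodified blocks are read from the base image
qemu-img create -f qcow2 -b base-disk.qcow2 -F qcow2 linked-clone.qcow2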
 
Thank you very much, the zfs send / zfs receive approach above is a great idea.

Unfortunately I'm not a zfs expert, and after cloning without errors:

Code:
# zfs send -nP rpool/data/vm-151-disk-0@SNAP_OK
full    rpool/data/vm-151-disk-0@SNAP_OK        8342019192
size    8342019192

# zfs send rpool/data/vm-151-disk-0@SNAP_OK | pv --size 8342019192 | zfs receive -F rpool/data/vm-180-disk-0@SNAP_OK
7.85GiB 0:00:21 [ 377MiB/s] [============================================================================================>] 101%          

# zfs list -t snapshot
NAME                                                  USED  AVAIL     REFER  MOUNTPOINT
...
...
rpool/data/vm-151-disk-0@SNAP_OK                     2.54G      -     5.14G  -
rpool/data/vm-180-disk-0@SNAP_OK                        0B      -     5.14G  -


I cannot see any snapshot on VM 180 in Proxmox itself, so what's next? Any clue? Thank you.

 
Came across this today and was rather baffled by it, as I had used LVM in the past. I guess the "workaround" would be to roll back to the snapshot first and then clone the (now current) state. A little messy, but it works in a pinch.
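In qm terms that workaround would look roughly like this (VMID, snapshot name and target VMID are examples; note that rolling back discards any changes made after the snapshot):

Code:
# roll the VM back to the snapshot, then full-clone the now-current state
qm rollback 111 Snapshot3
qm clone 111 200 --full --name cloned-vm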
 
