clones

mikem12

New Member
Feb 24, 2022
Hi All,

I'm confused about how to create a full clone...

Does GUI-->Rt. Click VM --> Clone create a full clone or a linked clone?

The pop-up has no options to specify what kind of clone to create; it just seems to do what it does without asking. But when I compare the disk images from the source and destination VMs, they differ, e.g.:
# diff 10131/vm-10131-disk-0.qcow2 10130/vm-10130-disk-0.qcow2
Binary files vm-10131-disk-0.qcow2 and vm-10130-disk-0.qcow2 differ

If the resulting disk images differ, would that indicate a linked-clone? It seems like a full clone would have made an exact copy of the VM.

If GUI-->Rt. Click VM --> Clone creates a full clone, how do I create a linked-clone?

I'm using Proxmox 7.1-10

Thank you
 
A linked clone can only be created from a template, i.e. you need to convert your VM to a template first, in the PVE UI or CLI.
A clone of a regular VM is always a full clone, so there is no option to select otherwise.

A full clone may differ from its source if you are cloning a live system; there are always changes if either the source or the destination was started or stopped.
Both linked and full clones should be identical to the source if nothing was disturbed in the middle. With qcow2, I believe, an intermediary snapshot is taken to clone from.
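For reference, the same flow on the CLI looks roughly like this (the VMIDs and names are placeholders; `qm clone` of a template typically defaults to a linked clone and accepts `--full` for a full copy):

```shell
# Convert an existing VM into a template (irreversible); VMID 100 is a placeholder.
qm template 100

# Linked clone of the template (the default when the source is a template
# and the storage supports it).
qm clone 100 101 --name linked-clone

# Full clone of the same template; --full copies all disks.
qm clone 100 102 --name full-clone --full
```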


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,
with qcow2, you don't necessarily get a bit-wise copy, even if cloning offline. For example, the original could contain a snapshot, while the clone won't, but I'm not sure it's limited to that. What you really need to compare is the image that the qcow2 file represents (e.g. using qemu-img compare).
 
That's very helpful. Thank you!
 
SUGGESTION: Please document and highlight some of the above! It is NOT in the documentation, and as the OP mentioned, it is not said in the GUI.

GUI crucial confusing items missing:
* If copying from a non-template, call this "Full Clone"; if copying from a template, call it "Linked Clone" (I assume a full clone of a template is impossible)
* Target:
- should SAY only targets with shared storage can be used
- should FILTER to only include such target possibilities
- if there's a target error, should SAY why target is not available.
(I wasted a chunk of time finding that info)

I'm having additional clone issues; will start another thread for that. ;)
 
Hi,
it is part of the documentation. You can create a full clone from a template and you can select a different target storage while doing a full clone. What exactly is the error you got? Is the storage configuration correct?
 
I should have been clearer (particularly since I am lamenting lack of GUI clarity!) My apologies.

SOME is not in the documentation:
* Nothing explains that the GUI clone tool silently does linked vs full clone depending on the source. (This of course is best "documented" in the user interface ;) )
* The linked clone doc talks about templates, but doesn't actually say that a template is the required source... just that it needs to be read-only. To be helpful, I've rearranged the Linked clone paragraphs below as a suggestion.

When I wrote "Target", I was talking about Target NODE:
- This IS in the documentation, but is NOT described in the GUI
- The GUI should have a note that only target nodes with shared storage can be used
- The GUI Target Node field should filter to only include such target possibilities
- (Most likely this would eliminate the "target node error" completely. Thus, no need to improve the error to explain why the target node is not available. :) )

I always prefer to make errors impossible, rather than having to explain why the user's entered info was wrong ;)

(Further thought: if there is no usable shared storage in a cluster, perhaps the target node option should be grayed out, and asterisked with a "no shared storage" note.)

Suggested rearrangement of Linked Clone doc:
Linked clones are images that refer to an original read-only Template.

Such a clone is a writable copy whose initial contents are the same as the original data. Creating a linked clone is nearly instantaneous, and initially consumes no additional space. Unmodified data blocks are read from the original image, but modifications are written (and afterwards read) from a new location. This technique is called Copy-on-write.

With Proxmox VE one can convert any VM into a read-only Template, which is the required source for Linked clones.
 
* Nothing explains that the GUI clone tool silently does linked vs full clone depending on the source. (This of course is best "documented" in the user interface ;) )
When cloning a template there is a dropdown that is set by default to "Linked Clone", with the other choice being "Full Clone" - I wouldn't call it "silent".
* The linked clone doc talks about templates, but doesn't actually say that a template is the required source... just that it needs to be read-only. To be helpful, I've rearranged the Linked clone paragraphs below as a suggestion.
Are we looking at the same documentation? Click "?" in Clone wizard, it literally says what needs to be done to make volume read-only:
Code:
They are called linked because the new image still refers to the original. Unmodified data blocks are read from the original image, but modifications are written (and afterwards read) from a new location. This technique is called Copy-on-write.

This requires that the original volume is read-only. With Proxmox VE one can convert any VM into a read-only Template. Such templates can later be used to create linked clones efficiently.

This technique is called Copy-on-write.
This would be an incorrect statement. The underlying technique is storage dependent. For example, Blockbridge uses allocate-on-write which is more efficient than copy-on-write. The documentation is purposefully kept generic to allow for storage capability expansion.


 
When cloning a template there is a dropdown that is set by default to "Linked Clone" with another choice being "Full Clone" - I wouldnt call it "silent".
It's silent when using anything other than a Template as source ;)
Good find! (I don't use templates much, yet.) So, revised suggestion:
  • Please include the "Mode" field (Full Clone vs Linked Clone) for non-template sources, grayed out of course to only allow Full.

Are we looking at the same documentation? Click "?" in Clone wizard, it literally says what needs to be done to make volume read-only:
Not that simple or obvious in the current doc, which is why I rearranged it. (And see below-- I see your point about generalizing further -- the current doc in this section only talks about copy-on-write.)

Right now, one doesn't learn the simple reality that Linked Clones are based on read-only Templates, without plowing through and making sense of (and I quote here, actually leaving out even more detail):
  • Such a clone is a writable copy whose initial contents are the same as the original data....
  • They are called linked because the new image still refers to the original...
  • This requires that the original volume is read-only...
  • With Proxmox VE one can convert any VM into a read-only Template...
  • Such templates can later be used to create linked clones...
Even with all that, it says the requirement is a read-only volume, not a template. (Now perhaps Proxmox does support linked clones of all R/O volumes (CD-ROM, R/O permissions, etc.), but let's not go there. :-D)

Seems a simple up-front sentence answers the most important question for most users:
  • Linked clones are images that refer to an original read-only Template.

@bbgeek17, I do see your point about the incorrect (or at least limiting) statement in the current doc here. My original suggestion copied what is already there. To make it more generic for expanded storage capabilities, what do you think of this alternative?

Such a clone is a writable copy whose initial contents are the same as the original data. Creating a linked clone is nearly instantaneous, and initially consumes no additional space. Unmodified data blocks are read from the original image, but modifications are written (and afterwards read) from a new location. (Some implementations of this technique are called Copy-on-write.)

With Proxmox VE one can convert any VM into a read-only Template, which is the required source for Linked clones.

Or perhaps "Copy-on-write" need not be mentioned at all, as the terminology may be vendor-specific?

(Two curiosity questions: you wrote that "Blockbridge uses allocate-on-write which is more efficient than copy-on-write"...
1) Is there a difference between allocate-on-write and what Proxmox calls Thin Provisioning? (https://pve.proxmox.com/wiki/Storage)
2) If a block is being modified, doesn't the original have to be read before writing the new? (i.e. copy-on-write) I'm not sure what the difference is.
 
(Two curiosity questions: you wrote that "Blockbridge uses allocate-on-write which is more efficient than copy-on-write"...
1) Is there a difference between allocate-on-write and what Proxmox calls Thin Provisioning? (https://pve.proxmox.com/wiki/Storage)
Thin provisioning is a technique of not reserving all the advertised space at creation. It can be implemented in many different ways. Blocks are indeed often allocated as needed.

That said, in the context of this discussion, "copy-on-write/CoW" and "allocate-on-write" are terms for implementations of memory/storage handling for snapshots.
2) If a block is being modified, doesn't the original have to be read before writing the new? (i.e. copy-on-write) I'm not sure what the difference is.
The block has theoretically already been read by the client, so it can be worked on and modified before it is committed back to disk as a write. The specific implementation refers to snapshots in particular (on which clones are most often based).
Again, at 10,000 ft / 3,048 m:
- copy-on-write: read block1, write block1 elsewhere, write changed block1
- allocate-on-write: write changed block elsewhere

Keeping the metadata/bitmap mapping costs about the same, depending on implementation.
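The two write paths can be mimicked with plain files standing in for blocks (a toy sketch; all filenames are invented):

```shell
printf 'old-data' > block1            # the block about to be modified

# copy-on-write: read block1, preserve a copy elsewhere, then overwrite in place
cp block1 block1.snap                 # extra read + extra write
printf 'new-data' > block1

# allocate-on-write: leave the original untouched, write the change elsewhere
printf 'old-data' > block2
printf 'new-data' > block2.new        # single write; metadata redirects future reads
```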


 
Again, at 10,000 ft / 3,048 m:
- copy-on-write : read block1, write block1 elsewhere, write changed block1

If that's how CoW works, I'm amazed.
- block1 is already correctly stored. No reason to write it elsewhere!
- I always assumed CoW simply stores the modified version (elsewhere), updating the metadata.

I've just done my homework. ZFS does NOT work the way you say it does, @bbgeek17. Some filesystems DO -- apparently a snapshot on LVM for example.
ZFS (and BTRFS) might be more accurately described as RoW - Redirect On Write.
  • The original block is left untouched
  • The new block is written to a new place
  • (Many pointers get updated as appropriate)
For snapshots in particular, literally the only difference is whether the original block is left allocated (snapshot), or marked available (normal write.) In either case, the original block is NOT touched.
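For what it's worth, that snapshot-then-clone flow on ZFS (which, I believe, is what a PVE linked clone does on ZFS-backed storage) looks like this; the dataset and snapshot names are placeholders:

```shell
# Instant snapshot: no blocks are copied; existing blocks just stay referenced.
zfs snapshot rpool/data/vm-100-disk-0@base

# Writable clone backed by the snapshot; new writes allocate new blocks,
# while unmodified blocks are shared with the original.
zfs clone rpool/data/vm-100-disk-0@base rpool/data/vm-101-disk-0
```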

I'm sorry, but I do not see ANY difference between "Allocate On Write" and what ZFS, BTRFS and others do.

Here's one reference (of many), from a lecture by Matt Ahrens, one of the original ZFS developers.
- ZFS is a copy-on-write filesystem:
- whenever writing data to disk: write to area of disk not currently in use
 
Let's avoid coming up with new terminology. ZFS is a COW system with all the benefits that it provides. There is a copy involved by definition. In the scope of clone/snapshot, the reference page below may be more illustrative. As you read it, keep in mind the default recordsize is 128 KB, and it's common for only a small part of it to be modified, e.g. 4 KB.

https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write

Copy-on-write (COW)

Copy-on-write (COW) is a data storage technique in which you make a copy of the data block that is going to be modified, rather than modify the data block directly. You then update your pointers to look at the new block location, rather than the old. You also free up the old block, so it can be available to the application. Thus, you don't use any more disk space than if you were to modify the original block. However, you do severely fragment the underlying data. But, the COW model of data storage opens up new features for our data that were previously either impossible or very difficult to implement.

The biggest feature of COW is taking snapshots of your data. Because we've made a copy of the block elsewhere on the filesystem, the old block still remains, even though it's been marked as free by the filesystem. With COW, the filesystem is working its way slowly to the end of the disk. It may take a long time before the old freed up blocks are rewritten to. If a snapshot has been taken, it's treated as a first class filesystem. If a block gets overwritten after it has been snapshotted, it gets copied to the snapshot filesystem. This is possible, because the snapshot is a copy of the hash tree at that exact moment. As such, snapshots are super cheap, and unless the snapshotted blocks are overwritten, they take up barely any space.



 
That's almost exactly what I (and Ahrens) describe... and what you're describing as well :) ... but with a small, rather important error. (I suspect the author of ZFS knows a bit more about this than Aaron Toponce ;) )

Quoting Ahrens, who provides a mini code summary in the link I already gave:
dsl_dataset_block_kill(): (00:29:29)

- keeps track of the accounting
- figures out if we should free the block
- free it if no snapshots reference it
- we can't free it if a snapshot references it

So under snapshot conditions, the original block is immutable: it is never freed and never overwritten. And there's no "snapshot filesystem" (in the sense used in the quote -- not for block storage. Of course there's snapshot metadata structure.)
  • By definition, to modify a block it must be read first (one read - the first phrase you bolded)
  • In a snapshot, the modified block is written to a new place, and the old is retained (one write - the second phrase you bolded)
Due to your source's reference to a "snapshot filesystem," I suspect the author of the document you're quoting -- and perhaps your own company's source for misunderstanding, is actually thinking of the LVM filesystem. In LVM there is a snapshot filesystem. And LVM is far less efficient for snapshots.
 
Due to your source's reference to a "snapshot filesystem," I suspect the author of the document you're quoting -- and perhaps your own company's source for misunderstanding, is actually thinking of the LVM filesystem. In LVM there is a snapshot filesystem. And LVM is far less efficient for snapshots.
I suspect the author used "filesystem" in a general way for a wider audience, perhaps mistakenly and unfortunately. But none of us is faultless - you've made the same generalization in the above sentence by calling LVM a filesystem.
 
I suspect the author used "filesystem" in a general way for a wider audience, perhaps mistakenly and unfortunately. But none of us is faultless - you've made the same generalization in above sentence by calling LVM a filesystem.
Yes, LVM is a Framework for Storage Management. ;)

Let's not lose track of the key question:
  • It's incorrect to claim that ZFS is less efficient, due to a wasteful extra write (whether overwriting the original data block as was originally suggested above, or to a "snapshot filesystem.") ZFS always preserves the original snapshot data block as immutable.
  • It is correct to claim that LVM is less efficient for that reason.
  • Allocate-on-write as you describe it, is exactly what ZFS does for snapshots. Here's a nice clean definition:
"[Compared to Copy-on-write] Allocate-on-write conversely “freezes” data that was previously written and writes changes to that data elsewhere on other disk. This eliminates the three step “read, rewrite and write” process associated with copy-on-write and reduces snapshots to just one step – a write."​
  • Blockbridge may well have lower snapshot latency than a stock system: the company has done a lot of very impressive work on understanding and documenting best practices for configuring a variety of storage systems! I love efficiency claims based on real world measures; let's not waste effort shooting down strawmen such as the idea that ZFS snapshots are inherently more wasteful.
  • (BTW, ZFS was specifically described as an Allocate-on-Write system at least as far back as 2014 ;) ... and others suggested it years earlier. )
PS: as for introducing confusing tech... I was at first confused by Allocate-on-write in this context, because it used to have meaning primarily in computer architecture memory caching, going back to at least the early 1990's :-D
PPS: Found an interesting article from 2011 discussing ways in which HPE 3PAR overcame limitations of true CoW to make it just as fast as AoW (partly by ensuring better parallelism...) ... I'm sure there will always be ways to further improve our architectures.
 
A clone from a linked clone will inherit the base drive?
A clone, whether linked or full, by definition of being a clone inherits the parent/base disk contents.
The difference between linked and full is how much space they occupy at creation time and, of course, the technology used to create the clone. Full clones are storage independent, while linked clones rely on backend storage functionality.


 
