Merge storage from a Linked VM back to Base

jalict

Dec 2, 2022
Hi,

Slightly convoluted case here, and hopefully I can explain it in a way that makes sense :)

We are working on a build system with some pretty hefty Perforce checkouts, in the +150GB range. So I do not want to do a fresh checkout for every build; I just want to sync to latest on top of what I already had from the previous build.

I am exploring different ways of creating templates that contain our Perforce checkouts, which will be used for multiple build targets (a different VM per target for parallelised building).

In order to cache what I have just synced from Perforce: after the initial checkout, I want to do further Perforce syncs to get latest, and then fold those changes back into the base template.

I discovered
Code:
qemu-img commit -b <base> <latest>
(link), which sounds like it is similar to Merge-VHD from Hyper-V? This is what we use in our current flow.
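For reference, this is roughly how I understand the backing-chain commit to work on plain qcow2 files (the file names below are just placeholders, not our actual setup):
Code:
# hypothetical qcow2 backing chain: latest.qcow2 records changes on top of base.qcow2
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 latest.qcow2
# ... boot against latest.qcow2, sync Perforce, shut down ...
# fold the changes recorded in latest.qcow2 back into its backing file
qemu-img commit -b base.qcow2 latest.qcow2
Whether that maps onto ZFS zvols at all is part of what I am unsure about, since there is no qcow2 backing file there as far as I can tell.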

My main question is: how do I reference base (a file?) and latest (a file?), and where can I expect them to be stored?
And will I run into any other challenges here? Can I just merge the files together, delete the linked VM, and then the template will simply have the new data without issues?

If there is any other solution to this, let me know!

And just for context: we are using ZFS over iSCSI on Proxmox 7.3-4. And I do not intend to create linked templates between every build :p
 
Hi,
this is not supported in Proxmox VE (without doing lots of manual things). Templates are intended to be static and expected to have linked clones. And with multiple linked clones, merging back to the base image is not possible without making other linked clones independent from the base image first.

To get to an updated template, the intended way is to make a full clone (either of the base or another linked clone), update that and convert that to a template to use from now on. But please note that the original template can only be deleted after all its linked clones are gone.
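Roughly, on the CLI that would look like this (the VMIDs and the name are just examples):
Code:
# full clone of template 100 into a normal VM 200 (example IDs, independent of the base image)
qm clone 100 200 --full --name checkout-next
# boot 200, run the Perforce sync inside it, shut it down, then turn it into the new template
qm template 200
New linked clones are then created from 200, and 100 can be removed once none of its linked clones remain.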
 
Very unfortunate. In our build flow we create ephemeral VMs that build different targets based on a shared Perforce checkout. So, when updating the checkout template, we expect there to be no linked VMs left.

I can see us making linked clones upon linked clones for fast checkouts and fast cloning to start up the builds. But this will of course get quite messy on the front-end side, and we will probably do some cleaning here and there to keep things neat.
 
After much exploring, the behaviour of zfs promote looks very much like what I want, as explained here in the section "Replacing a ZFS File System With a ZFS Clone".

But promoting the clone and renaming it to the base name does not seem to work. Proxmox/QEMU still references the old disk somewhere. How can I do this maneuver?

On a similar topic, I am not able to do zfs destroy <dataset> without getting "dataset is busy". Does "dataset is busy" also apply to disks that are only attached to templates? What do I have to stop so they are no longer busy and I can manipulate the ZFS datasets?
 
The easy way, without manually messing around, is:
To get to an updated template, the intended way is to make a full clone (either of the base or another linked clone), update that and convert that to a template to use from now on. But please note that the original template can only be deleted after all its linked clones are gone.

But promoting the clone and renaming it to the base name does not seem to work. Proxmox/QEMU still references the old disk somewhere. How can I do this maneuver?
What exactly did you do and what was the error?

On a similar topic, I am not able to do zfs destroy <dataset> without getting "dataset is busy". Does "dataset is busy" also apply to disks that are only attached to templates? What do I have to stop so they are no longer busy and I can manipulate the ZFS datasets?
Was the promote operation finished at that time? You also need to create a __base__ snapshot for the promoted/renamed dataset, because Proxmox VE expects a ZFS base volume to have that. Since a template cannot be started, the disk cannot be busy because of the template.
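For example, something along these lines (the dataset names are placeholders and assume the default zfspool naming; adapt them to your pool and storage layout):
Code:
# placeholder names: Proxmox VE expects template volumes named base-<vmid>-disk-<n>
# carrying a __base__ snapshot that linked clones are created from
zfs rename <pool>/vm-100-disk-0 <pool>/base-100-disk-0
zfs snapshot <pool>/base-100-disk-0@__base__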
 
The easy way, without manually messing around, is:
I understand this is the intended way, but I cannot do a full clone, as that would mean a full 150GB copy for every build target we have (+6) per build we do, which is going to take _way_ too long.

What exactly did you do and what was the error?
Here are the steps:
1) Assume we have a full OS that I converted to a template (vmid 100) with a base-disk-1
2) Linked Clone the template (vmid 101)
3) I go into the clone, add a file inside vm-disk-1 and shut down the clone
Now at this point, I want to take the changes and commit them to the base VM - one way or another :)
And as none of the VMs are running, I assume this should not be a problem (albeit, I understand it is a manual process); this is always our assumption when doing this.


Here is what I am trying:
4) I execute zfs promote vm-disk-1 - after this I see the size of base-disk-1 is now 0, which sounds correct, and that vm-disk-1 has grown to what I assume is base-disk-1 + vm-disk-1.
5) I execute zfs rename base-disk-1 base-disk-legacy - to move the old disk to the side
6) I execute zfs rename vm-disk-1 base-disk-1 - to move the disk from the clone into the template's place.

At this point I can still make linked clones of the template, but they do not contain the file that I added. They still use the data from the dataset that I moved to base-disk-legacy, and not the data from the promoted image, which is quite surprising to me.
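In case it helps with the diagnosis, this is roughly how I am checking where things point after the rename (pool name omitted as above; I am assuming the template config is the usual /etc/pve/qemu-server/100.conf):
Code:
# which snapshot does each dataset originate from, and which clones hang off each snapshot?
# (<pool>/... are placeholders for my actual dataset paths)
zfs get origin <pool>/base-disk-1 <pool>/base-disk-legacy
zfs list -t snapshot -o name,clones
# which volume does the template config still reference?
grep -i disk /etc/pve/qemu-server/100.conf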

Was the promote operation finished at that time? You also need to create a __base__ snapshot for the promoted/renamed dataset, because Proxmox VE expects a ZFS base volume to have that. Since a template cannot be started, the disk cannot be busy because of the template.
Does promoting take a long time? The command exited almost immediately. But the __base__ part is good to know! I did see this in the zfs list and wondered what it was.

Just want to take the time to say that I appreciate the support so far!
 
I understand this is the intended way, but I cannot do a full clone, as that would mean a full 150GB copy for every build target we have (+14) per build we do.
You don't need to do a full clone for every build target. For the ephemeral VMs, you can still use linked clones all you like. You just need a full clone when you want to create an upgraded base template. But you can't magically have the linked clones use the new template.
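So the per-build flow can stay cheap, something like this (the IDs are only examples):
Code:
# ephemeral linked clone of the current checkout template (example IDs, template 100 here)
qm clone 100 201 --name build-target-a
qm start 201
# ... the build runs inside 201 ...
qm stop 201
qm destroy 201
Only the occasional template refresh pays the cost of a full copy.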

Here are the steps:
1) Assume we have a full OS that I converted to a template (vmid 100) with a base-disk-1
2) Linked Clone the template (vmid 101)
3) I go into the clone, add a file inside vm-disk-1 and shut down the clone
Now at this point, I want to take the changes and commit them to the base VM - one way or another :)
And as none of the VMs are running, I assume this should not be a problem (albeit, I understand it is a manual process); this is always our assumption when doing this.

Here is what I am trying:
4) I execute zfs promote vm-disk-1 - after this I see the size of base-disk-1 is now 0, which sounds correct, and that vm-disk-1 has grown to what I assume is base-disk-1 + vm-disk-1.
How do you determine the file size? base-disk-1 should be a zvol (i.e. a virtual block device), not a file.

5) I execute zfs rename base-disk-1 base-disk-legacy - to move the old disk to the side
6) I execute zfs rename vm-disk-1 base-disk-1 - to move the disk from the clone into the template's place.

From now on I expect that the template clones will have the promoted data, but they still seem to use the data from base-disk-legacy.
No, I'm pretty sure that there is no mechanism to achieve what you would need in ZFS. The clones won't magically get updated here.

From man zfs-promote
Code:
The zfs promote command makes it possible to destroy the dataset that the clone was created from.
The clone parent-child dependency relationship is reversed, so that the origin dataset becomes a clone of the specified dataset.
That's all it does ;)

For example
Code:
root@pve701 ~ # zfs create myzpool/base -V 1G       
root@pve701 ~ # zfs snapshot myzpool/base@__base__            
root@pve701 ~ # zfs clone myzpool/base@__base__ myzpool/linked1
root@pve701 ~ # zfs clone myzpool/base@__base__ myzpool/linked2
root@pve701 ~ # zfs get name,origin myzpool/base myzpool/linked1 myzpool/linked2
NAME             PROPERTY  VALUE                  SOURCE
myzpool/base     name      myzpool/base           -
myzpool/base     origin    -                      -
myzpool/linked1  name      myzpool/linked1        -
myzpool/linked1  origin    myzpool/base@__base__  -
myzpool/linked2  name      myzpool/linked2        -
myzpool/linked2  origin    myzpool/base@__base__  -
root@pve701 ~ # zfs get origin myzpool/base myzpool/linked1 myzpool/linked2 
NAME             PROPERTY  VALUE                  SOURCE
myzpool/base     origin    -                      -
myzpool/linked1  origin    myzpool/base@__base__  -
myzpool/linked2  origin    myzpool/base@__base__  -
root@pve701 ~ # zfs promote myzpool/linked1
root@pve701 ~ # zfs get origin myzpool/base myzpool/linked1 myzpool/linked2
NAME             PROPERTY  VALUE                     SOURCE
myzpool/base     origin    myzpool/linked1@__base__  -
myzpool/linked1  origin    -                         -
myzpool/linked2  origin    myzpool/linked1@__base__  -
Note that the __base__ snapshot that linked1 now has is still the original one!
What you would need is a way to replace the snapshot, but that is not something you can do in ZFS AFAIK. How would you even keep track of the changes and make sure things remain consistent in the clone if the underlying snapshot could be replaced at any time?
 