Consolidate snapshot?

christophe

Hi all,

We tried snapshots on a Win 2008r2 VM, in order to test converting the disks from ide to virtio before doing it in "real life".

Snapshots worked fine.

Disk conversion from ide to virtio went also OK.

2 questions :
- we tried to remove the older snapshots. This fails because the old ide disks are not found! I know why: they are now virtio! But I don't need those old snapshots anymore. Any constructive help?
- Now that all my disks are virtio, I would like to consolidate the current state of this VM: it becomes the new and only reference, with no need for snapshots or rollback. It doesn't seem to be possible?


Thanks,

Christophe.
 
- we tried to remove the older snapshots. This fails because the old ide disks are not found! I know why: they are now virtio! But I don't need those old snapshots anymore. Any constructive help?

You can try to manually clean up the configuration file (simply edit the file).
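
For illustration, a rough sketch of what such a manual cleanup can look like (the VMID, storage and snapshot names below are made up): snapshots live as their own sections in /etc/pve/qemu-server/<VMID>.conf, so removing one means deleting that whole section plus any parent: line that still points to it. Back up the file first.

# /etc/pve/qemu-server/100.conf (hypothetical example)
boot: order=virtio0
virtio0: local-lvm:vm-100-disk-0,size=32G
parent: before-virtio        <- drop or adjust this line if it references the deleted snapshot

[before-virtio]              <- delete this whole section, down to the next [section] or end of file
ide0: local-lvm:vm-100-disk-0,size=32G
...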

- Now that all my disks are virtio, I would like to consolidate the current state of this VM: it becomes the new and only reference, with no need for snapshots or rollback. It doesn't seem to be possible?

You can create a full clone (copy) of the VM.
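
For example (hypothetical IDs and name):

qm clone 100 101 --full --name win2008r2-virtio

A full clone copies the disks themselves, so the new VM starts without any snapshot history; once you have verified it, you can delete the original VM.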
 
Hello Guys.

Time to dig out an old thread... :cool:

What I do not understand is how snapshots are consolidated if I manually delete them from the config file...

I'm coming from Hyper-V, where the oldest disk is the "mother" of all. So if I delete any differencing disk between the first and the current disk ("NOW"), the whole machine is dead (maybe you can get back to the oldest state with more manipulation of the config file afterwards).

That means that while the VM is running it writes to all intermediate differencing disks, and if I delete one snapshot its content is written to the parent "snapshot".

How do snapshots work in PVE?

Thanks in advance for clarification,

itiser
 
Hi,
What I do not understand is how snapshots are consolidated if I manually delete them from the config file...
It won't consolidate. That was the suggestion for a situation where the snapshot on a disk does not exist anymore, but the snapshot still exists in the configuration. You can also use qm delsnapshot <ID> <name> --force.
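
For example, if the leftover snapshot of VM 100 were called "before-virtio" (both values hypothetical):

qm delsnapshot 100 before-virtio --force

The --force flag removes the snapshot from the VM configuration even if removing the disk snapshots fails (e.g. because the volume doesn't exist anymore).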

I'm coming from Hyper-V, where the oldest disk is the "mother" of all. So if I delete any differencing disk between the first and the current disk ("NOW"), the whole machine is dead (maybe you can get back to the oldest state with more manipulation of the config file afterwards).

That means that while the VM is running it writes to all intermediate differencing disks, and if I delete one snapshot its content is written to the parent "snapshot".

How do snapshots work in PVE?
It depends on the underlying storage. With ZFS, you have the limitation that you can only roll back to the most recent snapshot. With RBD, LVM-thin or qcow2 you don't have that limitation. In all cases you can remove snapshots just fine without breaking other snapshots.
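
If you want to check what actually exists on the storage side, the listing also depends on the storage type. A few examples (the VM ID, pool and paths are placeholders):

zfs list -t snapshot                               # ZFS: snapshots of the zvols/datasets
rbd snap ls <pool>/vm-100-disk-0                   # Ceph RBD: snapshots of a single image
lvs                                                # LVM-thin: snapshots appear as snap_vm-100-disk-0_<name> LVs
qemu-img snapshot -l /path/to/vm-100-disk-0.qcow2  # qcow2: internal snapshots of the image file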
 
I got the same problem as the thread-starter:

I changed the disk from ide to virtio (with snapshots taken before) and after the conversion I'm unable to delete the snapshots in the GUI ("disk ide0 not found" or something like that).

How can I check if there are any snapshots on disk (ceph cluster)?

Is it safe in my case to delete the snapshots by manipulating the config file?
 
I got the same problem as the thread-starter:

I changed the disk from ide to virtio (with snapshots taken before) and after the conversion I'm unable to delete the snapshots in the GUI ("disk ide0 not found" or something like that).
Was the VM running when you did this? What if you try to remove the snapshot while the VM is shut down? Please post the full error message and the VM configuration (qm config <ID>).
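(With a hypothetical VMID, that would be qm config 100; qm listsnapshot 100 additionally shows the snapshot tree as PVE sees it.)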
How can I check if there are any snapshots on disk (ceph cluster)?
E.g. rbd -p <pool> ls -l.
Is it safe in my case to delete the snapshots by manipulating the config file?
If the snapshot really doesn't exist on the RBD volume anymore, use qm delsnapshot <ID> <snapshot name> --force.
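
Putting those together, a possible sequence (VMID, pool and names are examples only) would be:

rbd -p <pool> ls -l                          # list all images and their snapshots in the pool
rbd snap ls <pool>/vm-100-disk-0             # check one specific disk for leftover snapshots
qm delsnapshot 100 <snapshot name> --force   # then drop the stale entry from the VM config

Only use --force once you have confirmed the snapshot really is gone on the RBD side.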
 
The output of "rbd -p <pool> ls -l" shows all disks of the cluster, right?
That means the output is (should be) the same on all cluster nodes?!
 
The output of "rbd -p <pool> ls -l" shows all disks of the cluster, right?
That means the output is (should be) the same on all cluster nodes?!
Yes. It just depends on the RBD pool.
 
Just to be sure (and have someone else to blame ;-) )

This is one of the VMs with snapshots that I tried to remove via the GUI - which ends in an error (and a locked snapshot):

I don't remember (maybe it wasn't me?), but as you can see there were changes made to the hard disk (size). Is this the cause of the error?

[screenshots of the VM's snapshots and the error in the GUI]

Here's the corresponding output of "rbd -p ceph ls -l":

[screenshot of the rbd -p ceph ls -l output]


Can I simply delete the 3 files (and the lines in the configuration of the VM)?
 

I changed the disk from ide to virtio (with snapshots taken before) and after the conversion I'm unable to delete the snapshots in the GUI ("disk ide0 not found" or something like that).
Thank you for the report! It turns out that this is a bug that can affect RBD (with krbd=0) and qcow2 images, because there snapshots are removed via QEMU rather than directly via the storage layer.

An initial patch has been sent for discussion: https://lists.proxmox.com/pipermail/pve-devel/2024-January/061205.html

Here's the corresponding output of "rbd -p ceph ls -l":

[screenshot of the rbd -p ceph ls -l output]


Can I simply delete the 3 files (and the lines in the configuration of the VM)?
I'd only do that as a last resort. It should be possible to remove the snapshots after either:
  • Shutting down the VM.
  • Or: Turning on krbd in the storage configuration and migrating the VM (of course you can turn krbd back off if there is a specific reason you don't want to use it); see the command sketch below.
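
A sketch of what the second option could look like (storage name, VMID and target node are placeholders):

pvesm set <storage> --krbd 1              # switch the RBD storage to the kernel client
qm migrate <VMID> <target-node> --online  # live-migrate so the VM reopens its disks via krbd
qm delsnapshot <VMID> <snapshot name>     # now the snapshot removal should work
pvesm set <storage> --krbd 0              # optionally switch back afterwards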
 
  • Shutting down the VM.
  • Or: Turning on krbd in the storage configuration and migrating the VM.
Shutting down the VM does the trick! Thank you...

But I saw that krbd isn't active on our ceph storage. Can it be activated without risk?
 
But I saw that krbd isn't active on our ceph storage. Can it be activated without risk?
I'm not aware of any issues with krbd. Some people report it to be faster for certain workloads, but AFAIK, in general there is no huge performance difference. As always, best to test it a bit first.

The patch was applied and is likely going to be rolled out in the coming weeks with qemu-server >= 8.0.11.
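
Once the update is out, you can check the installed version with, for example, pveversion -v | grep qemu-server.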
 
