Hi,
I am not super familiar with how this is implemented and meant to work, and I couldn't find any hints when searching the forum, so I wanted to briefly ask if anyone can comment.
This is on a Proxmox host running the latest 8.0.4.
The VM is using the qcow2 disk format.
The qcow2 is stored on a Proxmox VM storage pool - a local NFS storage host with gigabit Ethernet connectivity (i.e., modest but generally acceptable performance).
Basic scenario:
- snapshot taken yesterday to make sure it worked as expected
- snapshot was deleted / changes merged back in a short while later
- another snapshot was taken, and some work was done inside the VM
- at this point, looking in the Proxmox GUI under 'Snapshots' for the VM, we can see one snapshot present
- then this morning, approx. 12 hrs after the snapshot was created, I tried the snapshot delete-and-merge again
- by good (bad) luck, the NFS server was doing a RAID 5 parity check when the snapshot merge was kicked off. After a while the merge returned timeout errors because NFS performance was terrible
- made sure last night's backups to the PBS server are good, so we have a rollback in case of 'dammit' problems
- gently poked things over the following hour; had to unlock the VM, which was still flagged as locked for snapshot deletion. There was nothing visible in terms of extra snapshot data files in the directory where the VM's qcow2 files live. In hindsight, I am not sure we actually expect new files to exist, or whether the snapshots are purely internal to the qcow2 files themselves
- endgame: manually intervened and, in the Proxmox SSH console, edited out the snapshot stanza in the VM config file under /etc/pve/qemu-server
i.e., the top half of the file is content I recognize, and the lower half was content related to the snapshot state. I kept a copy of the conf file prior to the edit.
Once that was done, we no longer see any snapshot listed in the Proxmox GUI for this VM.
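(As a side note, from what I understand qcow2 snapshots on file-based storage are internal to the image file, so no extra files should appear; the snapshot table can be inspected directly with qemu-img while the VM is shut down. A sketch - the storage path, VM ID and disk name below are made up, substitute your own:)

```shell
# List the internal snapshot table of a qcow2 image (VM should be
# shut down, or qemu-img may report the image as locked/in use).
# Path is illustrative -- adjust for your NFS storage, VM ID and disk.
qemu-img snapshot -l /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2

# "qemu-img info" shows the same snapshot table plus image details:
qemu-img info /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2
```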
However, under the hood, in the "Monitor" tab we can see this:
Type 'help' for help.
# info snapshots
List of snapshots present on all disks:
None
List of partial (non-loadable) snapshots on 'drive-virtio1':
ID   TAG                    VM SIZE   DATE                  VM CLOCK       ICOUNT
1    BeforeForceReplicate   0 B       2023-11-03 17:31:47   48:41:43.178
2    Before_____014Rejoin   0 B       2023-11-04 09:50:28   15:01:01.588
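(For anyone landing here in the same state: I believe leftover internal snapshot entries like those can be deleted directly with qemu-img while the VM is shut down - a sketch only, with a made-up path; the IDs are the ones from the listing above:)

```shell
# Delete leftover internal snapshots by ID (or by tag), with the VM
# shut down. Path is illustrative; IDs 1 and 2 are from the
# "info snapshots" listing above.
qemu-img snapshot -d 1 /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2
qemu-img snapshot -d 2 /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2

# Verify the snapshot table is now empty:
qemu-img snapshot -l /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2
```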
So there is clearly a trace of snapshot activity having taken place in the qcow2, but I think the 'VM SIZE' column is telling me we've got zero bytes in the snapshots.
I am curious - for anyone familiar with what we expect to see here in the monitor's 'info snapshots' output - is it normal that, after a qcow2 has had a snapshot taken and then deleted, some placeholder will persist forever, even if there is no more data associated with the snapshot and we can assume it is not active?
or
would I be better off doing something to 'export' the VM and 'import' it, in order to get back to a clean slate with no lingering trace of the snapshots?
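(One option I'm considering, short of a full export/import - a sketch with made-up paths: as I understand it, `qemu-img convert` writes only the live data into a brand-new qcow2, so the new image should carry no internal snapshot table at all:)

```shell
# With the VM shut down, rewrite the disk into a fresh qcow2.
# Only the active data is copied; internal snapshot metadata is not.
# Paths are illustrative.
cd /mnt/pve/nfs-store/images/100
qemu-img convert -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1.clean.qcow2
qemu-img snapshot -l vm-100-disk-1.clean.qcow2   # expect an empty table
mv vm-100-disk-1.clean.qcow2 vm-100-disk-1.qcow2
```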
I can't even tell whether I should expect a PBS backup to preserve the internal snapshot metadata. I am assuming yes, because I am guessing PBS is going to back up the qcow2 file as it stands, and if there is internal snapshot metadata, that is part of the bundle.
Anyhoo - it may be that this is all good and fine.
In hindsight, I now know not to play with snapshots while my NFS performance is poor, as it creates drama with timeouts.
Ideally my goal here is to (a) better understand this for future reference, and (b) be reasonably confident that the VM in question is good and not going to blow up with unbounded COW data growth over time.
Right now the VM in question is booted and working normally / looks good,
so this is kind of a 'sanity check' sort of query.
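(For my own sanity check, I'm thinking `qemu-img check` should confirm whether the interrupted merge left anything inconsistent behind - a sketch, with a made-up path:)

```shell
# Consistency-check the qcow2 with the VM shut down; this reports
# leaked or corrupt clusters if the interrupted merge left any.
# Path is illustrative.
qemu-img check /mnt/pve/nfs-store/images/100/vm-100-disk-1.qcow2
```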
Thank you for the help / reading this far.
Tim