Hi there,
I encountered a problem while deleting a snapshot. The error was:
VM 106 qmp command 'blockdev-snapshot-delete-internal-sync' failed - Snapshot with id 'null' and name 'pre-2025_01_22-01' does not exist on device 'drive-scsi0'
I use Nakivo B&R v11 to back up the VMs, and it was unable to perform a backup because it could not create a temporary snapshot. That is why I tried to delete the manually taken snapshot, which ended with the error above. I looked a bit further and tried
qm delsnapshot 106 pre-2025_01_22-01 --force
which seemed to remove the snapshot from the VM config, but it was still not possible to create new snapshots. The PVE version is 8.3.0.
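From the qemu-img man page, my understanding is that the leftover internal snapshot could also be listed and removed directly at the image level, but I have not dared to run the delete yet (I assume it should only be done with VM 106 shut down and a backup at hand):
Code:
# list internal snapshots stored inside the qcow2 (read-only check)
qemu-img snapshot -l /var/lib/vz/images/106/vm-106-disk-1.qcow2

# assumption: delete the leftover snapshot by name, only with the VM powered off
qemu-img snapshot -d pre-2025_01_22-01 /var/lib/vz/images/106/vm-106-disk-1.qcow2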
The VM images are file-based qcow2 images. The weird thing is that there is now an additional qcow2 disk.
- files:
Code:
26G -rw-r----- 1 root root 32G Feb 4 07:55 a41d03c1-bffe-4b87-8e3a-b3c414ca4651.qcow2
620K -rw-r----- 1 root root 4.4M Dec 19 12:11 vm-106-cloudinit.qcow2
836K -rw-r----- 1 root root 961K Feb 1 09:53 vm-106-disk-0.qcow2
54G -rw-r----- 1 root root 54G Jan 24 20:00 vm-106-disk-1.qcow2
- image info:
Code:
image: vm-106-disk-1.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 53.7 GiB
cluster_size: 65536
Snapshot list:
ID TAG VM_SIZE DATE VM_CLOCK ICOUNT
1 pre-2025_01_22-01 0 B 2025-01-22 20:40:34 0824:29:25.646 --
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    bitmaps:
        [0]:
            flags:
                [0]: in-use
                [1]: auto
            name: bitmap-nbr-302b0fbc-3569-4c84-911a-898809f56659
            granularity: 4194304
        [1]:
            flags:
                [0]: in-use
                [1]: auto
            name: bitmap-nbr-cd3ec73c-104c-4346-b735-b91a45011af9
            granularity: 4194304
        [2]:
            flags:
                [0]: auto
            name: bitmap-nbr-c9f1c0c9-2386-4b2c-baa7-fce5668d9aa8
            granularity: 4194304
        [3]:
            flags:
                [0]: auto
            name: bitmap-nbr-ef1dc071-eb18-4f5f-8584-4fe5573a7fe1
            granularity: 4194304
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: vm-106-disk-1.qcow2
    protocol type: file
    file length: 53.9 GiB (57913049088 bytes)
    disk size: 53.7 GiB
Code:
image: a41d03c1-bffe-4b87-8e3a-b3c414ca4651.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 25.1 GiB
cluster_size: 65536
backing file: /var/lib/vz/images/106/vm-106-disk-1.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: a41d03c1-bffe-4b87-8e3a-b3c414ca4651.qcow2
    protocol type: file
    file length: 31.8 GiB (34128920576 bytes)
    disk size: 25.1 GiB
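To double-check how the two files relate, I assume the whole chain can be listed in one go (path taken from the backing file line above):
Code:
qemu-img info --backing-chain /var/lib/vz/images/106/a41d03c1-bffe-4b87-8e3a-b3c414ca4651.qcow2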
The timestamp of vm-106-disk-1.qcow2 is 20:00, which is when the Nakivo backup job runs. So it seems to me that something went wrong there. VMs without a manual snapshot do not have this problem.
Is there a way to solve this, e.g. by merging the images? A rough sketch of what I think that would look like is below. I also don't know what will happen if I restart the VM, as the image in the config is the "old" one.
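If merging is indeed the way to go, this is roughly what I would expect it to look like based on the qemu-img documentation; I have not tried it and would only attempt it with VM 106 shut down and a backup of both files:
Code:
# assumption: VM 106 is powered off and both qcow2 files are backed up
# commit the overlay's changes back into its backing file vm-106-disk-1.qcow2
qemu-img commit /var/lib/vz/images/106/a41d03c1-bffe-4b87-8e3a-b3c414ca4651.qcow2
After that I would expect the overlay file to be removable and the config to keep pointing at vm-106-disk-1.qcow2, but I'd like to have that confirmed before touching anything.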
Recreating the VMs would be my last resort. Any help and tips are appreciated.
Thanks,
Morris