Can't boot VM "Bitmap '' doesn't satisfy the constraints"

Oct 30, 2025
Hi,

we have been having trouble backing up a VM; every attempt fails with this error:
'/dev/proxmox_cluster1_1/vm-112-disk-1.qcow2': Bitmap '' doesn't satisfy the constraints

We then tried rebooting the VM, and now it no longer boots, failing with the same error.

This error message is quite obscure and we can't find much information about it online. If anyone knows anything about it, that would really help.

thanks!
 
Both qemu-img check and qemu-img info fail with the same error.

With lvcreate and dd we were able to create a copy of the disk.

On the copy we tried to remove the bitmap, but that also fails with the same error:
root@hi0s0046:/dev# qemu-img bitmap --remove /dev/proxmox_cluster1_1/vm-112-disk-1-copy2 ""
qemu-img: Could not open '/dev/proxmox_cluster1_1/vm-112-disk-1-copy2': Bitmap '' doesn't satisfy the constraints

As a next step we will try to convert the disk with the --skip-broken-bitmaps option (SRC and DST below are placeholders for the source and destination paths):
qemu-img convert -p \
-f qcow2 -O qcow2 \
--bitmaps --skip-broken-bitmaps \
SRC DST
 
In case anyone ever encounters this, this is how we fixed it. proxmox_cluster1_1 is one of our LVM VGs; replace it with the name of your own VG.

(Optional) Get an overview of which disks are present in the VG and which are active on the node
lvdisplay proxmox_cluster1_1 | grep -Ei 'status|path'

Read size from original disk
SIZE=$(lvs --noheadings -o lv_size --units B --nosuffix "/dev/proxmox_cluster1_1/vm-112-disk-7.qcow2" | xargs)

Create two new LVs, one as a copy and one as the final fixed disk
lvcreate -n vm-112-disk-7-copy.qcow2 -L "${SIZE}B" proxmox_cluster1_1
lvcreate -n vm-112-disk-7-fixed.qcow2 -L "${SIZE}B" proxmox_cluster1_1

Copy data from original disk to copy disk
dd if=/dev/proxmox_cluster1_1/vm-112-disk-7.qcow2 of=/dev/proxmox_cluster1_1/vm-112-disk-7-copy.qcow2 bs=64K status=progress

Define SRC Parameter (Copy Disk)
SRC=/dev/proxmox_cluster1_1/vm-112-disk-7-copy.qcow2

Dump the first 4 MiB (the header area) of the copy disk
dd if="$SRC" of=/root/vm-112-disk-7.qcow2.header.bin bs=1M count=4 status=progress

Search the header dump for the qcow2 bitmaps header-extension magic (0x23852875). Whatever byte offset this command returns, use it as the offset in the next command
LC_ALL=C grep -aob $'\x23\x85\x28\x75' /root/vm-112-disk-7.qcow2.header.bin

Define the offset (504 was the value returned in our case)
OFFSET=504

Overwrite the extension magic at that offset with 0xFFFFFFFF, so the broken bitmaps extension is no longer recognized
printf '\xFF\xFF\xFF\xFF' | dd of="$SRC" bs=1 seek=$OFFSET conv=notrunc status=none
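The search-and-patch step above can also be sketched in Python (pure illustration; run it only against a dd copy, never the original LV). The four bytes are the big-endian magic of the qcow2 "bitmaps" header extension; our understanding is that overwriting it with an unknown extension type makes QEMU skip the broken bitmap table instead of failing on it:

```python
# Python equivalent of the grep -aob / dd pipeline above.
# 0x23852875 is the qcow2 "bitmaps" header-extension magic.
import struct

BITMAPS_EXT_MAGIC = struct.pack(">I", 0x23852875)  # b"\x23\x85\x28\x75"

def patch_bitmaps_extension(path: str) -> int:
    """Find the bitmaps header-extension magic in the first 4 MiB of the
    image and overwrite it in place. Returns the offset that was patched."""
    with open(path, "r+b") as f:
        header = f.read(4 * 1024 * 1024)           # same window as the dd dump
        offset = header.find(BITMAPS_EXT_MAGIC)    # grep -aob equivalent
        if offset < 0:
            raise ValueError("bitmaps extension magic not found")
        f.seek(offset)
        f.write(b"\xFF\xFF\xFF\xFF")               # dd conv=notrunc equivalent
    return offset
```

The path you pass in should be the copy device (e.g. the -copy LV), not the production disk.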

Convert the patched copy disk into the fixed disk (qcow2 to qcow2)
qemu-img convert -p -f qcow2 -O qcow2 /dev/proxmox_cluster1_1/vm-112-disk-7-copy.qcow2 /dev/proxmox_cluster1_1/vm-112-disk-7-fixed.qcow2

Attach disk to VM
qm set 112 --scsi6 proxmox_cluster1_1:vm-112-disk-7-fixed.qcow2,discard=on,iothread=1,ssd=1
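To sanity-check whether the patch took, the qcow2 header-extension chain can be walked by hand. This is a sketch based on the qcow2 format (v3 headers store their own length at offset 100; each extension is a big-endian u32 type plus u32 length, with data padded to 8 bytes, terminated by type 0). After the patch, 0x23852875 should no longer appear in the list:

```python
# Sketch (not from the thread): list the extension type IDs in a qcow2 header.
import struct

QCOW2_MAGIC = b"QFI\xfb"
BITMAPS_EXT = 0x23852875        # bitmaps extension

def list_header_extensions(header: bytes):
    """Return the header-extension type IDs found in a qcow2 header blob."""
    assert header[:4] == QCOW2_MAGIC, "not a qcow2 image"
    version = struct.unpack(">I", header[4:8])[0]
    # v3 stores its header length at offset 100; v2 headers are fixed 72 bytes
    pos = struct.unpack(">I", header[100:104])[0] if version >= 3 else 72
    types = []
    while pos + 8 <= len(header):
        ext_type, length = struct.unpack(">II", header[pos:pos + 8])
        if ext_type == 0:                    # end-of-extension-area marker
            break
        types.append(ext_type)
        pos += 8 + (length + 7) // 8 * 8     # extension data is padded to 8
    return types
```

Feed it the header dump taken with dd earlier, e.g. `list_header_extensions(open("/root/vm-112-disk-7.qcow2.header.bin", "rb").read())`.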
 
Currently we are trying to find the cause of this. We opened tickets with both Proxmox and Veeam.
Looking through the logs we can reconstruct this timeline:

1. We shut down the VM to change the CPU type to v3 and install Windows updates. During the shutdown this error appears:
Jan 22 18:03:35 hi0s0046 QEMU[3316804]: kvm: Lost persistent bitmaps during inactivation of node 'fb92e149194ca81ff84c89a2be547af': Failed to write bitmap 'VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667' to file: No space left on device
2. After this error we are still able to boot the VM; at first there is no visible problem. ~3 hours later Veeam tries to back up the VM, which results in:
(screenshot of the failed backup job)
3. The VM still runs without problems, but the backup fails.
4. After this, we try to restart the VM as an attempt to resolve the backup problems. That's when the VM does not boot anymore; every start attempt ends in the "doesn't satisfy the constraints" error.

I found this bug report, which reads similar to what we encountered with the VM on 22.01., but it is marked as resolved.
https://bugzilla.redhat.com/show_bug.cgi?id=2147617
 
These are the Veeam logs from the day the backup first failed.

Section where we can see that it tries to remove a bitmap. In backups before the problem occurred, it didn't do this.
2026-01-22 21:12:47.1268 00007 [16319] INFO | [SshClientUtils]: Start executing ssh command "pvesh get storage/local --output json"
2026-01-22 21:12:48.0492 00007 [16319] INFO | [SshClientUtils]: The SSH command has been executed: status Code 0, result: "{"content":"vztmpl,backup,iso","digest":"3fab4365849627e3a0f96d3bde0e05dd73b9345c","path":"/var/lib/vz","storage":"local","type":"dir"}
", error: ""
2026-01-22 21:12:48.1136 00007 [16319] INFO | [SshClientUtils]: Start executing ssh command "qemu-img create -F qcow2 -b "/dev/proxmox_cluster1_1/vm-112-disk-1.qcow2" -f qcow2 /var/lib/vz/VeeamTmp_112_drv-scsi0_66fa7254-bd42-420b-acee-4c99e02cebc9.qcow2"
2026-01-22 21:12:48.2073 00007 [16319] INFO | [SshClientUtils]: The SSH command has been executed: status Code 0, result: "Formatting '/var/lib/vz/VeeamTmp_112_drv-scsi0_66fa7254-bd42-420b-acee-4c99e02cebc9.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=128849018880 backing_file=/dev/proxmox_cluster1_1/vm-112-disk-1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
", error: ""
2026-01-22 21:12:48.2073 00007 [16319] INFO | [VmbApiExtensions]: Checking available space on the snapshot storage...
2026-01-22 21:12:48.2073 00007 [16319] INFO | [VmbApiExtensions]: Snapshot storage on VeeamTmp_112_drv-scsi0 has 65% free space
2026-01-22 21:12:48.2073 00007 [16319] INFO | [NbdEngine]: Successfully prepared snapshot information for the disk "proxmox_cluster1_1:vm-112-disk-1.qcow2". The snapshot file path is "/var/lib/vz/VeeamTmp_112_drv-scsi0_66fa7254-bd42-420b-acee-4c99e02cebc9.qcow2"
2026-01-22 21:12:48.2256 00007 [16319] INFO | [NbdEngine]: The previous CBT tag (vmb): "{"checkpoint_id":"VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667","version":"2.0","disk_format":null,"custom_values":{}}"
2026-01-22 21:12:48.2256 00007 [16319] INFO | [BackupUtils]: CBT tag has version "2.0"
2026-01-22 21:12:48.2256 00007 [16319] INFO | [DirtyBitmapUtils]: Skipping the bitmap "VeeamTmp_VeeamZIP_00000000-0000-0000-0000-000000000000" from processing: System.Exception: The bitmap VeeamTmp_VeeamZIP_00000000-0000-0000-0000-000000000000 belongs to another job
2026-01-22 21:12:48.2256 00007 [16319] INFO | [DirtyBitmapUtils]: at Veeam.Vbf.BackupAgent.BackupProxmox.Utils.DirtyBitmapUtils.PrepareCurrentBitmapAsync(QmpCommands qmpCommands, BlockDevice device, String currentJobId, String previousCheckpointId, CancellationToken cancellationToken)
2026-01-22 21:12:48.2256 00007 [16319] INFO | [QmpCommands]: Removing the bitmap "VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667"...
2026-01-22 21:12:48.2256 00007 [16319] INFO | [QmpClient]: Executing the QMP command...
2026-01-22 21:12:48.2256 00007 [16319] INFO | [SshQmpSocket]: Text sent to the QMP device:
"{
"arguments": {
"node": "f9c7d8ab4d8eee1df874a157afa51fa",
"name": "VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667"
},
"execute": "block-dirty-bitmap-remove"
}"
2026-01-22 21:12:48.3682 00007 [16319] INFO | [SshQmpSocket]: Text received from QMP device:
"{"return": {}}"
2026-01-22 21:12:48.3683 00007 [16319] INFO | [QmpClient]: Received the next QMP response: "{"return": {}}"
2026-01-22 21:12:48.3683 00007 [16319] INFO | [QmpCommands]: Successfully the removed bitmap "VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667"
2026-01-22 21:12:48.3683 00007 [16319] INFO | [NbdEngine]: Successfully added the snapshot: Snapshot { DeviceName = drive-scsi0, DeviceNodeName = f9c7d8ab4d8eee1df874a157afa51fa, SnapshotName = VeeamTmp_112_drv-scsi0, BitmapName = VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_66fa7254-bd42-420b-acee-4c99e02cebc9, DisabledBitmapName = , DeviceFormat = qcow2, FilePath = /var/lib/vz/VeeamTmp_112_drv-scsi0_66fa7254-bd42-420b-acee-4c99e02cebc9.qcow2 }
Section with "satisfy constraints" error
2026-01-22 21:12:52.5903 00048 [16319] INFO | [QmpCommands]: Successfully started COW backup jobs
2026-01-22 21:12:52.5922 00048 [16319] INFO | [QmpCommands]: Obtaining block devices...
2026-01-22 21:12:52.5922 00048 [16319] INFO | [QmpClient]: Executing the QMP command...
2026-01-22 21:12:52.5923 00048 [16319] INFO | [SshQmpSocket]: Text sent to the QMP device:
"{
"execute": "query-block"
}"
2026-01-22 21:12:52.5938 00048 [16319] INFO | [SshQmpSocket]: Text received from QMP device:
"{"error": {"class": "GenericError", "desc": "Bitmap '' doesn't satisfy the constraints"}}"
2026-01-22 21:12:52.5938 00048 [16319] INFO | [QmpClient]: Received the next QMP response: "{"error": {"class": "GenericError", "desc": "Bitmap '' doesn't satisfy the constraints"}}"
2026-01-22 21:12:52.5955 00048 [16319] ERROR | [QmpClient]: Failed to execute the QMP command ["{
"execute": "query-block"
}"]
2026-01-22 21:12:52.5955 00048 [16319] ERROR | [QmpCommands]: Failed to obtain the list of block devices: Veeam.Vbf.Common.Exceptions.ExceptionWithDetail: Bitmap '' doesn't satisfy the constraints
2026-01-22 21:12:52.5955 00048 [16319] ERROR | [QmpCommands]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpClient.ReadExecutionResultAsync(CancellationToken cancellationToken)
2026-01-22 21:12:52.5955 00048 [16319] ERROR | [QmpCommands]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpClient.ExecuteAsync(BaseQmpCommand command, CancellationToken cancellationToken)
2026-01-22 21:12:52.5955 00048 [16319] ERROR | [QmpCommands]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpCommands.GetBlockDevicesAsync(CancellationToken cancellationToken)
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: "Failed to prepare disks for backup": Veeam.Vbf.Common.Exceptions.ExceptionWithDetail: Bitmap '' doesn't satisfy the constraints
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpClient.ReadExecutionResultAsync(CancellationToken cancellationToken)
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpClient.ExecuteAsync(BaseQmpCommand command, CancellationToken cancellationToken)
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: at Veeam.Vbf.Project.Utilities.Qemu.Qmp.QmpCommands.GetBlockDevicesAsync(CancellationToken cancellationToken)
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: at Veeam.Vbf.BackupAgent.BackupProxmox.Engine.NbdBackupEngine.UpdateBlockDevicesDictAsync(CancellationToken cancellationToken)
2026-01-22 21:12:52.5969 00048 [16319] ERROR | [NbdEngine]: at Veeam.Vbf.BackupAgent.BackupProxmox.Engine.NbdBackupEngine.PrepareDisksForBackupAsync(List`1 disksToBackup, IVbrVmBackupSession vbrBackupSession, Int32 vmId, Boolean isGuestProcessingEnabled, CancellationToken cancellationToken)
2026-01-22 21:12:52.6017 00048 [16319] INFO | [QmpCommands]: Stopping the NBD server...
2026-01-22 21:12:52.6017 00048 [16319] INFO | [QmpClient]: Executing the QMP command...
2026-01-22 21:12:52.6017 00048 [16319] INFO | [SshQmpSocket]: Text sent to the QMP device:
"{
"execute": "nbd-server-stop"
}"
2026-01-22 21:12:52.6032 00026 [16319] INFO | [SshQmpSocket]: Text received from QMP device:
"{"return": {}}"
2026-01-22 21:12:52.6033 00026 [16319] INFO | [QmpClient]: Received the next QMP response: "{"return": {}}"
2026-01-22 21:12:52.6033 00026 [16319] INFO | [QmpCommands]: Successfully stopped the NBD server
2026-01-22 21:12:52.6067 00048 [16319] INFO | [QmpCommands]: Stopping the COW backup jobs...
2026-01-22 21:12:52.6076 00048 [16319] INFO | [QmpCommands]: Canceling the block job "VeeamTmp_drive-scsi5"...
2026-01-22 21:12:52.6076 00048 [16319] INFO | [QmpClient]: Executing the QMP command...
2026-01-22 21:12:52.6087 00048 [16319] INFO | [SshQmpSocket]: Text sent to the QMP device:
"{
"arguments": {
"device": "VeeamTmp_drive-scsi5"
},
"execute": "block-job-cancel"
}"
So at the beginning of the backup job it can still query information from the disk, e.g. it sees that there is a bitmap.
Later in the job the error appears and every subsequent command against the disk fails. BUT the disk is still accessible to the VM.
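For reference, the QMP traffic in these logs is plain JSON over a socket. A minimal helper that builds the same messages the log shows (node and bitmap names copied from the log above, purely illustrative; this does not open a real QMP socket):

```python
import json

def qmp_cmd(execute, arguments=None):
    """Serialize a QMP command like the ones Veeam sends in the log."""
    msg = {"execute": execute}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

# The two commands visible in the log excerpt:
remove_bitmap = qmp_cmd(
    "block-dirty-bitmap-remove",
    {"node": "f9c7d8ab4d8eee1df874a157afa51fa",
     "name": "VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_"
             "4c7261fa-49ea-43cf-ad7c-bd7b3ce93667"})
query_block = qmp_cmd("query-block")
```

Sending these by hand against the VM's QMP socket (e.g. via `qm monitor` or socat) reproduces exactly the responses seen above: `{"return": {}}` for the remove, and the GenericError for query-block once the bitmap state is broken.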
 
So the first error we see is in the journal log.

Jan 22 18:03:35 hi0s0046 QEMU[3316804]: kvm: Lost persistent bitmaps during inactivation of node 'fb92e149194ca81ff84c89a2be547af': Failed to write bitmap 'VeeamTmp_6923045e-dac9-425d-b57f-8f5b1ad5801f_4c7261fa-49ea-43cf-ad7c-bd7b3ce93667' to file: No space left on device

I found this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=2147617
But it has been marked as fixed since 6.2-30.
BUT someone in this discussion posted the commands to reproduce this error, and for me the error is still reproducible. So now I am wondering whether this bug has been re-introduced for some reason. Maybe someone else can try this and confirm?

Scenario 1: commit with error
Test Steps:
1. Create lv devices
#qemu-img create -f raw test.img 400M
#losetup /dev/loop0 test.img
#pvcreate /dev/loop0
#vgcreate test /dev/loop0
#lvcreate -n base --size 128M test
#lvcreate -n top --size 128M test

2. Create the base image and add seven bitmaps to it.
#qemu-img create -f qcow2 /dev/test/base 128M
#qemu-img bitmap --add /dev/test/base stale-bitmap-1
#qemu-img bitmap --add /dev/test/base stale-bitmap-2
#qemu-img bitmap --add /dev/test/base stale-bitmap-3
#qemu-img bitmap --add /dev/test/base stale-bitmap-4
#qemu-img bitmap --add /dev/test/base stale-bitmap-5
#qemu-img bitmap --add /dev/test/base stale-bitmap-6
#qemu-img bitmap --add /dev/test/base stale-bitmap-7

3. Create snapshot image, add a bitmap to it
#qemu-img create -f qcow2 /dev/test/top -F qcow2 -b /dev/test/base
#qemu-img bitmap --add /dev/test/top good-bitmap

4. Fullwrite top
# qemu-io -f qcow2 /dev/test/top -c "write 0 126M"
wrote 132120576/132120576 bytes at offset 0
126 MiB, 1 ops; 00.20 sec (624.019 MiB/sec and 4.9525 ops/sec)

5. Commit top into base
#qemu-img commit -f qcow2 -t none -b /dev/test/base -d -p /dev/test/top
(100.00/100%)
qemu-img: Lost persistent bitmaps during inactivation of node '#block397': Failed to write bitmap 'stale-bitmap-7' to file: No space left on device
qemu-img: Lost persistent bitmaps during inactivation of node '#block397': Failed to write bitmap 'stale-bitmap-7' to file: No space left on device
qemu-img: Error while closing the image: Invalid argument
Edit: I misinterpreted the bug report. The bug was that previously no error was shown even though the bitmap was not written. So the "Failed to write" message IS the fix: it correctly reports that the bitmap was indeed not written.
 