Query on the dirty bitmap

Prashant Patil

Feb 20, 2025
Hello All,
Hope this message finds you well.

I have been experimenting with Proxmox for a while now and have run into a problem specific to dirty bitmaps. I enabled a bitmap on a qcow2 disk image using the 'qemu-img bitmap' command and exposed it over a Unix socket using the 'qemu-nbd' command. Now, when I read the bitmap using the 'qemu-img map' command with the 'x-dirty-bitmap=qemu:dirty-bitmap:{bitmap}' option, I get a single extent indicating that the entire disk is dirty. Note that the disk is 5 GB in size, holds only a few MB of data, and only a very small amount of data was added after the bitmap was enabled. The bitmap output is pasted below.

[{ "start": 0, "length": 5368709120, "depth": 0, "present": true, "zero": false, "data": true, "compressed": false, "offset": 0}]

Can someone please help me understand why the bitmap content shows the entire disk as dirty?

Regards
Prashant
 
Hi,
could you please share the commands you are using as well as the output of pveversion -v?
 
Here are the commands that were run:
  1. # qemu-img bitmap -f qcow2 --add /mnt/pve/Riyaj-ext4/images/101/vm-101-disk-4.qcow2 b0
  2. # qemu-nbd --fork --persistent --shared=5 --read-only -f qcow2 -B b0 /mnt/pve/Riyaj-ext4/images/101/vm-101-disk-4.qcow2 --socket=/var/log/pp/sock4-0
  3. # IMG=driver=nbd,server.type=unix,server.path=/var/log/pp/sock4-0
  4. # qemu-img map --output=json --image-opts "$IMG,x-dirty-bitmap=qemu:dirty-bitmap:b0"
[{ "start": 0, "length": 5368709120, "depth": 0, "present": true, "zero": false, "data": true, "compressed": false, "offset": 0}]

# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.3.0 (running version: 8.3.0/c1689ccb1065a83b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-4
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.0
libpve-storage-perl: 8.2.9
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.2.9-1
proxmox-backup-file-restore: 3.2.9-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.1
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-1
pve-ha-manager: 4.0.6
pve-i18n: 3.3.1
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.0
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 
Code:
[I] root@pve8a1 ~# qemu-img create -f qcow2 disk.qcow2 1G
Formatting 'disk.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1073741824 lazy_refcounts=off refcount_bits=16
[I] root@pve8a1 ~# qemu-img bitmap -f qcow2 --add disk.qcow2 b0
[I] root@pve8a1 ~# dd if=/dev/urandom bs=1K count=1 | qemu-img dd of=disk.qcow2 -n -f raw -O qcow2 osize=1G isize=1024
1+0 records in
1+0 records out
1024 bytes (1.0 kB, 1.0 KiB) copied, 2.7382e-05 s, 37.4 MB/s
[I] root@pve8a1 ~# qemu-img info disk.qcow2
image: disk.qcow2
file format: qcow2
virtual size: 1 GiB (1073741824 bytes)
disk size: 408 KiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    bitmaps:
        [0]:
            flags:
                [0]: auto
            name: b0
            granularity: 65536
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: disk.qcow2
    protocol type: file
    file length: 640 KiB (655872 bytes)
    disk size: 408 KiB
[I] root@pve8a1 ~# qemu-nbd --fork --persistent --shared=5 --read-only -f qcow2 -B b0 disk.qcow2 --socket=/tmp/nbd.sock
[I] root@pve8a1 ~# qemu-img map --output=json --image-opts "driver=nbd,server.type=unix,server.path=/tmp/nbd.sock,x-dirty-bitmap=qemu:dirty-bitmap:b0"
[{ "start": 0, "length": 65536, "depth": 0, "present": false, "zero": false, "data": false, "compressed": false},
{ "start": 65536, "length": 1073676288, "depth": 0, "present": true, "zero": false, "data": true, "compressed": false, "offset": 65536}]
Works as expected here. In the result, only the first cluster is dirty. I think what's confusing is the naming in the qemu-img map output: the labels are just recycled from the usual allocation mapping, so with the x-dirty-bitmap option it is the ranges reported with "data": false that are dirty. To get a more readable output, use
Code:
[I] root@pve8a1 ~# nbdinfo --map=qemu:dirty-bitmap:b0 'nbd+unix:///?socket=/tmp/nbd.sock' --json
[
{ "offset": 0, "length": 65536, "type": 1, "description": "dirty"},
{ "offset": 65536, "length": 1073676288, "type": 0, "description": "clean"}
]
 
Yes, I also tried creating a disk image with 'qemu-img create', enabling a bitmap on it and adding some data with 'qemu-io'; when I then read the bitmap contents with 'qemu-img map', I do get valid changed-block/extent information. But when I do the same steps with a disk attached to a VM (i.e. without creating the qcow2 image manually), I do not get the expected output for the dirty bitmap. Why is the behavior different when the disk image is attached to a VM? Are there different steps to get valid bitmap information in that case?
 
I was able to read the bitmap contents of a disk image attached to a running VM with a series of QMP commands, block-dirty-bitmap-add, block-dirty-bitmap-disable, nbd-server-start and nbd-server-add, followed by qemu-img map to read the bitmap contents. Please let me know if there are simpler steps to do the same thing.
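For reference, here is roughly the sequence I used, as a minimal sketch. The VM ID (101), the QMP socket path (/var/run/qemu-server/101.qmp), the drive node name (drive-scsi0), the bitmap name (b0) and the export socket path are assumptions from my setup; adjust them to yours. Note that Proxmox VE management itself talks to this QMP socket, so holding a connection open can interfere with it.
Code:
# helper: open the QMP socket, do the mandatory capabilities handshake,
# send one command and keep the connection open briefly for the reply
SOCK=/var/run/qemu-server/101.qmp
qmp() { (echo '{"execute":"qmp_capabilities"}'; echo "$1"; sleep 1) | socat - "UNIX-CONNECT:$SOCK"; }

# add a bitmap and disable it so its contents stay frozen while being read
qmp '{"execute":"block-dirty-bitmap-add","arguments":{"node":"drive-scsi0","name":"b0"}}'
qmp '{"execute":"block-dirty-bitmap-disable","arguments":{"node":"drive-scsi0","name":"b0"}}'

# export the disk together with the bitmap over NBD
qmp '{"execute":"nbd-server-start","arguments":{"addr":{"type":"unix","data":{"path":"/tmp/vm-nbd.sock"}}}}'
qmp '{"execute":"nbd-server-add","arguments":{"device":"drive-scsi0","writable":false,"bitmap":"b0"}}'

# read the bitmap contents from the export
qemu-img map --output=json --image-opts "driver=nbd,server.type=unix,server.path=/tmp/vm-nbd.sock,x-dirty-bitmap=qemu:dirty-bitmap:b0"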

Regards
Prashant
 
I was able to read the bitmap contents of a disk image attached to a running VM with a series of QMP commands, block-dirty-bitmap-add, block-dirty-bitmap-disable, nbd-server-start and nbd-server-add, followed by qemu-img map to read the bitmap contents. Please let me know if there are simpler steps to do the same thing.
No, I don't think there are simpler steps if you want an NBD export of the bitmap. (Of course the need for block-dirty-bitmap-disable depends on the exact use case).
 
No, I don't think there are simpler steps if you want an NBD export of the bitmap. (Of course the need for block-dirty-bitmap-disable depends on the exact use case).
What are the steps if an NBD export of the bitmap is not needed? I only want to read the bitmap of a running VM's disk.
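For reference, as far as I can tell the dirty extents themselves are only exposed through an NBD export, but bitmap metadata (name, granularity, count of dirty bytes, status flags) can be inspected without one via the query-named-block-nodes QMP command. A sketch, reusing the qmp helper and the assumed socket path from my earlier post:
Code:
# each returned node carries a "dirty-bitmaps" list with the bitmap's
# name, granularity, count (of dirty bytes) and recording/busy flags
qmp '{"execute":"query-named-block-nodes"}'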
 
Hello,

I am trying to get the dirty bitmaps/changed blocks for incremental backups of a Proxmox VM. I tried all the commands mentioned in this thread, but offsets are not being identified as 'dirty'.

1) qemu-img bitmap --add -f qcow2 disk.qcow2 bitmap1 OR block-dirty-bitmap-add (tried both ways to add the bitmap)

2) qemu-nbd --fork --persistent --shared=5 --read-only -f qcow2 -B bitmap1 disk.qcow2 --socket=/tmp/nbd.sock

3) nbdinfo --map=qemu:dirty-bitmap:bitmap1 'nbd+unix:///?socket=/tmp/nbd.sock' --json

Only when I write to the image directly with qemu-io (qemu-io -f qcow2 disk.qcow2 -c "write 0 1024") are offsets identified as 'dirty'. When I instead download, copy, move or install apps or files inside the guest, nbdinfo does not identify anything as dirty and reports the whole disk as 'clean':
[
{ "offset": 0, "length": 34359738368, "type": 0, "description": "clean"}
]

Can someone please help me with this? I want to get the dirty bitmaps of the disk for incremental backups.

Regards,
Arshiya
 
Hi,
did you sync the disks inside the guest? If the guest is running, you should always use the block-dirty-bitmap-add QMP command. What parameters did you use exactly?
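To illustrate (a minimal sketch; the VM ID 101, the QMP socket path and the node name drive-scsi0 are assumptions, adjust them to your VM):
Code:
# inside the guest: flush pending writes so they actually reach the disk image
sync

# on the host: add the bitmap via QMP while the VM is running, instead of
# modifying the qcow2 with qemu-img while QEMU has the image open
(echo '{"execute":"qmp_capabilities"}'; \
 echo '{"execute":"block-dirty-bitmap-add","arguments":{"node":"drive-scsi0","name":"bitmap1","persistent":true}}'; \
 sleep 1) | socat - UNIX-CONNECT:/var/run/qemu-server/101.qmp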

If you want to implement a backup solution for Proxmox VE, it's strongly recommended to use the backup provider API:
https://lore.proxmox.com/pve-devel/20250404133204.239783-1-f.ebner@proxmox.com/ (skip the changelogs at the beginning to get to the interesting description)
https://git.proxmox.com/?p=pve-stor...2d57f43b0802b8ce6d0a75ff43de5ce858767;hb=HEAD (best to read it with perldoc /path/to/Base.pm on your local system)

Some examples are here:
https://lore.proxmox.com/pve-devel/20250407120439.60725-1-f.ebner@proxmox.com/