So this means that a «storage move» (like from one Ceph pool to another, or from Ceph to local or NFS) would kill the bitmap?
My current understanding is that, yes, the bitmap can potentially be dropped if the live migration involves a storage migration from one kind of storage to another, or if the storages are configured differently. For example, even with a local ZFS storage on each node, the two sides might have different zfs/zpool properties (see the manual pages `man zfsprops` and `man zpoolprops`). Such differences can lead to the target disk image being allocated an ever-so-slightly bigger size than the source disk image, e.g. if the target storage has a coarser allocation granularity and the disk image size is not aligned to it. The bitmap tracks whether each block in the guest disk has seen changes, so if the migration results in a disk image with a different number of blocks, I can imagine the bitmap being considered invalid and discarded.
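To illustrate the rounding effect (this is just a toy sketch of the arithmetic, not actual Proxmox or QEMU code): a storage that allocates in fixed-size units rounds any unaligned image size up to the next multiple of its granularity, so the same logical image can end up with a different allocated size on the target.

```python
def aligned_size(size_bytes: int, granularity: int) -> int:
    """Round size_bytes up to the next multiple of granularity,
    as a storage with fixed allocation units would."""
    return -(-size_bytes // granularity) * granularity

# Hypothetical example: a 10 GiB image with 512 extra bytes on a
# storage using 8 KiB allocation units gets rounded up, so the
# target image is slightly bigger than the source.
src = 10 * 2**30 + 512          # 10737418752 bytes on the source
dst = aligned_size(src, 8192)   # allocated size on the target
print(dst - src)                # extra bytes added by alignment
```

If the source and target sizes disagree like this, the per-block bookkeeping of the bitmap no longer matches the new image.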