existing bitmap was invalid and has been cleared

TwiX

Renowned Member
Feb 3, 2015
Hi,

I don't know why, but since upgrading to v6.3-2 (with Ceph Octopus), day after day none of my backups are using dirty bitmaps any more (PBS v1.0-5).
All the logs show: existing bitmap was invalid and has been cleared.

What can I check to find out what is going on?

PVE
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.6-pve1
ceph-fuse: 15.2.6-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

PBS
Code:
proxmox-backup: 1.0-4 (running kernel: 5.4.73-1-pve)
proxmox-backup-server: 1.0.5-1 (running version: 1.0.5)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ifupdown2: 3.0.0-1+pve3
libjs-extjs: 6.0.1-10
proxmox-backup-docs: 1.0.4-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-xtermjs: 4.7.0-3
smartmontools: 7.1-pve2
zfsutils-linux: 0.8.5-pve1
 
Could the reason be that I run 2 backups via several PBS instances (local + external Tuxis)?
 
Could the reason be that I run 2 backups via several PBS instances (local + external Tuxis)?
yes, the dirty bitmap can only be valid for the last backup made, since otherwise you may miss some chunks

an example:

a VM has 2 chunks: A and B
first backup to PBS1: uploads chunks A and B
chunk B changes
second backup to PBS2 with the dirty bitmap: only chunk B gets uploaded
now chunk A is missing on PBS2

so every time the last backup snapshot on the target server does not coincide with the last
backup we made, we have to invalidate the dirty bitmap
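The rule can be sketched as a toy Python model (hypothetical names and a deliberately simplified view; the real check lives inside the QEMU/PBS integration, not in code like this):

```python
def bitmap_usable(our_last_backup: str, server_last_snapshot: str) -> bool:
    """Toy model of the rule above: the dirty bitmap only records changes
    since *our* last backup, so it is only safe to use when that backup is
    also the newest snapshot on the target server."""
    return our_last_backup == server_last_snapshot

# Alternating targets invalidates the bitmap every time:
# after backing up to PBS1, PBS2's newest snapshot is not ours.
assert bitmap_usable("vm100/2020-12-01T01:00", "vm100/2020-12-01T01:00")
assert not bitmap_usable("vm100/2020-12-01T01:00", "vm100/2020-11-30T21:00")
```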
 
I just came across this problem too.

So just to get this right: if we want to back up twice, e.g. once to a local datacenter and once to a remote datacenter, we lose all the goodness of incremental backups and are effectively doing full backups every time?
 
I just came across this problem too.

So just to get this right: if we want to back up twice, e.g. once to a local datacenter and once to a remote datacenter, we lose all the goodness of incremental backups and are effectively doing full backups every time?
To get this right: you lose the dirty bitmap, so the backup has to read all the data again, but the upload will still be incremental as usual (unchanged chunks are not re-uploaded).
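The difference can be illustrated with a toy dedup loop in Python (hypothetical names, not PBS code): without a bitmap the client must read and hash the whole disk again, but it still only uploads chunks the server does not already have.

```python
import hashlib

def incremental_backup(disk_chunks, server_known_digests):
    """Toy model of a PBS backup without a dirty bitmap: every chunk is
    read and hashed (slow), but only previously unknown chunks are
    actually uploaded (so the backup stays incremental)."""
    uploaded = []
    for chunk in disk_chunks:                    # full read of the disk
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_known_digests:   # dedup: skip known chunks
            server_known_digests.add(digest)
            uploaded.append(digest)
    return uploaded

known = set()
first = incremental_backup([b"A", b"B"], known)    # both chunks uploaded
second = incremental_backup([b"A", b"B2"], known)  # only the changed chunk
assert len(first) == 2 and len(second) == 1
```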
 
I just wanted to chime in on this. We recently began migrating to PBS from VM-side backups. We have run a combination of 'internal' / 'NAS' / 'external' backups: all the SMB servers we have set up have an internal backup SSD and an onsite external NAS backup, followed by an offsite backup.

The migration to PBS was driven by a multitude of factors, but one of the main ones was the time the backups were taking; the way PBS works reduced the backup times by many hours and standardized our configuration across many different client setups.

Our first migration step was to transition the internal backups to the PBS system, and it worked fantastically. After a few weeks of that working perfectly fine, including testing individual file recovery and full disaster recovery, we began to set up PBS with a 2nd datastore to back up to the onsite NAS.

Once this was set up, we noticed the "dirty-bitmap status: existing bitmap was invalid and has been cleared" message, as well as the backup times increasing to problematically long timeframes. Our mistaken assumption was that a bitmap would be stored for each VM per datastore, independently.

I believe this is a major limitation of Proxmox Backup Server, and I just have a few questions.

1) Could a Proxmox staff member confirm whether this is something that is going to be added as a feature, or comment on whether the design of PBS makes per-datastore bitmaps impossible and thus very unlikely to be added?

2) Is it possible to sync PBS backups to an external location that does not run PBS? E.g. we manually schedule a PBS backup using cron, followed by an rsync to an onsite NAS? (I know they would not show up in the PBS system, but it could be a temporary workaround.)

3) Is there any option for installing the full Proxmox Backup Server on a prebuilt NAS (e.g. Synology/QNAP)?

For now I am going to set up a 2nd PBS instance within a VM and see how well that works.
 
1) Could a Proxmox staff member confirm whether this is something that is going to be added as a feature, or comment on whether the design of PBS makes per-datastore bitmaps impossible and thus very unlikely to be added?
i am not sure if it's possible to have multiple dirty bitmaps per disk, but what exactly is the use case here?
would it not be better (and more efficient) to back up to one location and sync that location to the other PBS instances?
that way the bitmap is used and you only back up each block once

2) Is it possible to sync PBS backups to an external location that does not run PBS? E.g. we manually schedule a PBS backup using cron, followed by an rsync to an onsite NAS? (I know they would not show up in the PBS system, but it could be a temporary workaround.)
you can rsync the PBS datastore somewhere else, but there are no consistency guarantees then (some backups may reference chunks that do not exist anymore, or you might have chunks that are no longer used)

3) Is there any option for installing the full Proxmox Backup Server on a prebuilt NAS (e.g. Synology/QNAP)?
if that system can run x86_64 Debian, you should be able to install PBS (e.g. in a VM/container)
 
Hi,

In order to avoid as many "existing bitmap was invalid and has been cleared" messages as possible with multiple PBS instances, this is what I do:

@01:00
Backup to Internal PBS => existing bitmap was invalid and has been cleared
@10:00
Backup to Internal PBS => OK (very fast)
@15:00
Backup to Internal PBS => OK (very fast)
@18:00
Backup to Internal PBS => OK (very fast)
@21:00
Backup to external Tuxis PBS => existing bitmap was invalid and has been cleared
@01:00
Backup to Internal PBS => existing bitmap was invalid and has been cleared

and so on...
 
i am not sure if it's possible to have multiple dirty bitmaps per disk, but what exactly is the use case here?
would it not be better (and more efficient) to back up to one location and sync that location to the other PBS instances?
that way the bitmap is used and you only back up each block once


you can rsync the PBS datastore somewhere else, but there are no consistency guarantees then (some backups may reference chunks that do not exist anymore, or you might have chunks that are no longer used)


if that system can run x86_64 Debian, you should be able to install PBS (e.g. in a VM/container)

Our use case is a growing number of small businesses whose datasets have begun to accelerate in size, which is challenging our existing backup methods. What I want is the most reliable and efficient backup they can afford. Budgets are not big in my town, and many businesses are stretched extremely thin right now.

My configuration for clients has been, for quite a number of years:
1) an internal local disk backup
2) an onsite, external-to-server backup (we have been using small NAS systems for this)
3) an offsite external backup (we use an MSP-type backup system that backs up to our own hardware)

We want image (full-recovery-type) backups at all levels.
Internet capacity is limited to 10 Mbps upload in nearly all cases; the odd location may have access to 20 Mbps upload.
Datasets vary widely in size, but we want to ensure we can back up at least 3 TB to all three destinations within a maximum of 10 hours.
Since internet capacity is limited, the faster we can complete the local backups, the more time the offsite backup has to work with.

The current MSP software has been struggling, especially on the block compare to find out what changed, so with PBS's bitmap comparison we were seeing greatly improved backup times, by virtue of reducing the time to decide what to back up from many hours to minutes.

The single-bitmap-per-VM problem, however, removes that advantage. So now I am experimenting with PBS to find the best configuration to achieve this, or to see whether we need to explore other alternatives.

I am setting up another bench config to see if I can find a workaround.

Test 1) We run a PBS on the same hardware as the PVE server. This PBS is used to back up to the local internal disk/datastore. Then I add localhost as a 'Remote' and configure a sync to pull the internal disk datastore to the NAS datastore.

Test 2) We run a PBS on the same hardware as the PVE server. This PBS is used to back up to the local internal disk/datastore. Then I have a 2nd PBS set up within a VM on the same server; that PBS is configured with a datastore pointing at the onsite NAS. I schedule the remote/sync so the VM PBS pulls the backups from the bare-metal PBS (internal disk datastore) to the NAS datastore.

My focus is on the localized backups to the internal disk and the NAS. If I can get those to work, I'll move on to seeing if I can replace the MSP backup software we use with PBS.
 
I just wanted to provide an update on this. We have been running a couple of deployments of PVE/PBS with a local disk datastore, then added a 2nd datastore pointing to the NAS. We configured a Remote and a Sync as 'localhost', and backups and restores are working great.

The dirty-bitmap status for backups to the local disk/datastore comes back OK, so fast incremental mode is working properly.
The remote sync then triggers every half hour and updates the external NAS.

We have completed a disaster recovery test by connecting the NAS to a new PVE/PBS setup and 'importing' the existing datastore, then performed a restore without issue.

Once I get time to test a remote (external) sync in addition to what we are doing now, I will provide another update.

For now this is working quite well.
 
i am not sure if it's possible to have multiple dirty bitmaps per disk, but what exactly is the use case here?
would it not be better (and more efficient) to back up to one location and sync that location to the other PBS instances?
that way the bitmap is used and you only back up each block once
So, I have a store 'pool' and a clone/instance of PBS with another store called "pool2".
I make pool2 a remote that "syncs" from the 1st instance, 'pool'?
Ugh. I have 5+ stores I need to duplicate as "remotes" simply to keep dirty bitmaps...
 
The current MSP software has been struggling, especially on the block compare to find out what changed, so with PBS's bitmap comparison
I just wanted to provide an update on this. We have been running a couple of deployments of PVE/PBS with a local disk datastore, then added a 2nd datastore pointing to the NAS. We configured a Remote and a Sync as 'localhost', and backups and restores are working great.

The dirty-bitmap status for backups to the local disk/datastore comes back OK, so fast incremental mode is working properly.
The remote sync then triggers every half hour and updates the external NAS.

We have completed a disaster recovery test by connecting the NAS to a new PVE/PBS setup and 'importing' the existing datastore, then performed a restore without issue.

Once I get time to test a remote (external) sync in addition to what we are doing now, I will provide another update.

For now this is working quite well.
Can you explain this a bit more?

So: one PBS server, one datastore...
Create a remote called "localhost"
Same creds, login, etc.
On the "new" datastore, have a sync job using "localhost" and select your main store? This is what I am testing now; that's what I THINK you mean.

So your new store is syncing your main.

This method doesn't really work for differently sized disks (e.g. you can't sync across randomly, or split the sync to different stores for differing disk sizes), so I may need to split up smaller backups to a different store and sync it to similarly sized disks.
So two 16 TB disks can probably sync each other...
But my several 1 TB disks for smaller apps will need separate backup jobs to a new main store, and then I can sync it to the similarly sized spare disks.
Although, if the app is less than 300-600 GB, I may just be able to leave it the way it is now and recreate bitmaps...

Perhaps add some flexibility here and let us select specific backups to sync to another source, because syncing everything from one source to another may not be wanted (as I say above, due to differing space).
 
How should this work with replicated/migrated VMs?
Cluster, ZFS, PVE replication between nodes, one PBS server.
A VM gets backed up on node A, then migrated to node B.
The backup from node B says the bitmap is invalid.
Is this correct behaviour?
What will happen when the VM migrates back to A?
 
the bitmap should only be invalidated if you
- change the backup target (this thread) or the encryption mode
- change something about the disk (size, storage/format, ..)
- purge/forget the backup snapshot the bitmap represents

it is also lost if you stop the VM in any fashion other than live migration. if you do a live migration and the bitmap becomes invalid as a result, please post details (e.g. backup task log, migration task log, backup task log, ideally back to back, to rule out any interference from other actions).
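Those conditions can be summarised as a small predicate (a hypothetical Python model with made-up field names, not the actual QEMU/PBS code):

```python
from dataclasses import dataclass, replace

@dataclass
class BackupState:
    """Toy model of the state the dirty bitmap is tied to (hypothetical fields)."""
    target: str          # PBS host + datastore the bitmap was made against
    encryption_mode: str
    disk_size: int       # GiB
    disk_format: str
    vm_running: bool     # the bitmap lives in QEMU memory, lost on VM stop

def bitmap_valid(prev: BackupState, cur: BackupState, snapshot_exists: bool) -> bool:
    """The bitmap survives only if target, encryption, and disk are unchanged,
    the reference snapshot still exists, and the VM was never stopped
    (other than via live migration)."""
    return (prev.target == cur.target
            and prev.encryption_mode == cur.encryption_mode
            and prev.disk_size == cur.disk_size
            and prev.disk_format == cur.disk_format
            and snapshot_exists
            and cur.vm_running)

prev = BackupState("pbs1:store1", "none", 32, "raw", True)
assert bitmap_valid(prev, prev, snapshot_exists=True)
assert not bitmap_valid(prev, replace(prev, target="pbs2:store1"), True)  # other PBS
assert not bitmap_valid(prev, replace(prev, vm_running=False), True)      # VM stopped
assert not bitmap_valid(prev, prev, snapshot_exists=False)                # snapshot pruned
```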
 
