VM backups on PVE node restart from zero

andrea68

Hi,

after a disk storage resize operation on PBS and an update from 2.3 to 2.4 on a PVE node connected to this PBS, the VM backups start from zero (e.g. VM 1'' with a 250GB drive restarts copying all 250GB)...

Why did this happen?

Tnx!

[Attachment: Schermata 2023-05-03 alle 16.15.13.jpg]
 
Hi,
after a disk storage resize operation on PBS
do you actually mean PBS here? What exactly did you resize?

and an update from 2.3 to 2.4 on a PVE node connected to this PBS, the VM backups start from zero (e.g. VM 1'' with a 250GB drive restarts copying all 250GB)...
It does not. The log says that 214 GiB are dirty and only those are read. When was the last backup? What is running in the VM?
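One way to check when the last backups ran from the PVE side is to list the backup snapshots PVE sees on the PBS-backed storage, which includes their timestamps (a minimal sketch; the storage ID "pbs-storage" and VM ID 100 are placeholders, not values from this thread):
Code:
# List backup snapshots on the PBS-backed storage for one VM (placeholder storage ID and VM ID)
pvesm list pbs-storage --vmid 100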
 
Sorry for the mistake: I resized a VM disk on the PVE connected to the PBS.
After this operation I see that, when the next backup is executed, almost the whole disk is marked as "dirty", so the backup takes more than an hour instead of the few minutes it took before.
Is that normal?
When you resize a VM drive, is it treated as a new one?
Just a clarification on how it works, thanks!
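For reference, a disk resize of this kind can also be done from the PVE CLI (a sketch; the VM ID 100, disk scsi0 and the size increment are placeholders):
Code:
# Grow disk scsi0 of VM 100 by 50G (all values are placeholders)
qm resize 100 scsi0 +50G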
 
Sorry for the mistake: I resized a VM disk on the PVE connected to the PBS.
After this operation I see that, when the next backup is executed, almost the whole disk is marked as "dirty", so the backup takes more than an hour instead of the few minutes it took before.
Is that normal?
Yes, and in my test it actually considers the dirty bitmap invalid after resizing:
Code:
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
I think either the data structure used for the bitmaps makes it not easily possible to "resize" them, or at least resizing is not implemented.

Was this your first backup attempt after resizing? Please share the output of the following
Code:
pveversion -v
qm status 200 --verbose | grep running-qemu
qm config 200