Proxmox Backup Server - backup runs ok, restoration destroys VMs

digitalscribe · Mar 29, 2022
Hello all,
I have a Proxmox VE server (ZFS two-way mirror) and a Proxmox Backup Server (ZFS three-way mirror) that I'm using for backup and recovery.
This is the second time this year that, while playing around with a VM (backup, modifying VM files such as qcow2 compression and so on, restore), I end up with a corrupted VM.

I use VMs mostly as separate development environments out of convenience; no need to delve into the whys. The thing is that I run licensed software (say, Embarcadero Delphi) with some miserable licensing traits, like a limited number of installations you can activate before you have to go begging the vendor to raise the limit whenever you need to recreate your dev environment.

I had to jump through all those hoops again because Proxmox Backup Server is apparently ruining my backups.

I also noticed this huge "missing chunk" thing when GC does its thing... and funnily enough it's the exact same VM this time as well! The good news is that I also keep local backups on the same server, so this time I got by with an older backup and didn't have to go begging again.

Why is this happening? I attached the output of the GC. The drives show no errors and, with a three-way ZFS mirror scrubbed weekly and so on, it should be super duper safe! Why do I have missing chunks?!

Worth noting that I do my backups weekly; I power up the backup server 4-6 hours before the backup event and power it off the next day (6-8 hours after the backup is completed).
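(In case it matters, the power-off side could be automated with a plain cron entry on the backup machine; this is only a minimal sketch with example times, since in reality I do it by hand and powering it back on would still need a BIOS RTC alarm or Wake-on-LAN:)
Code:
# root crontab on the PBS box - times are examples only
# shut the backup server down Saturday morning, well after the Friday-evening backup window
0 8 * * 6 /sbin/shutdown -h now
# powering it back on needs a BIOS RTC alarm, or Wake-on-LAN from the PVE host, e.g.:
# etherwake -i vmbr0 AA:BB:CC:DD:EE:FF   # hypothetical MAC of the PBS NIC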

For me this completely ruins my trust in PBS... I used to have a simple Linux box with Samba as a backup solution. I never had problems with Samba, except that if the backup server was offline, Proxmox would just dump the backup on /, which is rather small, causing other complications. PBS is at least detected as unavailable and is accessed through its API rather than mounted in some folder, which makes it behave better when the backup server is not always online. The incremental backups are also a tremendous help!

If I can't understand the problem well enough to avoid it, I will just do my backups locally, rsync them to NFS/Samba weekly, and drop PBS altogether... I would very much like NOT to do this, because I love Proxmox!

Thank you all in advance for any thoughts!

P.S.: Regarding the Proxmox VE server itself: I've been using it since 6.x and have constantly upgraded hardware and software along the way without any problems! ZFS is rock solid as always; I've never lost a single bit of data! It is completely baffling to me that I have this problem with PBS, and it makes me think I cannot trust my backups. It's completely wrong to make a backup, see "all green", and then have your VM trashed on restore just because of this round trip...

Restoring looks like this on that specific VM:
Code:
Proxmox Backup Server 2.1-5
2022-03-29T00:23:49+03:00: starting new backup reader datastore 'totem': "/rpool/totem"
2022-03-29T00:23:49+03:00: protocol upgrade done
2022-03-29T00:23:49+03:00: GET /download
2022-03-29T00:23:49+03:00: download "/rpool/totem/vm/610/2022-03-28T06:59:28Z/index.json.blob"
2022-03-29T00:23:49+03:00: GET /download
2022-03-29T00:23:49+03:00: download "/rpool/totem/vm/610/2022-03-28T06:59:28Z/drive-ide0.img.fidx"
2022-03-29T00:23:49+03:00: register chunks in 'drive-ide0.img.fidx' as downloadable.
2022-03-29T00:23:49+03:00: GET /chunk
2022-03-29T00:23:49+03:00: download chunk "/rpool/totem/.chunks/43bc/43bcce985c008a4a83d1baaa545913cfad887a28e8512e2430085310b12dd6af"
2022-03-29T00:23:49+03:00: GET /chunk
2022-03-29T00:23:49+03:00: download chunk "/rpool/totem/.chunks/1897/1897243719042b8f1915458acf3fe0b32b9a1e7c3fd961c17f17e168bfbb0b59"
2022-03-29T00:23:49+03:00: GET /chunk: 400 Bad Request: reading file "/rpool/totem/.chunks/1897/1897243719042b8f1915458acf3fe0b32b9a1e7c3fd961c17f17e168bfbb0b59" failed: No such file or directory (os error 2)
2022-03-29T00:23:49+03:00: reader finished successfully
2022-03-29T00:23:49+03:00: TASK OK
 

Well, normally chunks don't go missing by themselves; they are only deleted during garbage collection.

How often do you run verify/GC/prune?

Did you manually modify anything in the datastore, like moving/copying/deleting snapshots, deleting files, etc.?
What does your zpool look like? Can you post the output of 'zfs get all'?
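For reference, something like this (run on the PBS host) collects what I mean; the pool name 'rpool' is the one used in this thread:
Code:
# ZFS pool health and dataset properties
zpool status -v
zpool iostat -v
zfs get all rpool
# PBS package versions
proxmox-backup-manager versions --verbose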
 
Garbage collection: Daily
Prune schedule: Daily

I assume these are the defaults? I don't remember touching them.
Also attached zfs get all, zpool status and zpool iostat -v outputs.

I also have Samba running on it, which I use to dump other things from my Windows machine for backup purposes, but the share is a completely different folder under /rpool.

I have 3 datastores: one for my router (Proxmox with an OpenVPN VM and other LXCs), one for my home server, and one for mail that's not used right now (I plan to use Proxmox Mail Gateway some time later this year).

Later edit:
[screenshot: failed GC task on this datastore]

Now GC apparently fails every time on this store (and apparently on that specific VM for some reason, the one that hurts the most :) )

However, this is what arrives by email:
[screenshot: failure notification email]
 

OK, looks normal so far (besides the missing chunks)...

Again, did you do anything manual in the datastore in the past? Move/copy/delete/rsync/etc.?

Also, what is your PBS version? 'proxmox-backup-manager versions --verbose'
 
No manual changes whatsoever... the only "peculiar" thing I do with it is start it Friday evening and shut it down Saturday morning by hand; sometimes I forget and the backup fails, so I have to run it "by hand" on Saturday. And there's the Samba share strapped on top, which goes to a separate shared folder on rpool for "other manual backup purposes".

I keep all Proxmox instances up to date as best as I can.
 


Oh, I just remembered: I do run a command-line backup of some folders on the Proxmox server, and I have a cron job executing it weekly, so it's not just the backup job defined in the UI (you can't define backup jobs for folders from the UI, so I have to run a command-line backup for that one).

Could this interfere?

Code:
PBS_PASSWORD=mypassword nohup proxmox-backup-client backup storage.pxar:/rpool/storage --repository IP:store &
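(The cron entry wrapping it looks roughly like this; the schedule, log path and password file are placeholders, and reading the password from a root-only file is just a sketch of a cleaner setup than putting it on the command line:)
Code:
# /etc/cron.d/pbs-folder-backup - schedule and paths are examples only
30 9 * * 6 root PBS_PASSWORD="$(cat /root/.pbs-pass)" /usr/bin/proxmox-backup-client backup storage.pxar:/rpool/storage --repository IP:store >>/var/log/pbs-folder-backup.log 2>&1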
 
OK, that command should not interfere (anything that uses the PBS tools/API should be safe).

Can you also post the Samba config (just to be sure) and the output of 'mount'?

I really don't see how those chunks could have gone missing without manual intervention...
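E.g. something along these lines would cover it:
Code:
# effective samba configuration, with defaults stripped
testparm -s
# everything that is mounted, plus whatever backs the datastore path
mount
findmnt /rpool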
 
The download button also fails:

[screenshots: the file restore/download attempts end with errors]

So there's something fishy... I created another datastore and moved the backup job over to it, just for VMs... I'm running the job now; it will take a while, and then I'll take a look at the backups...
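(For reference, creating it from the CLI would be roughly this; the name and path are the ones that show up later in this thread:)
Code:
# create a separate datastore just for VMs and check the result
proxmox-backup-manager datastore create totem-vm /rpool/totem-vm
proxmox-backup-manager datastore list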
 
I just re-ran GC:
Code:
2022-03-29T10:59:05+03:00: starting garbage collection on store totem
2022-03-29T10:59:05+03:00: Start GC phase1 (mark used chunks)
2022-03-29T10:59:06+03:00: marked 7% (1 of 13 index files)
2022-03-29T10:59:15+03:00: marked 15% (2 of 13 index files)
2022-03-29T11:04:10+03:00: marked 23% (3 of 13 index files)
2022-03-29T11:04:10+03:00: marked 30% (4 of 13 index files)
2022-03-29T11:04:11+03:00: marked 38% (5 of 13 index files)
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk 7475603cdd94e7bbd1840ff9d6634c249329c9feea9b8ec9c853c8c42b425cc7, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk bf0609b1a0443b1975e3b9ba48209d122e4db5d3334e3e96ea7f7b7be537f900, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk 8356ab648ddeb359a6b8925018f1e058983a8739f270d94c4cf1b0208949770e, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk c6d31b812d0eab13ee2c4bf58429565a90f4a43bbcb0baa2a66c5cbd6058e1ed, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk 291b38694b767a7730847c2c3eff70062a308f496d0e2477d2ec85bab7d266d2, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
2022-03-29T11:04:11+03:00: WARN: warning: unable to access non-existent chunk 2717bc06f10c98c66ab741754587e8403ecc6eb551430d1975a51a0bbf9a6c80, required by "/rpool/totem/ct/200/2022-03-27T13:05:39Z/root.pxar.didx"
[...]

The numbers differ now... for some reason...

[screenshot: GC task summary]
Also... fewer warnings.
 
Is the pool just named 'rpool' by coincidence, or was it the root pool at some point? Did you move the datastore at some point?

The download button also fails:
That's normal; you are missing chunks of that backup, so the snapshot is not recoverable since parts of it are missing...


The numbers differ now... for some reason...
There was probably a prune in between, or you deleted snapshots, and now there are fewer missing chunks because they are no longer referenced?
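To see exactly which snapshots still reference missing chunks, a manual verify of the whole datastore should list them, e.g.:
Code:
# verify every snapshot on the affected datastore ('totem' here)
proxmox-backup-manager verify totem
# failed snapshots then show up in the task log and in the GUI verify state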
 
The pool was always named rpool... I made no manual changes to snapshots or anything... that's why I don't understand what happened. Also, it always seems to be the same VM for some reason; all the others appear to be fine.

A friend of mine has an expression describing this: "it broke on the shelf"
 
Code:
2022-03-29T15:24:50+03:00: verify group totem:vm/610 (1 snapshots)
2022-03-29T15:24:50+03:00: verify totem:vm/610/2022-03-28T06:59:28Z
2022-03-29T15:24:50+03:00:   check qemu-server.conf.blob
2022-03-29T15:24:50+03:00:   check drive-ide0.img.fidx
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'b55d61e6f4d9717f77d0653c6508b62843af980a9066b4521e2b262ac147ccbc' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '0041257d333e9cd211e300e00a2154b11178b772892430fa6a1b00cc5d724c27' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'f0c50fabf79a096724218baaccf9c1c27fdff7b68df3a5e92eda6abf12bab57b' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '94c8f1482eb4cb119882d0900ef113dccc3d4c3c6318a9639eee31139f30240d' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '26e3c0271f8b66723b874085e8602268100088bd21449f6c392e1e9bdccccb39' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '01e458fbdd687c05bccee22e8b24ff3644105bdab0caf8cc34f89b714bc5fded' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '4cb1b2eeb08678248b83f35d373adc8d50f89aa4b6275e087ec6fc4da2ef73c1' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'ec16718e45b31691da04366eb0060c52b946a8fa9a1d85f054c3d41b3009f655' - No such file or directory (os error 2)
2022-03-29T17:19:27+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '57a9e40a3c738b0413bbb75aeeaaf0cbf2237cb6a358cbcc369120cff191f611' - No such file or directory (os error 2)
[...]
2022-03-29T17:27:04+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '1952d8d2c75e181de1cd002ef4e552a1662f293749786ed231a147747c3a30f6' - No such file or directory (os error 2)
2022-03-29T17:27:04+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'caaafec022581665df06b32ee9c265353aeadbe21aeb27fc1662e60fc0cfcbac' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'dd2eb1d5194c6a934984c8665a23c5249b417631448df0c69817298360e7bbf9' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'ffca0447f08dc48dc0e18c167223bfb570fc04c59f04379a635a12dc69fd99c0' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '3c5ae8273dff7fb193f8c917d23da12e7933d00a7952a5593fad4ce9f587e495' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk '98bdbdf04d2c134a4c232ad2cdd105457cde6e6c0bed67d47640f45c4888ed8d' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'c6a912b79095b147395cdeca0be455f68f849d01f171e5efae31acf71532fcae' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00: can't verify chunk, load failed - store 'totem', unable to load chunk 'f08dc3a34a2feb2c04f0cbde0c75f39bd1b723968616a0d7263c605bde9785fc' - No such file or directory (os error 2)
2022-03-29T17:27:05+03:00:   verified 37964.23/77780.00 MiB in 7334.71 seconds, speed 5.18/10.60 MiB/s (24221 errors)
2022-03-29T17:27:05+03:00: verify totem:vm/610/2022-03-28T06:59:28Z/drive-ide0.img.fidx failed: chunks could not be verified
2022-03-29T17:27:05+03:00: percentage done: 100.00% (1/1 snapshots)
2022-03-29T17:27:05+03:00: Failed to verify the following snapshots/groups:
2022-03-29T17:27:05+03:00:     vm/610/2022-03-28T06:59:28Z
2022-03-29T17:27:05+03:00: TASK ERROR: verification failed - please check the log for details
 
I created a different datastore and ran the backup there. All seems fine now (verified the backups, no missing chunks). Meanwhile I'll erase the offending datastore, recreate it, and see what happens.
 
I recreated all my datastores; no more missing chunks for now. I'll come back if things go haywire again. Meanwhile I consider this matter closed for now, and I will keep using other backup methods as well until I can trust my PBS machine.
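(For the record, recreating a datastore from the CLI is roughly this; as far as I understand, 'remove' only drops the configuration entry, so the old chunk store has to be wiped by hand before reusing the path:)
Code:
# drop the datastore entry from the PBS configuration (data stays on disk)
proxmox-backup-manager datastore remove totem
# wipe the old contents before reusing the path
rm -rf /rpool/totem/.chunks /rpool/totem/vm /rpool/totem/ct
# recreate it empty
proxmox-backup-manager datastore create totem /rpool/totem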
 
Aaand, a good while after re-creating the store and recreating all the backups, this happens:
[screenshot: errors on the same VM again]

Same troublesome VM.

The picture from the Proxmox VE side:
Code:
INFO: Starting Backup of VM 610 (qemu)
INFO: Backup started at 2022-04-09 00:31:21
INFO: status = running
INFO: VM Name: win10dev1-vm-wm
INFO: include disk 'ide0' 'containers:610/vm-610-disk-0.qcow2' 256G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/610/2022-04-08T21:31:21Z'
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task '81b04508-2d1e-4870-ae99-da0e2316f233'
INFO: resuming VM again
INFO: ide0: dirty-bitmap status: OK (54.8 GiB of 256.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 54.8 GiB dirty of 256.0 GiB total
INFO: 0% (372.0 MiB of 54.8 GiB) in 3s, read: 124.0 MiB/s, write: 122.7 MiB/s
INFO: 1% (616.0 MiB of 54.8 GiB) in 6s, read: 81.3 MiB/s, write: 81.3 MiB/s
INFO: 2% (1.1 GiB of 54.8 GiB) in 12s, read: 90.0 MiB/s, write: 90.0 MiB/s
INFO: 3% (1.7 GiB of 54.8 GiB) in 19s, read: 88.6 MiB/s, write: 88.6 MiB/s
INFO: 4% (2.2 GiB of 54.8 GiB) in 26s, read: 69.1 MiB/s, write: 69.1 MiB/s
INFO: 5% (2.7 GiB of 54.8 GiB) in 33s, read: 78.3 MiB/s, write: 78.3 MiB/s
INFO: 6% (3.3 GiB of 54.8 GiB) in 40s, read: 85.7 MiB/s, write: 85.7 MiB/s
INFO: 7% (3.9 GiB of 54.8 GiB) in 46s, read: 92.7 MiB/s, write: 92.7 MiB/s
INFO: 8% (4.5 GiB of 54.8 GiB) in 57s, read: 56.0 MiB/s, write: 56.0 MiB/s
INFO: 9% (5.1 GiB of 54.8 GiB) in 1m 3s, read: 99.3 MiB/s, write: 99.3 MiB/s
INFO: 10% (5.5 GiB of 54.8 GiB) in 1m 8s, read: 96.0 MiB/s, write: 96.0 MiB/s
INFO: 11% (6.0 GiB of 54.8 GiB) in 1m 14s, read: 88.7 MiB/s, write: 88.7 MiB/s
INFO: 12% (6.7 GiB of 54.8 GiB) in 1m 26s, read: 58.0 MiB/s, write: 58.0 MiB/s
INFO: 13% (7.2 GiB of 54.8 GiB) in 1m 32s, read: 87.3 MiB/s, write: 87.3 MiB/s
INFO: 14% (7.7 GiB of 54.8 GiB) in 1m 37s, read: 97.6 MiB/s, write: 97.6 MiB/s
INFO: 15% (8.3 GiB of 54.8 GiB) in 1m 46s, read: 72.0 MiB/s, write: 72.0 MiB/s
INFO: 16% (8.8 GiB of 54.8 GiB) in 1m 50s, read: 111.0 MiB/s, write: 111.0 MiB/s
INFO: 17% (9.4 GiB of 54.8 GiB) in 1m 57s, read: 89.1 MiB/s, write: 89.1 MiB/s
INFO: 18% (9.9 GiB of 54.8 GiB) in 2m 3s, read: 92.7 MiB/s, write: 92.7 MiB/s
INFO: 19% (10.5 GiB of 54.8 GiB) in 2m 10s, read: 78.9 MiB/s, write: 78.9 MiB/s
INFO: 20% (11.0 GiB of 54.8 GiB) in 2m 20s, read: 53.2 MiB/s, write: 53.2 MiB/s
INFO: 21% (11.6 GiB of 54.8 GiB) in 2m 28s, read: 82.5 MiB/s, write: 82.5 MiB/s
INFO: 22% (12.2 GiB of 54.8 GiB) in 2m 33s, read: 111.2 MiB/s, write: 111.2 MiB/s
INFO: 23% (12.7 GiB of 54.8 GiB) in 2m 39s, read: 85.3 MiB/s, write: 85.3 MiB/s
INFO: 24% (13.2 GiB of 54.8 GiB) in 2m 45s, read: 83.3 MiB/s, write: 83.3 MiB/s
INFO: 25% (13.8 GiB of 54.8 GiB) in 2m 52s, read: 88.6 MiB/s, write: 88.6 MiB/s
INFO: 26% (14.3 GiB of 54.8 GiB) in 2m 58s, read: 94.0 MiB/s, write: 94.0 MiB/s
INFO: 27% (14.8 GiB of 54.8 GiB) in 3m 7s, read: 54.7 MiB/s, write: 54.7 MiB/s
INFO: 28% (15.6 GiB of 54.8 GiB) in 3m 16s, read: 86.2 MiB/s, write: 86.2 MiB/s
INFO: 29% (15.9 GiB of 54.8 GiB) in 3m 20s, read: 95.0 MiB/s, write: 95.0 MiB/s
INFO: 30% (16.5 GiB of 54.8 GiB) in 3m 27s, read: 86.3 MiB/s, write: 86.3 MiB/s
INFO: 31% (17.0 GiB of 54.8 GiB) in 3m 32s, read: 98.4 MiB/s, write: 98.4 MiB/s
INFO: 32% (17.6 GiB of 54.8 GiB) in 3m 38s, read: 100.0 MiB/s, write: 100.0 MiB/s
INFO: 33% (18.3 GiB of 54.8 GiB) in 3m 48s, read: 74.4 MiB/s, write: 74.4 MiB/s
INFO: 34% (18.7 GiB of 54.8 GiB) in 3m 52s, read: 90.0 MiB/s, write: 90.0 MiB/s
INFO: 35% (19.2 GiB of 54.8 GiB) in 3m 59s, read: 76.6 MiB/s, write: 76.6 MiB/s
INFO: 36% (19.8 GiB of 54.8 GiB) in 4m 6s, read: 82.9 MiB/s, write: 82.9 MiB/s
INFO: 37% (20.3 GiB of 54.8 GiB) in 4m 16s, read: 56.0 MiB/s, write: 56.0 MiB/s
INFO: 38% (20.9 GiB of 54.8 GiB) in 4m 23s, read: 82.9 MiB/s, write: 82.9 MiB/s
INFO: 39% (21.5 GiB of 54.8 GiB) in 4m 30s, read: 86.9 MiB/s, write: 86.9 MiB/s
INFO: 40% (22.0 GiB of 54.8 GiB) in 4m 38s, read: 73.0 MiB/s, write: 73.0 MiB/s
INFO: 41% (22.5 GiB of 54.8 GiB) in 4m 44s, read: 84.7 MiB/s, write: 84.7 MiB/s
INFO: 42% (23.1 GiB of 54.8 GiB) in 4m 51s, read: 81.7 MiB/s, write: 81.7 MiB/s
INFO: 43% (23.6 GiB of 54.8 GiB) in 4m 56s, read: 100.0 MiB/s, write: 100.0 MiB/s
INFO: 44% (24.2 GiB of 54.8 GiB) in 5m 3s, read: 86.3 MiB/s, write: 86.3 MiB/s
INFO: 45% (24.7 GiB of 54.8 GiB) in 5m 9s, read: 91.3 MiB/s, write: 91.3 MiB/s
INFO: 46% (25.2 GiB of 54.8 GiB) in 5m 17s, read: 69.0 MiB/s, write: 69.0 MiB/s
INFO: 47% (25.8 GiB of 54.8 GiB) in 5m 24s, read: 85.7 MiB/s, write: 85.7 MiB/s
INFO: 48% (26.4 GiB of 54.8 GiB) in 5m 32s, read: 72.0 MiB/s, write: 72.0 MiB/s
INFO: 49% (26.9 GiB of 54.8 GiB) in 5m 39s, read: 79.4 MiB/s, write: 79.4 MiB/s
INFO: 50% (27.4 GiB of 54.8 GiB) in 5m 45s, read: 86.7 MiB/s, write: 86.7 MiB/s
INFO: 51% (28.0 GiB of 54.8 GiB) in 5m 58s, read: 45.8 MiB/s, write: 45.8 MiB/s
INFO: 52% (28.5 GiB of 54.8 GiB) in 6m 4s, read: 82.7 MiB/s, write: 82.7 MiB/s
INFO: 53% (29.1 GiB of 54.8 GiB) in 6m 10s, read: 94.0 MiB/s, write: 94.0 MiB/s
INFO: 54% (29.6 GiB of 54.8 GiB) in 6m 18s, read: 72.0 MiB/s, write: 72.0 MiB/s
INFO: 55% (30.1 GiB of 54.8 GiB) in 6m 31s, read: 41.8 MiB/s, write: 41.8 MiB/s
INFO: 56% (30.9 GiB of 54.8 GiB) in 6m 41s, read: 77.6 MiB/s, write: 77.6 MiB/s
INFO: 57% (31.4 GiB of 54.8 GiB) in 6m 48s, read: 73.7 MiB/s, write: 73.7 MiB/s
INFO: 58% (31.8 GiB of 54.8 GiB) in 6m 54s, read: 71.3 MiB/s, write: 71.3 MiB/s
INFO: 59% (32.4 GiB of 54.8 GiB) in 7m 7s, read: 42.2 MiB/s, write: 42.2 MiB/s
INFO: 60% (32.9 GiB of 54.8 GiB) in 7m 13s, read: 98.0 MiB/s, write: 98.0 MiB/s
INFO: 61% (33.5 GiB of 54.8 GiB) in 7m 21s, read: 71.0 MiB/s, write: 71.0 MiB/s
INFO: 62% (34.0 GiB of 54.8 GiB) in 7m 30s, read: 56.4 MiB/s, write: 56.4 MiB/s
INFO: 63% (34.5 GiB of 54.8 GiB) in 7m 41s, read: 52.0 MiB/s, write: 52.0 MiB/s
INFO: 64% (35.2 GiB of 54.8 GiB) in 7m 54s, read: 51.7 MiB/s, write: 51.7 MiB/s
INFO: 65% (35.6 GiB of 54.8 GiB) in 8m 5s, read: 40.0 MiB/s, write: 40.0 MiB/s
INFO: 66% (36.2 GiB of 54.8 GiB) in 8m 10s, read: 110.4 MiB/s, write: 110.4 MiB/s
INFO: 67% (36.7 GiB of 54.8 GiB) in 8m 20s, read: 55.6 MiB/s, write: 55.6 MiB/s
INFO: 68% (37.3 GiB of 54.8 GiB) in 8m 36s, read: 36.2 MiB/s, write: 36.2 MiB/s
INFO: 69% (37.8 GiB of 54.8 GiB) in 8m 41s, read: 108.8 MiB/s, write: 108.8 MiB/s
INFO: 70% (38.5 GiB of 54.8 GiB) in 8m 58s, read: 38.8 MiB/s, write: 38.8 MiB/s
INFO: 71% (39.1 GiB of 54.8 GiB) in 9m 4s, read: 102.7 MiB/s, write: 80.7 MiB/s
INFO: 72% (39.5 GiB of 54.8 GiB) in 9m 8s, read: 107.0 MiB/s, write: 107.0 MiB/s
INFO: 73% (40.1 GiB of 54.8 GiB) in 9m 18s, read: 63.2 MiB/s, write: 63.2 MiB/s
INFO: 74% (40.6 GiB of 54.8 GiB) in 9m 27s, read: 53.8 MiB/s, write: 53.8 MiB/s
INFO: 75% (41.2 GiB of 54.8 GiB) in 9m 39s, read: 56.7 MiB/s, write: 56.7 MiB/s
INFO: 76% (41.7 GiB of 54.8 GiB) in 9m 43s, read: 111.0 MiB/s, write: 111.0 MiB/s
INFO: 77% (42.2 GiB of 54.8 GiB) in 9m 54s, read: 52.0 MiB/s, write: 52.0 MiB/s
INFO: 78% (42.8 GiB of 54.8 GiB) in 10m 10s, read: 33.8 MiB/s, write: 33.8 MiB/s
INFO: 79% (43.4 GiB of 54.8 GiB) in 10m 15s, read: 135.2 MiB/s, write: 135.2 MiB/s
INFO: 80% (43.9 GiB of 54.8 GiB) in 10m 21s, read: 75.3 MiB/s, write: 75.3 MiB/s
INFO: 81% (44.4 GiB of 54.8 GiB) in 10m 40s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 82% (45.0 GiB of 54.8 GiB) in 10m 44s, read: 162.0 MiB/s, write: 162.0 MiB/s
INFO: 83% (45.5 GiB of 54.8 GiB) in 10m 51s, read: 73.1 MiB/s, write: 73.1 MiB/s
INFO: 84% (46.2 GiB of 54.8 GiB) in 11m 8s, read: 40.9 MiB/s, write: 40.9 MiB/s
INFO: 85% (46.7 GiB of 54.8 GiB) in 11m 15s, read: 69.7 MiB/s, write: 69.7 MiB/s
INFO: 86% (47.2 GiB of 54.8 GiB) in 11m 21s, read: 84.7 MiB/s, write: 84.7 MiB/s
INFO: 87% (47.7 GiB of 54.8 GiB) in 11m 30s, read: 57.8 MiB/s, write: 57.8 MiB/s
INFO: 88% (48.2 GiB of 54.8 GiB) in 11m 39s, read: 63.6 MiB/s, write: 63.6 MiB/s
INFO: 89% (48.8 GiB of 54.8 GiB) in 11m 49s, read: 58.8 MiB/s, write: 58.8 MiB/s
INFO: 90% (49.3 GiB of 54.8 GiB) in 12m 2s, read: 41.8 MiB/s, write: 41.8 MiB/s
INFO: 91% (50.0 GiB of 54.8 GiB) in 12m 10s, read: 77.5 MiB/s, write: 77.5 MiB/s
INFO: 92% (50.7 GiB of 54.8 GiB) in 12m 21s, read: 72.4 MiB/s, write: 72.4 MiB/s
INFO: 93% (51.2 GiB of 54.8 GiB) in 12m 27s, read: 77.3 MiB/s, write: 77.3 MiB/s
INFO: 94% (51.6 GiB of 54.8 GiB) in 12m 38s, read: 36.7 MiB/s, write: 36.7 MiB/s
INFO: 95% (52.2 GiB of 54.8 GiB) in 12m 47s, read: 70.2 MiB/s, write: 70.2 MiB/s
INFO: 96% (52.7 GiB of 54.8 GiB) in 12m 56s, read: 60.0 MiB/s, write: 60.0 MiB/s
INFO: 97% (53.3 GiB of 54.8 GiB) in 13m 6s, read: 60.4 MiB/s, write: 60.4 MiB/s
INFO: 98% (53.9 GiB of 54.8 GiB) in 13m 16s, read: 59.2 MiB/s, write: 59.2 MiB/s
INFO: 99% (54.6 GiB of 54.8 GiB) in 13m 27s, read: 67.6 MiB/s, write: 67.6 MiB/s
INFO: 100% (54.8 GiB of 54.8 GiB) in 13m 30s, read: 60.0 MiB/s, write: 60.0 MiB/s
INFO: backup is sparse: 132.00 MiB (0%) total zero data
INFO: backup was done incrementally, reused 201.34 GiB (78%)
INFO: transferred 54.79 GiB in 813 seconds (69.0 MiB/s)
INFO: prune older backups with retention: keep-last=10
INFO: running 'proxmox-backup-client prune' for 'vm/610'
INFO: pruned 0 backup(s)
INFO: Finished Backup of VM 610 (00:13:39)
INFO: Backup finished at 2022-04-09 00:45:00
--------------------
INFO: Starting Backup of VM 611 (qemu)
INFO: Backup started at 2022-04-09 00:45:00
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: winXPdev1-vm-wm
INFO: include disk 'ide0' 'containers:611/vm-611-disk-0.qcow2' 64G
INFO: creating Proxmox Backup Server archive 'vm/611/2022-04-08T21:45:00Z'
INFO: starting kvm to execute backup task
INFO: started backup task '0cd3d2c0-4c3b-42d5-a18c-d0cf0e2995a7'
ERROR: VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock... OK
ERROR: Backup of VM 611 failed - VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
INFO: Failed at 2022-04-09 00:45:11

Running verification for missing chunks now...
 
Trying to back up that VM now consistently ends with this message:
Code:
INFO: starting new backup job: vzdump 611 --mode snapshot --remove 0 --storage backup.totem-vm --node totem
INFO: Starting Backup of VM 611 (qemu)
INFO: Backup started at 2022-04-09 17:54:55
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: winXPdev1-vm-wm
INFO: include disk 'ide0' 'containers:611/vm-611-disk-0.qcow2' 64G
INFO: creating Proxmox Backup Server archive 'vm/611/2022-04-09T14:54:55Z'
INFO: starting kvm to execute backup task
INFO: started backup task '7285d8b0-020d-4c52-b881-c7940655aca5'
ERROR: VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock... OK
ERROR: Backup of VM 611 failed - VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
INFO: Failed at 2022-04-09 17:55:06
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Proxmox VE syslog:
Code:
Apr 09 17:54:55 totem pvedaemon[565643]: INFO: starting new backup job: vzdump 611 --mode snapshot --remove 0 --storage backup.totem-vm --node totem
Apr 09 17:54:55 totem pvedaemon[565643]: INFO: Starting Backup of VM 611 (qemu)
Apr 09 17:54:55 totem systemd[1]: Started 611.scope.
Apr 09 17:54:55 totem systemd-udevd[565719]: Using default interface naming scheme 'v247'.
Apr 09 17:54:55 totem systemd-udevd[565719]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 09 17:54:56 totem kernel: device tap611i0 entered promiscuous mode
Apr 09 17:54:56 totem systemd-udevd[565718]: Using default interface naming scheme 'v247'.
Apr 09 17:54:56 totem systemd-udevd[565718]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 09 17:54:56 totem systemd-udevd[565719]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 09 17:54:56 totem systemd-udevd[565718]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 09 17:54:56 totem kernel: vmbr0: port 7(fwpr611p0) entered blocking state
Apr 09 17:54:56 totem kernel: vmbr0: port 7(fwpr611p0) entered disabled state
Apr 09 17:54:56 totem kernel: device fwpr611p0 entered promiscuous mode
Apr 09 17:54:56 totem kernel: vmbr0: port 7(fwpr611p0) entered blocking state
Apr 09 17:54:56 totem kernel: vmbr0: port 7(fwpr611p0) entered forwarding state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 1(fwln611i0) entered blocking state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 1(fwln611i0) entered disabled state
Apr 09 17:54:56 totem kernel: device fwln611i0 entered promiscuous mode
Apr 09 17:54:56 totem kernel: fwbr611i0: port 1(fwln611i0) entered blocking state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 1(fwln611i0) entered forwarding state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 2(tap611i0) entered blocking state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 2(tap611i0) entered disabled state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 2(tap611i0) entered blocking state
Apr 09 17:54:56 totem kernel: fwbr611i0: port 2(tap611i0) entered forwarding state
Apr 09 17:54:59 totem pvedaemon[565643]: VM 611 qmp command failed - VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
Apr 09 17:55:03 totem pvedaemon[556218]: VM 611 qmp command failed - VM 611 qmp command 'query-proxmox-support' failed - got timeout
Apr 09 17:55:05 totem QEMU[565713]: BackupTask send abort failed.
Apr 09 17:55:05 totem kernel: fwbr611i0: port 2(tap611i0) entered disabled state
Apr 09 17:55:05 totem kernel: fwbr611i0: port 1(fwln611i0) entered disabled state
Apr 09 17:55:05 totem kernel: vmbr0: port 7(fwpr611p0) entered disabled state
Apr 09 17:55:05 totem kernel: device fwln611i0 left promiscuous mode
Apr 09 17:55:05 totem kernel: fwbr611i0: port 1(fwln611i0) entered disabled state
Apr 09 17:55:05 totem kernel: device fwpr611p0 left promiscuous mode
Apr 09 17:55:05 totem kernel: vmbr0: port 7(fwpr611p0) entered disabled state
Apr 09 17:55:06 totem qmeventd[2151]: read: Connection reset by peer
Apr 09 17:55:06 totem systemd[1]: 611.scope: Succeeded.
Apr 09 17:55:06 totem systemd[1]: 611.scope: Consumed 2.235s CPU time.
Apr 09 17:55:06 totem qmeventd[567190]: Starting cleanup for 611
Apr 09 17:55:06 totem qmeventd[567190]: trying to acquire lock...
Apr 09 17:55:06 totem qmeventd[567190]: OK
Apr 09 17:55:06 totem qmeventd[567190]: Finished cleanup for 611
Apr 09 17:55:06 totem pvedaemon[565643]: ERROR: Backup of VM 611 failed - VM 611 qmp command 'query-pbs-bitmap-info' failed - got timeout
Apr 09 17:55:06 totem pvedaemon[565643]: INFO: Backup job finished with errors
Apr 09 17:55:06 totem pvedaemon[565643]: job errors
Apr 09 17:55:06 totem pvedaemon[593109]: <root@pam> end task UPID:totem:0008A18B:0629EE21:62519E3F:vzdump:611:root@pam: job errors

PBS syslog:
Code:
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: starting new backup on datastore 'totem-vm': "vm/611/2022-04-09T14:54:55Z"
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: download 'index.json.blob' from previous backup.
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: register chunks in 'drive-ide0.img.fidx' from previous backup.
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: download 'drive-ide0.img.fidx' from previous backup.
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: created new fixed index 1 ("vm/611/2022-04-09T14:54:55Z/drive-ide0.img.fidx")
Apr 09 17:54:56 big-bk proxmox-backup-proxy[1268]: add blob "/rpool/totem-vm/vm/611/2022-04-09T14:54:55Z/qemu-server.conf.blob" (381 bytes, comp: 381)
Apr 09 17:55:05 big-bk proxmox-backup-proxy[1268]: backup ended and finish failed: backup ended but finished flag is not set.
Apr 09 17:55:05 big-bk proxmox-backup-proxy[1268]: removing unfinished backup
Apr 09 17:55:05 big-bk proxmox-backup-proxy[1268]: removing backup snapshot "/rpool/totem-vm/vm/611/2022-04-09T14:54:55Z"
Apr 09 17:55:05 big-bk proxmox-backup-proxy[1268]: TASK ERROR: backup ended but finished flag is not set.
 
