vm backup on two datastores not possible?

Sycoriorz

Dear all,

one question:
is it possible to back up one and the same VM to PBS datastore ONE and to datastore TWO?

I have some issues that, I believe, only occur on machines I try to back up to more than one datastore. Here is the PBS task log of such a failure:

2021-10-11T22:30:03+02:00: starting new backup on datastore 'PBS3': "vm/104/2021-10-11T20:30:01Z"
2021-10-11T22:30:03+02:00: download 'index.json.blob' from previous backup.
2021-10-11T22:30:04+02:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2021-10-11T22:30:04+02:00: download 'drive-scsi0.img.fidx' from previous backup.
2021-10-11T22:30:04+02:00: created new fixed index 1 ("vm/104/2021-10-11T20:30:01Z/drive-scsi0.img.fidx")
2021-10-11T22:30:10+02:00: add blob "/mnt/datastore/PBS3/vm/104/2021-10-11T20:30:01Z/qemu-server.conf.blob" (359 bytes, comp: 359)
2021-10-11T22:37:50+02:00: POST /fixed_chunk: 400 Bad Request: creating temporary chunk on store 'PBS3' failed for
2021-10-11T22:37:51+02:00: backup ended and finish failed: backup ended but finished flag is not set.
2021-10-11T22:37:51+02:00: removing unfinished backup
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: error reading a body from connection: protocol error: stream no longer needed
2021-10-11T22:37:51+02:00: TASK ERROR: backup ended but finished flag is not set.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-11T22:37:51+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.

Many thanks for any reply.

best regards
 
hi,

what do you mean by "backup on two datastores not possible"?

do you mean that you want to make a backup to datastore "A" first and after that to datastore "B"?
if yes, that should work without problems (though the dirty-bitmap will not work; see the sketch below),
but your task log is weird ("Bad Request: creating temporary chunk on store 'PBS3' failed for")
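
for example, two separate jobs to two different PBS storage entries (the storage IDs here are just placeholders):

# back up VM 104 to the first PBS datastore, then to the second
vzdump 104 --storage pbs-store-a --mode snapshot
vzdump 104 --storage pbs-store-b --mode snapshot

since the last backup then alternates between the two targets, the dirty-bitmap is invalidated on every run, so each backup has to read the full disk again.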

can you post more of the task logs and your configuration of PVE/PBS (e.g. the PBS storage entries from /etc/pve/storage.cfg, sketched below)?
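
for reference, a PBS storage entry in /etc/pve/storage.cfg looks roughly like this (all values below are placeholders, not your actual config):

pbs: pbs-store-a
        datastore store1
        server 192.0.2.10
        username root@pam
        fingerprint <fingerprint-of-your-pbs-cert>
        content backup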
 
Hi,

many thanks for your response.

> what do you mean by "backup on two datastores not possible"?
> do you mean that you want to make a backup to datastore "A" first and after that to datastore "B"?
> if yes, that should work without problems (though the dirty-bitmap will not work)

yes, this is exactly what I want to do.

Task viewer output (PBS)

2021-10-15T01:33:13+02:00: starting new backup on datastore 'PBS3': "vm/116/2021-10-14T23:33:12Z"
2021-10-15T01:33:13+02:00: download 'index.json.blob' from previous backup.
2021-10-15T01:33:14+02:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2021-10-15T01:33:14+02:00: download 'drive-scsi0.img.fidx' from previous backup.
2021-10-15T01:33:14+02:00: created new fixed index 1 ("vm/116/2021-10-14T23:33:12Z/drive-scsi0.img.fidx")
2021-10-15T01:33:17+02:00: add blob "/mnt/datastore/PBS3/vm/116/2021-10-14T23:33:12Z/qemu-server.conf.blob" (370 bytes, comp: 370)
2021-10-15T01:33:35+02:00: POST /fixed_chunk: 400 Bad Request: creating temporary chunk on store 'PBS3' failed for 7812b20b78f18ee07ef9b34d1b9f015f4091e65360714feba79f829d57ac6248 - Invalid exchange (os error 52)
2021-10-15T01:33:35+02:00: backup ended and finish failed: backup ended but finished flag is not set.
2021-10-15T01:33:35+02:00: removing unfinished backup
2021-10-15T01:33:35+02:00: TASK ERROR: backup ended but finished flag is not set.
2021-10-15T01:33:35+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
2021-10-15T01:33:35+02:00: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.

Package PBS
proxmox-backup: 2.0-1 (running kernel: 5.11.22-4-pve)
proxmox-backup-server: 2.0.11-1 (running version: 2.0.11)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-1-pve: 5.11.22-2
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
proxmox-backup-docs: 2.0.11-1
proxmox-backup-client: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-xtermjs: 4.12.0-1
smartmontools: 7.2-1
zfsutils-linux: 2.0.5-pve1

Package PVE
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.14-pve1
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-9
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-11
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-10
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-3
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-14
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Task Info out of PVE
INFO: trying to get global lock - waiting...
INFO: got global lock
INFO: starting new backup job: vzdump 102 105 112 107 121 --storage PBS-DS3 --quiet 1 --mode snapshot --mailnotification always
INFO: skip external VMs: 102, 105, 121
INFO: Starting Backup of VM 107 (qemu)
INFO: Backup started at 2021-10-14 22:33:32
INFO: status = running
INFO: VM Name: PTServer
INFO: include disk 'ide0' 'NVME-POOL:vm-107-disk-0' 250G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/107/2021-10-14T20:33:32Z'
INFO: started backup task '9dfaa44f-efda-4261-97ba-24919e649d2c'
INFO: resuming VM again
INFO: ide0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: 0% (1.0 GiB of 250.0 GiB) in 3s, read: 341.3 MiB/s, write: 149.3 MiB/s
INFO: 1% (2.6 GiB of 250.0 GiB) in 15s, read: 137.3 MiB/s, write: 109.7 MiB/s
INFO: 2% (5.5 GiB of 250.0 GiB) in 27s, read: 246.3 MiB/s, write: 137.7 MiB/s
INFO: 3% (7.5 GiB of 250.0 GiB) in 31s, read: 519.0 MiB/s, write: 317.0 MiB/s
INFO: 4% (10.0 GiB of 250.0 GiB) in 59s, read: 92.0 MiB/s, write: 89.3 MiB/s
INFO: 5% (12.6 GiB of 250.0 GiB) in 1m 51s, read: 51.2 MiB/s, write: 37.2 MiB/s
INFO: 6% (15.0 GiB of 250.0 GiB) in 3m 24s, read: 26.2 MiB/s, write: 22.7 MiB/s
INFO: 7% (17.7 GiB of 250.0 GiB) in 5m 1s, read: 28.4 MiB/s, write: 22.1 MiB/s
INFO: 8% (20.1 GiB of 250.0 GiB) in 6m 6s, read: 38.3 MiB/s, write: 33.3 MiB/s
INFO: 9% (22.7 GiB of 250.0 GiB) in 7m 12s, read: 39.2 MiB/s, write: 29.8 MiB/s
INFO: 10% (25.1 GiB of 250.0 GiB) in 7m 51s, read: 63.7 MiB/s, write: 37.0 MiB/s
INFO: 11% (27.7 GiB of 250.0 GiB) in 8m 46s, read: 47.9 MiB/s, write: 32.7 MiB/s
INFO: 11% (27.9 GiB of 250.0 GiB) in 8m 51s, read: 56.8 MiB/s, write: 42.4 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: creating temporary chunk on store 'PBS3' failed for 78121f57e64b90c1df8621ba18e285f2c1a0fe4b49fe3fde23511130bc58259c - Invalid exchange (os error 52)
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 107 failed - backup write data failed: command error: write_data upload error: pipelined request failed: creating temporary chunk on store 'PBS3' failed for 78121f57e64b90c1df8621ba18e285f2c1a0fe4b49fe3fde23511130bc58259c - Invalid exchange (os error 52)
INFO: Failed at 2021-10-14 22:42:24
INFO: Starting Backup of VM 112 (lxc)
INFO: Backup started at 2021-10-14 22:42:24
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: InfluxDB
INFO: including mount point rootfs ('/') in backup
INFO: creating Proxmox Backup Server archive 'ct/112/2021-10-14T20:42:24Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2805081_112/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 112 --backup-time 1634244144 --repository root@pam@10.10.1.13:PBS3
INFO: Starting backup: ct/112/2021-10-14T20:42:24Z
INFO: Client name: pve3
INFO: Starting backup protocol: Thu Oct 14 22:42:24 2021
INFO: Downloading previous manifest (Wed Oct 13 22:30:59 2021)
INFO: Upload config file '/var/tmp/vzdumptmp2805081_112/etc/vzdump/pct.conf' to 'root@pam@10.10.1.13:8007:PBS3' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@10.10.1.13:8007:PBS3' as root.pxar.didx
INFO: root.pxar: had to backup 0 B of 1.13 GiB (compressed 0 B) in 8.88s
INFO: root.pxar: average backup speed: 0 B/s
INFO: root.pxar: backup was done incrementally, reused 1.13 GiB (100.0%)
INFO: Uploaded backup catalog (467.64 KiB)
INFO: Duration: 10.23s
INFO: End Time: Thu Oct 14 22:42:34 2021
INFO: Finished Backup of VM 112 (00:00:11)
INFO: Backup finished at 2021-10-14 22:42:35
Result: {
"data": null
}
INFO: Backup job finished with errors

TASK ERROR: job errors

Do you need anything else?

Many thanks for your help.

best regards
 
> 2021-10-15T01:33:35+02:00: POST /fixed_chunk: 400 Bad Request: creating temporary chunk on store 'PBS3' failed for 7812b20b78f18ee07ef9b34d1b9f015f4091e65360714feba79f829d57ac6248 - Invalid exchange (os error 52)

is your storage OK?

can you post the output of dmesg and the journal from that time?
also, if you use ZFS, the output of 'zpool status' or similar would be helpful.
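
for example (the time window below is just an illustration, adjust it to when the backup failed; "Invalid exchange" is os error 52, EBADE, which usually points at the block-device or filesystem layer):

# kernel messages around the failure, with readable timestamps
dmesg -T | tail -n 100

# system journal for the window of the failed backup
journalctl --since "2021-10-15 01:30" --until "2021-10-15 01:40"

# if the datastore is on ZFS, check pool health
zpool status -v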