PBS sync fails each time

jenssen99

Member
Aug 7, 2019
Hello,

My sync job to a second (remote) PBS keeps failing; does anyone have an idea why?

I don't see much information in the log, see below.

Many thanks.

2022-08-20T14:00:23+02:00: Starting datastore sync job 'PBS01:Backup:Backup::s-4de46c54-8fc7'
2022-08-20T14:00:23+02:00: sync datastore 'Backup' from 'PBS01/Backup'
2022-08-20T14:00:23+02:00: ----
2022-08-20T14:00:23+02:00: Syncing datastore Backup, root namespace into datastore Backup, root namespace
2022-08-20T14:00:23+02:00: found 11 groups to sync
2022-08-20T14:00:23+02:00: re-sync snapshot ct/101/2022-08-20T00:00:00Z
2022-08-20T14:00:23+02:00: no data changes
2022-08-20T14:00:23+02:00: re-sync snapshot ct/101/2022-08-20T00:00:00Z done
2022-08-20T14:00:23+02:00: percentage done: 9.09% (1/11 groups)
2022-08-20T14:00:23+02:00: skipped: 4 snapshot(s) (2022-08-18T07:59:49Z .. 2022-08-19T08:34:36Z) older than the newest local snapshot
2022-08-20T14:00:23+02:00: re-sync snapshot ct/102/2022-08-20T00:00:35Z
2022-08-20T14:00:23+02:00: no data changes
2022-08-20T14:00:23+02:00: re-sync snapshot ct/102/2022-08-20T00:00:35Z done
2022-08-20T14:00:23+02:00: percentage done: 18.18% (2/11 groups)
2022-08-20T14:00:23+02:00: skipped: 4 snapshot(s) (2022-08-18T08:00:48Z .. 2022-08-19T08:35:07Z) older than the newest local snapshot
2022-08-20T14:00:23+02:00: re-sync snapshot ct/103/2022-08-20T00:00:51Z
2022-08-20T14:00:23+02:00: no data changes
2022-08-20T14:00:23+02:00: re-sync snapshot ct/103/2022-08-20T00:00:51Z done
2022-08-20T14:00:23+02:00: percentage done: 27.27% (3/11 groups)
2022-08-20T14:00:23+02:00: skipped: 4 snapshot(s) (2022-08-18T08:01:20Z .. 2022-08-19T08:35:23Z) older than the newest local snapshot
2022-08-20T14:00:23+02:00: re-sync snapshot ct/104/2022-08-20T00:01:02Z
2022-08-20T14:00:23+02:00: no data changes
2022-08-20T14:00:23+02:00: re-sync snapshot ct/104/2022-08-20T00:01:02Z done
2022-08-20T14:00:23+02:00: percentage done: 36.36% (4/11 groups)
2022-08-20T14:00:23+02:00: skipped: 4 snapshot(s) (2022-08-18T08:01:46Z .. 2022-08-19T08:35:34Z) older than the newest local snapshot
2022-08-20T14:00:24+02:00: re-sync snapshot vm/201/2022-08-20T00:00:02Z
2022-08-20T14:00:24+02:00: no data changes
2022-08-20T14:00:24+02:00: re-sync snapshot vm/201/2022-08-20T00:00:02Z done
2022-08-20T14:00:24+02:00: percentage done: 45.45% (5/11 groups)
2022-08-20T14:00:24+02:00: skipped: 4 snapshot(s) (2022-08-18T08:10:29Z .. 2022-08-19T08:51:01Z) older than the newest local snapshot
2022-08-20T14:00:24+02:00: sync snapshot vm/202/2022-08-18T08:11:31Z
2022-08-20T14:00:24+02:00: sync archive qemu-server.conf.blob
2022-08-20T14:00:24+02:00: sync archive drive-virtio1.img.fidx
2022-08-20T14:19:05+02:00: percentage done: 47.73% (5/11 groups, 1/4 snapshots in group #6)
2022-08-20T14:19:05+02:00: sync group vm/202 failed - error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:676:
2022-08-20T14:19:06+02:00: re-sync snapshot vm/203/2022-08-20T00:01:32Z
2022-08-20T14:19:06+02:00: no data changes
2022-08-20T14:19:06+02:00: re-sync snapshot vm/203/2022-08-20T00:01:32Z done
2022-08-20T14:19:06+02:00: percentage done: 63.64% (7/11 groups)
2022-08-20T14:19:06+02:00: skipped: 4 snapshot(s) (2022-08-18T08:02:09Z .. 2022-08-19T08:49:02Z) older than the newest local snapshot
2022-08-20T14:19:06+02:00: re-sync snapshot vm/204/2022-08-20T00:01:35Z
2022-08-20T14:19:06+02:00: no data changes
2022-08-20T14:19:06+02:00: re-sync snapshot vm/204/2022-08-20T00:01:35Z done
2022-08-20T14:19:06+02:00: percentage done: 72.73% (8/11 groups)
2022-08-20T14:19:06+02:00: skipped: 4 snapshot(s) (2022-08-18T08:04:10Z .. 2022-08-19T08:49:06Z) older than the newest local snapshot
2022-08-20T14:19:06+02:00: re-sync snapshot vm/205/2022-08-20T00:00:04Z
2022-08-20T14:19:06+02:00: no data changes
2022-08-20T14:19:06+02:00: re-sync snapshot vm/205/2022-08-20T00:00:04Z done
2022-08-20T14:19:06+02:00: percentage done: 81.82% (9/11 groups)
2022-08-20T14:19:06+02:00: skipped: 5 snapshot(s) (2022-08-18T07:59:49Z .. 2022-08-19T08:51:03Z) older than the newest local snapshot
2022-08-20T14:19:06+02:00: re-sync snapshot vm/206/2022-08-20T00:00:17Z
2022-08-20T14:19:06+02:00: no data changes
2022-08-20T14:19:06+02:00: re-sync snapshot vm/206/2022-08-20T00:00:17Z done
2022-08-20T14:19:06+02:00: percentage done: 90.91% (10/11 groups)
2022-08-20T14:19:06+02:00: skipped: 5 snapshot(s) (2022-08-18T08:00:54Z .. 2022-08-19T08:51:04Z) older than the newest local snapshot
2022-08-20T14:19:06+02:00: re-sync snapshot vm/999/2022-08-20T00:00:22Z
2022-08-20T14:19:06+02:00: no data changes
2022-08-20T14:19:06+02:00: re-sync snapshot vm/999/2022-08-20T00:00:22Z done
2022-08-20T14:19:06+02:00: percentage done: 100.00% (11/11 groups)
2022-08-20T14:19:06+02:00: skipped: 5 snapshot(s) (2022-08-18T08:03:02Z .. 2022-08-19T08:51:06Z) older than the newest local snapshot
2022-08-20T14:19:06+02:00: Finished syncing namespace , current progress: 10 groups, 6 snapshots
2022-08-20T14:19:06+02:00: TASK ERROR: sync failed with some errors.
 
Hello Hannes, I have been running the verify since this afternoon, but because it is a big one it is still running. I will let you know the result.
 
Hello,

See the attachment for the result of the verify of all VMs on the source Proxmox Backup Server.

Please advise, thanks.
 

Does it always fail at the same point? If it does, does syncing just vm/202 work?

Sidenote: you could have just run the verification for vm/202 by clicking the "V" in the actions column.
 
Does it always fail at the same point? If it does, does syncing just vm/202 work?

Sidenote: you could have just run the verification for vm/202 by clicking the "V" in the actions column.

Yes, it always fails at vm/202; that VM is still not in the "Content" view of the remote PBS. All other VMs are in the content view and show a count of 5.

How can I run a sync for only one VM? I do not see this option in the GUI.
 
You can create a new sync job and set a group filter to type Group and value vm/202.
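
If you prefer the command line, the same thing can be set up with proxmox-backup-manager on the pulling PBS. A rough sketch, assuming the remote and datastore names from your log (the job ID is just an example):

Code:
# list existing sync jobs and their IDs
proxmox-backup-manager sync-job list

# create a job that only pulls the vm/202 group (job ID "pull-vm202" is an example)
proxmox-backup-manager sync-job create pull-vm202 \
    --remote PBS01 --remote-store Backup --store Backup \
    --group-filter group:vm/202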
 
Same result:

2022-08-24T09:30:44+02:00: Starting datastore sync job 'PBS01:Backup:Backup::s-4d88402a-2f0d'
2022-08-24T09:30:44+02:00: sync datastore 'Backup' from 'PBS01/Backup'
2022-08-24T09:30:44+02:00: ----
2022-08-24T09:30:44+02:00: Syncing datastore Backup, root namespace into datastore Backup, root namespace
2022-08-24T09:30:44+02:00: found 1 groups to sync (out of 11 total)
2022-08-24T09:30:44+02:00: sync snapshot vm/202/2022-08-18T08:11:31Z
2022-08-24T09:30:44+02:00: sync archive qemu-server.conf.blob
2022-08-24T09:30:56+02:00: sync archive drive-virtio1.img.fidx
2022-08-24T09:33:08+02:00: percentage done: 25.00% (1/4 snapshots)
2022-08-24T09:33:08+02:00: sync group vm/202 failed - error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:676:
2022-08-24T09:33:08+02:00: Finished syncing namespace , current progress: 0 groups, 1 snapshots
2022-08-24T09:33:08+02:00: TASK ERROR: sync failed with some errors
 
I just made a backup on the source PBS; that still works fine:

2022-08-24T09:43:08+02:00: starting new backup on datastore 'Backup': "vm/202/2022-08-24T07:42:56Z"
2022-08-24T09:43:08+02:00: download 'index.json.blob' from previous backup.
2022-08-24T09:43:08+02:00: register chunks in 'drive-virtio0.img.fidx' from previous backup.
2022-08-24T09:43:08+02:00: download 'drive-virtio0.img.fidx' from previous backup.
2022-08-24T09:43:08+02:00: created new fixed index 1 ("vm/202/2022-08-24T07:42:56Z/drive-virtio0.img.fidx")
2022-08-24T09:43:08+02:00: register chunks in 'drive-virtio1.img.fidx' from previous backup.
2022-08-24T09:43:09+02:00: download 'drive-virtio1.img.fidx' from previous backup.
2022-08-24T09:43:10+02:00: created new fixed index 2 ("vm/202/2022-08-24T07:42:56Z/drive-virtio1.img.fidx")
2022-08-24T09:43:10+02:00: add blob "/mnt/datastore/backup/vm/202/2022-08-24T07:42:56Z/qemu-server.conf.blob" (429 bytes, comp: 429)
2022-08-24T09:51:55+02:00: Upload statistics for 'drive-virtio1.img.fidx'
2022-08-24T09:51:55+02:00: UUID: 982b5e8913864e32a52781385172a59a
2022-08-24T09:51:55+02:00: Checksum: 81dcd79a6c95649ad5351e7750a1d563a75ca633bfc3ba94363f02bd9ca80e2e
2022-08-24T09:51:55+02:00: Size: 42161143808
2022-08-24T09:51:55+02:00: Chunk count: 10052
2022-08-24T09:51:55+02:00: Upload size: 21189623808 (50%)
2022-08-24T09:51:55+02:00: Duplicates: 5000+1 (49%)
2022-08-24T09:51:55+02:00: Compression: 98%
2022-08-24T09:51:55+02:00: successfully closed fixed index 2
2022-08-24T09:51:55+02:00: Upload statistics for 'drive-virtio0.img.fidx'
2022-08-24T09:51:55+02:00: UUID: 60e5fdca2f904e3fb3a583e5b0bf4495
2022-08-24T09:51:55+02:00: Checksum: fe4e99162dbeb898dec8d3ab317e2753b04f3cc59b2a5c3b17ea7400e4d15d33
2022-08-24T09:51:55+02:00: Size: 306184192
2022-08-24T09:51:55+02:00: Chunk count: 73
2022-08-24T09:51:55+02:00: Upload size: 306184192 (100%)
2022-08-24T09:51:55+02:00: Duplicates: 0+1 (1%)
2022-08-24T09:51:55+02:00: Compression: 10%
2022-08-24T09:51:55+02:00: successfully closed fixed index 1
2022-08-24T09:51:55+02:00: add blob "/mnt/datastore/backup/vm/202/2022-08-24T07:42:56Z/index.json.blob" (382 bytes, comp: 382)
2022-08-24T09:51:55+02:00: successfully finished backup
2022-08-24T09:51:55+02:00: backup finished successfully
2022-08-24T09:51:55+02:00: TASK OK
 
This issue is solved. The problem was related to the network interface of the first (local) Proxmox Backup Server. The hardware is an Intel NUC with an Intel NIC that has the known "e1000e reset adapter unexpectedly" network issues (see other forums on the internet). So I disabled TSO (TCP Segmentation Offload) in the /etc/network/interfaces file, and now the backup runs fine. Without this option configured, the small VMs backed up, but the larger one had random failures with the message "error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:676". Before the TSO adjustment I did not have any problems with local backups, and also no problems when adding the remote PBS server to the local Proxmox cluster and running the backup directly; that worked fine too. Only when running the sync job between the remote PBS server and the local PBS server did the problem occur. I tried everything else, such as switching the backup traffic from a site-to-site VPN connection to a plain internet connection, but nothing solved this issue except the TSO adjustment (adding "pre-up ethtool --offload enp0s25 tso off" to the /etc/network/interfaces file).
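
For reference, the stanza in /etc/network/interfaces then looks roughly like this (the IP addressing is just an example; enp0s25 is the NIC name on my machine, adjust to yours):

Code:
auto enp0s25
iface enp0s25 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        # disable TCP Segmentation Offload on the problematic e1000e NIC
        pre-up ethtool --offload enp0s25 tso off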
 
EDIT: See bottom of this post for resolution.

I'm having a similar issue, though it only happens with one specific VM on one of my nodes. I have 2 nodes with 2 VMs each; 3 back up just fine. The one that fails is about 8GB in size. The others that back up fine are 4GB, 8GB, and 256GB.

I tried adding pre-up ethtool --offload enp2s0 tso off to the interfaces file on my PBS and rebooting, but no luck. It fails in a different place each time, typically between 20 and 30 percent.

Code:
INFO: starting new backup job: vzdump 210 --notes-template '{{guestname}}' --remove 0 --storage pvebk --mode snapshot --node proxmox-ve1
INFO: Starting Backup of VM 210 (qemu)
INFO: Backup started at 2023-07-17 20:27:46
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: Ubuntu-Server-Docker
INFO: include disk 'scsi0' 'LVM_Sabrent1TB:vm-210-disk-0' 128G
INFO: creating Proxmox Backup Server archive 'vm/210/2023-07-18T03:27:46Z'
INFO: starting kvm to execute backup task
INFO: started backup task '35263745-c41f-488a-a026-27c3fdd64a18'
INFO: scsi0: dirty-bitmap status: created new
INFO:   0% (528.0 MiB of 128.0 GiB) in 3s, read: 176.0 MiB/s, write: 136.0 MiB/s
INFO:   1% (2.1 GiB of 128.0 GiB) in 7s, read: 416.0 MiB/s, write: 115.0 MiB/s
INFO:   2% (2.6 GiB of 128.0 GiB) in 10s, read: 153.3 MiB/s, write: 153.3 MiB/s
INFO:   3% (4.0 GiB of 128.0 GiB) in 18s, read: 186.5 MiB/s, write: 170.0 MiB/s
INFO:   4% (5.2 GiB of 128.0 GiB) in 23s, read: 236.0 MiB/s, write: 232.0 MiB/s
INFO:   5% (6.6 GiB of 128.0 GiB) in 30s, read: 197.7 MiB/s, write: 175.4 MiB/s
INFO:   6% (7.7 GiB of 128.0 GiB) in 35s, read: 244.0 MiB/s, write: 244.0 MiB/s
INFO:   7% (9.1 GiB of 128.0 GiB) in 38s, read: 476.0 MiB/s, write: 261.3 MiB/s
INFO:   8% (10.9 GiB of 128.0 GiB) in 41s, read: 617.3 MiB/s, write: 244.0 MiB/s
INFO:   9% (11.6 GiB of 128.0 GiB) in 45s, read: 170.0 MiB/s, write: 140.0 MiB/s
INFO:  10% (13.4 GiB of 128.0 GiB) in 51s, read: 302.0 MiB/s, write: 186.7 MiB/s
INFO:  11% (14.8 GiB of 128.0 GiB) in 54s, read: 497.3 MiB/s, write: 225.3 MiB/s
INFO:  12% (15.4 GiB of 128.0 GiB) in 57s, read: 202.7 MiB/s, write: 181.3 MiB/s
INFO:  13% (16.7 GiB of 128.0 GiB) in 1m 2s, read: 262.4 MiB/s, write: 211.2 MiB/s
INFO:  14% (18.0 GiB of 128.0 GiB) in 1m 8s, read: 215.3 MiB/s, write: 169.3 MiB/s
INFO:  15% (19.3 GiB of 128.0 GiB) in 1m 13s, read: 274.4 MiB/s, write: 198.4 MiB/s
INFO:  16% (20.6 GiB of 128.0 GiB) in 1m 19s, read: 216.0 MiB/s, write: 192.0 MiB/s
INFO:  17% (22.2 GiB of 128.0 GiB) in 1m 23s, read: 409.0 MiB/s, write: 209.0 MiB/s
INFO:  18% (23.5 GiB of 128.0 GiB) in 1m 26s, read: 436.0 MiB/s, write: 216.0 MiB/s
INFO:  19% (24.3 GiB of 128.0 GiB) in 1m 30s, read: 222.0 MiB/s, write: 175.0 MiB/s
INFO:  20% (26.2 GiB of 128.0 GiB) in 1m 33s, read: 657.3 MiB/s, write: 185.3 MiB/s
INFO:  21% (27.0 GiB of 128.0 GiB) in 1m 38s, read: 156.0 MiB/s, write: 155.2 MiB/s
INFO:  22% (28.5 GiB of 128.0 GiB) in 1m 46s, read: 186.0 MiB/s, write: 153.0 MiB/s
INFO:  23% (29.6 GiB of 128.0 GiB) in 1m 51s, read: 242.4 MiB/s, write: 203.2 MiB/s
INFO:  24% (30.7 GiB of 128.0 GiB) in 2m 3s, read: 93.7 MiB/s, write: 90.3 MiB/s
INFO:  25% (32.0 GiB of 128.0 GiB) in 2m 21s, read: 73.3 MiB/s, write: 64.4 MiB/s
INFO:  26% (33.3 GiB of 128.0 GiB) in 2m 40s, read: 69.3 MiB/s, write: 65.9 MiB/s
INFO:  27% (34.6 GiB of 128.0 GiB) in 3m 2s, read: 59.5 MiB/s, write: 53.6 MiB/s
INFO:  28% (35.9 GiB of 128.0 GiB) in 3m 23s, read: 61.7 MiB/s, write: 58.5 MiB/s
INFO:  29% (37.2 GiB of 128.0 GiB) in 3m 44s, read: 63.8 MiB/s, write: 48.2 MiB/s
INFO:  29% (38.2 GiB of 128.0 GiB) in 3m 59s, read: 69.3 MiB/s, write: 53.9 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: error:0A0003FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1586:SSL alert number 20
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock...
 OK
ERROR: Backup of VM 210 failed - backup write data failed: command error: write_data upload error: pipelined request failed: error:0A0003FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../ssl/record/rec_layer_s3.c:1586:SSL alert number 20
INFO: Failed at 2023-07-17 20:31:48
INFO: Backup job finished with errors
TASK ERROR: job errors

----------------------

Code:
Jul 17 20:27:48 pbs proxmox-backup-proxy[658]: starting new backup on datastore 'backups-4TB': "vm/210/2023-07-18T03:27:46Z"
Jul 17 20:27:48 pbs proxmox-backup-proxy[658]: GET /previous: 400 Bad Request: no valid previous backup
Jul 17 20:27:48 pbs proxmox-backup-proxy[658]: created new fixed index 1 ("vm/210/2023-07-18T03:27:46Z/drive-scsi0.img.fidx")
Jul 17 20:27:48 pbs proxmox-backup-proxy[658]: add blob "/mnt/datastore/backups-4TB/vm/210/2023-07-18T03:27:46Z/qemu-server.conf.blob" (393 bytes, comp: 393)
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: backup failed: connection error: error:0A000119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:622:
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: removing failed backup
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: error reading a body from connection: error:0A000119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:622:
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: removing backup snapshot "/mnt/datastore/backups-4TB/vm/210/2023-07-18T03:27:46Z"
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: TASK ERROR: connection error: error:0A000119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:622:
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:47 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:48 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:49 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:50 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:51 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Jul 17 20:31:52 pbs proxmox-backup-proxy[658]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.

Would love to get this fixed.

UPDATE:
It's only a single successful backup, but this appears to be working now. It makes no sense to me why only one VM would fail, but regardless...

I used this post to resolve the issue.
https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/post-384609

I used the "this boot only" method after installing ethtool.
FWIW this appears to be affecting other systems as well, which supports the hypothesis that it's a driver issue.
https://xcp-ng.org/forum/topic/7463...ailed-or-bad-record-when-runing-full-backup/2
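
For anyone who doesn't want to follow the link: the "this boot only" approach boils down to toggling the offload with ethtool at runtime, roughly like this (interface name is an example; the setting is lost on reboot):

Code:
apt install ethtool
# turn TCP segmentation offload off until the next reboot
ethtool -K enp2s0 tso off
# confirm the change
ethtool -k enp2s0 | grep tcp-segmentation-offload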
 
I have the same error on some CTs/VMs. However, in my case there are no network driver errors. The two sites are connected via Tailscale. When I tried a direct connection by opening a port on my router, the error disappeared and I could sync/back up normally.
I wonder if I can fix the error without opening a port on my router.
 
I have the same error on some CTs/VMs. However, in my case there are no network driver errors. The two sites are connected via Tailscale. When I tried a direct connection by opening a port on my router, the error disappeared and I could sync/back up normally.
I wonder if I can fix the error without opening a port on my router.
I found this thread because I'm seeing the same SSL error as the original post. I, too, am pushing through Tailscale. It was working flawlessly for months but recently stopped. I haven't run updates on anything in the path, so I'm perplexed about what may have changed. I tried updating PVE and PBS, but that didn't help.

All of my NICs are Intel. Disabling TSO didn't help.

I'm wondering if this could be an MTU problem.
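
If it is MTU-related, one quick check is to send non-fragmentable pings across the tunnel and see at what payload size they start failing (the host is a placeholder; Tailscale interfaces typically use an MTU of 1280):

Code:
# 1252 bytes of ICMP payload + 28 bytes of headers = a 1280-byte packet
ping -c 3 -M do -s 1252 <remote-pbs-ip>
# step the size up until pings fail to find the effective path MTU
ping -c 3 -M do -s 1472 <remote-pbs-ip>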
 
