tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

fiveangle

Since upgrading to PBS 4.1.4-1 (from 4.1.2-1) I've been getting a nightly failure on an LXC, where the backup bombs out partway through with a
Code:
HTTP/2.0 connection failed
entry. I tested a full manual backup of the LXC to local storage and it completed successfully. The nightly backup failed again tonight, but this time it left a bit more info behind: it got past the first couple of HTTP errors, then bombed out with more detail, suggesting I enable the "RUST_BACKTRACE=1" environment variable. But nowhere on the Interwebs can I find how to set this variable so that it gets passed to the PBS client executing the scheduled backup of the LXC. And it appears I cannot run the command by hand, since it fires after some automation that takes the PBS snapshot and then passes those "vzsnap0" zfs snapshots as arguments for the backup. I'm at a loss on where to go from here. The failing log in question:

Code:
INFO: Backup finished at 2026-03-01 22:21:38
INFO: Starting Backup of VM 1010 (lxc)
INFO: Backup started at 2026-03-01 22:21:38
INFO: status = running
INFO: CT Name: storage
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/var/lib/docker') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
INFO: resume vm
INFO: guest is online again after 2 seconds
INFO: creating Proxmox Backup Server archive 'ct/1010/2026-03-02T06:21:38Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.conf fw.conf:/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.fw root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./var/lib/docker --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 1010 --backup-time 1772432498 --change-detection-mode metadata --entries-max 1048576 --repository root@pam@pbs.5angle.com:datastore --ns pve.5angle.com
INFO: Starting backup: [pve.5angle.com]:ct/1010/2026-03-02T06:21:38Z   
INFO: Client name: richie   
INFO: Starting backup protocol: Sun Mar  1 22:21:40 2026   
INFO: Downloading previous manifest (Tue Feb 17 21:17:49 2026)   
INFO: Upload config file '/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.conf' to 'root@pam@pbs.5angle.com:8007:datastore' as pct.conf.blob   
INFO: Upload config file '/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.fw' to 'root@pam@pbs.5angle.com:8007:datastore' as fw.conf.blob   
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@pbs.5angle.com:8007:datastore' as root.mpxar.didx   
INFO: Using previous index as metadata reference for 'root.mpxar.didx'   
INFO: processed 1.267 GiB in 1m, uploaded 916.201 MiB
INFO: processed 1.9 GiB in 2m, uploaded 1.346 GiB
INFO: processed 3.142 GiB in 3m, uploaded 1.938 GiB
INFO: processed 3.234 GiB in 4m, uploaded 2.009 GiB
INFO: processed 3.234 GiB in 5m, uploaded 2.009 GiB
INFO: processed 21.629 GiB in 6m, uploaded 2.018 GiB
INFO: processed 29.789 GiB in 7m, uploaded 2.108 GiB
INFO: HTTP/2.0 connection failed
INFO: HTTP/2.0 connection failed
INFO: processed 38.62 GiB in 8m, uploaded 2.277 GiB
INFO: processed 38.62 GiB in 9m, uploaded 2.277 GiB
INFO: processed 38.62 GiB in 10m, uploaded 2.277 GiB
INFO: processed 38.62 GiB in 11m, uploaded 2.277 GiB
INFO: thread 'tokio-runtime-worker' panicked at /usr/share/cargo/registry/proxmox-backup-4.1.4/pbs-client/src/chunk_stream.rs:125:49:
INFO: called `Result::unwrap()` on an `Err` value: SendError { .. }
INFO: note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
INFO: unclosed encoder dropped
INFO: closed encoder dropped with state
INFO: unfinished encoder state dropped
INFO: unfinished encoder state dropped
INFO: Error: upload failed: pipelined request failed: connection reset - connection reset
INFO: cleanup temporary 'vzdump' snapshot
ERROR: Backup of VM 1010 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.conf fw.conf:/var/tmp/vzdumptmp942816_1010/etc/vzdump/pct.fw root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./var/lib/docker --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 1010 --backup-time 1772432498 --change-detection-mode metadata --entries-max 1048576 --repository root@pam@pbs.5angle.com:datastore --ns pve.5angle.com' failed: exit code 255
INFO: Failed at 2026-03-01 22:33:31
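For what it's worth, the `SendError` panic itself is likely just a symptom: in Rust, sending on a channel whose receiving end has been dropped returns `Err(SendError)`, so once the HTTP/2 upload side died, the chunking worker's `unwrap()` blew up. A minimal stdlib sketch of that failure mode (illustrative only, not PBS code):

```rust
use std::sync::mpsc;

// Returns true if sending on a channel whose receiver was dropped fails.
fn send_fails_when_receiver_dropped() -> bool {
    let (tx, rx) = mpsc::channel::<u32>();
    drop(rx); // simulate the consumer (the upload task) dying
    tx.send(1).is_err() // Err(SendError(1)); calling unwrap() here would panic
}

fn main() {
    // This mirrors the panic in the log: the worker unwrap()s a send
    // whose peer is gone, so it dies with `SendError { .. }`.
    assert!(send_fails_when_receiver_dropped());
    println!("send on closed channel returned SendError, as expected");
}
```

So the panic points back at the "HTTP/2.0 connection failed" lines rather than being the root cause itself.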

Any ideas how to enable more verbose logging?
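One possible way to get the variable through to the scheduled job (a sketch, assuming the scheduled vzdump jobs on this PVE host are launched by the pvescheduler.service unit) would be a systemd drop-in that exports it to the service and everything it spawns:

```shell
# Hypothetical sketch: make RUST_BACKTRACE visible to scheduled vzdump jobs,
# assuming they are started by pvescheduler.service on this host.
mkdir -p /etc/systemd/system/pvescheduler.service.d
cat > /etc/systemd/system/pvescheduler.service.d/backtrace.conf <<'EOF'
[Service]
Environment=RUST_BACKTRACE=1
EOF
systemctl daemon-reload
systemctl restart pvescheduler.service
```

Environment= on the service should propagate to child processes, including the proxmox-backup-client invocation.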
 
Is there anything sitting between your PVE and PBS host that might drop the connection prematurely?
 
I believe something has indeed gone wrong within the LXC or the host itself. During troubleshooting I noticed that the built-in console view of PBS, when running btop while a backup is running, just stops at some point and requires a browser tab refresh to re-attach. But there isn't anything obvious in the dpkg logs to indicate where things may have been broken by any OS package updates. I will attempt to restore it on another PVE host to narrow down whether the problem is within the LXC or the specific host it's running on. Or perhaps I'll load a fresh install of PBS in a new LXC to see if it hits something similar.

This is potentially frightening, as I can't fathom a plausible root cause other than some new fundamental incompatibility regression somewhere.

Running PBS within a local LXC, with it shuttling its datastore offsite via nightly sync, has been such a huge win for us; it would be a shame to have to go back to discrete hardware.

Will continue to report my findings.
 
Could you check the logs of the PBS system, as well as the backup writer and reader task logs?
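For reference, the task logs can also be pulled on the PBS host from the CLI; a sketch (the UPID below is a placeholder, copy a real one from the list output):

```shell
# On the PBS host: list recent tasks, then dump the log of one of them.
proxmox-backup-manager task list
proxmox-backup-manager task log 'UPID:pbs:...'
```

The writer/reader tasks for the failing backup window are the interesting ones.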