[SOLVED] Backups fail - SEGFAULT tokio-runtime-w

Alexander Nilsson

Hi, all my backups larger than (about) 1 GB fail. Here is a sample task log:

Code:
INFO: starting new backup job: vzdump 124 --node vmh3 --mode snapshot --remove 0 --storage pbs-primary
INFO: Starting Backup of VM 124 (lxc)
INFO: Backup started at 2020-10-09 09:13:52
INFO: status = running
INFO: CT Name: bjornlokan
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
/dev/rbd4
INFO: creating Proxmox Backup Server archive 'ct/124/2020-10-09T07:13:52Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp841076/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 124 --backup-time 1602227632 --repository root@pam@backup.nilsson.link:primary
INFO: Starting backup: ct/124/2020-10-09T07:13:52Z
INFO: Client name: vmh3
INFO: Starting backup protocol: Fri Oct  9 09:13:53 2020
INFO: Upload config file '/var/tmp/vzdumptmp841076/etc/vzdump/pct.conf' to 'root@pam@backup.nilsson.link:8007:primary' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@backup.nilsson.link:8007:primary' as root.pxar.didx
INFO: root.pxar: had to upload 65.51 MiB of 1.35 GiB in 78.92s, average speed 849.95 KiB/s).
INFO: root.pxar: backup was done incrementally, reused 1.29 GiB (95.3%)
INFO: Uploaded backup catalog (781.80 KiB)
INFO: catalog upload error - broken pipe
INFO: Error: broken pipe
INFO: remove vzdump snapshot
Removing snap: 100% complete...done.
ERROR: Backup of VM 124 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp841076/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 124 --backup-time 1602227632 --repository root@pam@backup.nilsson.link:primary' failed: exit code 255
INFO: Failed at 2020-10-09 09:15:13
INFO: Backup job finished with errors
TASK ERROR: job errors

And here is the syslog (on the node) from the same timeframe:

Code:
Oct 09 09:13:52 vmh3 pvedaemon[1528]: <root@pam> starting task UPID:vmh3:000CD574:0088123B:5F800DB0:vzdump:124:root@pam:
Oct 09 09:13:52 vmh3 pvedaemon[841076]: INFO: starting new backup job: vzdump 124 --node vmh3 --mode snapshot --remove 0 --storage pbs-primary
Oct 09 09:13:52 vmh3 pvedaemon[841076]: INFO: Starting Backup of VM 124 (lxc)
Oct 09 09:13:52 vmh3 kernel: rbd: rbd4: capacity 8589934592 features 0x3d
Oct 09 09:13:52 vmh3 kernel: EXT4-fs (rbd4): mounted filesystem without journal. Opts: noload
Oct 09 09:14:00 vmh3 systemd[1]: Starting Proxmox VE replication runner...
Oct 09 09:14:01 vmh3 systemd[1]: pvesr.service: Succeeded.
Oct 09 09:14:01 vmh3 systemd[1]: Started Proxmox VE replication runner.
Oct 09 09:15:00 vmh3 systemd[1]: Starting Proxmox VE replication runner...
Oct 09 09:15:01 vmh3 systemd[1]: pvesr.service: Succeeded.
Oct 09 09:15:01 vmh3 systemd[1]: Started Proxmox VE replication runner.
Oct 09 09:15:13 vmh3 pvedaemon[841076]: ERROR: Backup of VM 124 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp841076/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 124 --backup-time 1602227632 --repository root@pam@backup.nilsson.link:primary' failed: exit code 255
Oct 09 09:15:13 vmh3 pvedaemon[841076]: INFO: Backup job finished with errors
Oct 09 09:15:13 vmh3 pvedaemon[841076]: job errors
Oct 09 09:15:13 vmh3 pvedaemon[1528]: <root@pam> end task UPID:vmh3:000CD574:0088123B:5F800DB0:vzdump:124:root@pam: job errors

Probably more helpful is the syslog from the PBS (note the tokio-runtime-w segfault):

Code:
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: register worker
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: FILE: "/var/log/proxmox-backup/tasks/EB/UPID:backup:00000268:000005EB:00000004:5F800DB1:backup:primary_ct_124:root@pam:"
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: starting new backup on datastore 'primary': "ct/124/2020-10-09T07:13:52Z"
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: download 'index.json.blob' from previous backup.
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: add blob "/mnt/datastore/primary/ct/124/2020-10-09T07:13:52Z/pct.conf.blob" (189 bytes, comp: 189)
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: created new dynamic index 1 ("ct/124/2020-10-09T07:13:52Z/catalog.pcat1.didx")
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: register chunks in 'root.pxar.didx' from previous backup.
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: download 'root.pxar.didx' from previous backup.
Oct 09 09:13:53 backup proxmox-backup-proxy[616]: created new dynamic index 2 ("ct/124/2020-10-09T07:13:52Z/root.pxar.didx")
Oct 09 09:13:54 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:00 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:01 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:05 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:11 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:11 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:14 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:20 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:21 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:24 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:30 backup proxmox-backup-proxy[616]: error during snapshot file listing: 'unable to load blob '"/mnt/datastore/primary/ct/124/2020-10-09T07:13:52Z/index.json.blob"' - No such file or directory (os error 2)'
Oct 09 09:14:30 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:31 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:35 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:41 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:41 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:44 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:50 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:52 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:14:54 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:00 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:01 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:05 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:10 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:11 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Upload statistics for 'root.pxar.didx'
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: UUID: 2c2dc08b9db64509a8879f7191fb1243
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Checksum: 5e5c69866564e361e1aee994c106a43050c2c1e146a24b447cf8c00e08d55c24
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Size: 1450963972
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Chunk count: 424
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Upload size: 68688884 (4%)
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Duplicates: 412+0 (97%)
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Compression: 20%
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: successfully closed dynamic index 2
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Upload statistics for 'catalog.pcat1.didx'
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: UUID: dab73f205b264d01be947549c0e1d5a3
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Checksum: e4e5cb17d52c829c280456647b3cf98886e2cdcae8c7b4794a70904bf4f2523b
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Size: 800568
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Chunk count: 4
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Upload size: 800568 (100%)
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Duplicates: 0+2 (50%)
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: Compression: 41%
Oct 09 09:15:12 backup proxmox-backup-proxy[616]: successfully closed dynamic index 1
Oct 09 09:15:12 backup kernel: tokio-runtime-w[922]: segfault at 563de1e69000 ip 0000563de1ad56e0 sp 00007f7f5230aa60 error 4 in proxmox-backup-proxy[563de14cb000+6e8000]
Oct 09 09:15:12 backup kernel: Code: 00 00 00 01 00 00 48 8b 53 08 49 8d 40 ff 44 89 c6 83 e6 07 48 83 f8 07 72 71 48 89 f0 4c 29 c0 66 2e 0f 1f 84 00 00 00 00 00 <0f> b6 1a 48 31 fb 48 0f af d9 0f b6 7a 01 48 31 df 48 0f af f9 0f
Oct 09 09:15:12 backup systemd[1]: proxmox-backup-proxy.service: Main process exited, code=killed, status=11/SEGV
Oct 09 09:15:12 backup systemd[1]: proxmox-backup-proxy.service: Failed with result 'signal'.
Oct 09 09:15:12 backup systemd[1]: proxmox-backup-proxy.service: Service RestartSec=100ms expired, scheduling restart.
Oct 09 09:15:12 backup systemd[1]: proxmox-backup-proxy.service: Scheduled restart job, restart counter is at 1.
Oct 09 09:15:12 backup systemd[1]: Stopped Proxmox Backup API Proxy Server.
Oct 09 09:15:12 backup systemd[1]: Starting Proxmox Backup API Proxy Server...
Oct 09 09:15:12 backup systemd[1]: Started Proxmox Backup API Proxy Server.
Oct 09 09:15:14 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:14 backup proxmox-backup-proxy[931]: Detected stopped UPID UPID:backup:00000268:000005EB:00000004:5F800DB1:backup:primary_ct_124:root@pam:
Oct 09 09:15:20 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:21 backup proxmox-backup-api[590]: successful auth for user 'root@pam'
Oct 09 09:15:24 backup proxmox-backup-api[590]: successful auth for user 'root@pam'

A few relevant facts:
  • The entry for the failed backup in the Datastore Content list (on the PBS) has a spinning wheel instead of the size.
  • Verification of the backup fails with "manifest load error: unable to load blob '"/mnt/datastore/primary/ct/124/2020-10-09T07:13:52Z/index.json.blob"' - No such file or directory (os error 2)"
  • The dashboard does not mention any failed backups.
  • All my backups (with the exception of a few tiny containers) have failed during the last two days. This is true for all 3 of my nodes.
  • It started happening right after I upgraded all my nodes and my PBS to the latest versions:
    • PBS: 0.9-0
    • PVE: 6.2-12
  • Deleting the bad backups has no effect.
I will gladly accept any help you can provide; please tell me what more information you need.

Thanks!
 
could you try to get a coredump on PBS (e.g. by attaching gdb to the running proxmox-backup-proxy, or setting up coredumpctl) and extract a backtrace (in gdb: 'thread apply all bt full') from that? please also include the exact version of proxmox-backup-server and -client, thanks!
 
forgot to add: please install 'proxmox-backup-server-dbgsym' before extracting the backtrace ;)
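
for reference, a minimal sketch of one way to capture the dump (assuming systemd-coredump; once installed, coredumpctl picks up the crash automatically):

Code:
# install debug symbols plus coredump tooling
apt install proxmox-backup-server-dbgsym systemd-coredump gdb
# after the proxy segfaults, locate the dump and open it in gdb
coredumpctl list proxmox-backup-proxy
coredumpctl gdb proxmox-backup-proxy
# inside gdb: full backtrace of all threads
(gdb) thread apply all bt full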
 
Versions:

Code:
root@backup:~# dpkg -l | grep proxmox-backup
ii  proxmox-backup                 1.0-4                        all          Proxmox Backup Server metapackage
ii  proxmox-backup-client          0.9.0-2                      amd64        Proxmox Backup Client tools
ii  proxmox-backup-docs            0.9.0-2                      all          Proxmox Backup Documentation
ii  proxmox-backup-server          0.9.0-2                      amd64        Proxmox Backup Server daemon with tools and GUI
ii  proxmox-backup-server-dbgsym   0.9.0-2                      amd64        debug symbols for proxmox-backup-server
Code:
root@vmh3:~# dpkg -l | grep proxmox-backup
ii  libproxmox-backup-qemu0              0.7.0-1                      amd64        Proxmox Backup Server client library for QEMU
ii  proxmox-backup-client                0.9.0-2                      amd64        Proxmox Backup Client tools

Small backtrace of the thread that segfaulted:

Code:
#0  <fnv::FnvHasher as core::hash::Hasher>::write (bytes=..., self=<optimized out>) at /usr/share/cargo/registry/fnv-1.0.6/lib.rs:106
#1  <http::header::name::Custom as core::hash::Hash>::hash (self=<optimized out>, hasher=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:2109
#2  <http::header::name::Repr<T> as core::hash::Hash>::hash (self=<optimized out>, state=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:45
#3  <http::header::name::HeaderName as core::hash::Hash>::hash (self=<optimized out>, state=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:33
#4  http::header::map::hash_elem_using (danger=<optimized out>, k=<optimized out>) at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:3223
#5  0x000055672eb0bbd6 in http::header::map::HeaderMap<T>::find (self=0x7f3f938750f0, key=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:1287
#6  <&http::header::name::HeaderName as http::header::map::as_header_name::Sealed>::find (map=0x7f3f938750f0, self=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:3389
#7  http::header::map::HeaderMap<T>::remove (self=0x7f3f938750f0, key=0x100000001b3) at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:1356
#8  0x000055672eb1aa91 in hyper::proto::h2::strip_connection_headers (headers=0x7f3f938750f0, is_request=false)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/mod.rs:44
#9  0x000055672e865439 in hyper::proto::h2::server::H2Stream<F,B>::poll2 (self=..., cx=<optimized out>)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/server.rs:398
#10 <hyper::proto::h2::server::H2Stream<F,B> as core::future::future::Future>::poll (self=..., cx=0x7f3f938753d0)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/server.rs:437
#11 0x000055672e8b5722 in tokio::runtime::task::core::Core<T,S>::poll::{{closure}} (ptr=<optimized out>)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:173
#12 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (self=<optimized out>, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/loom/std/unsafe_cell.rs:14
#13 tokio::runtime::task::core::Core<T,S>::poll (self=0x7f3f70000c50, header=0x7f3f70000c20)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:158
#14 tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:107
#15 core::ops::function::FnOnce::call_once () at /usr/src/rustc-1.45.0/src/libcore/ops/function.rs:232
#16 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:318
#17 0x000055672e7e073c in std::panicking::try::do_call (data=<optimized out>) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:297
#18 std::panicking::try (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:274
#19 std::panic::catch_unwind (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:394
#20 tokio::runtime::task::harness::Harness<T,S>::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:89
#21 0x000055672eb6b586 in tokio::runtime::task::raw::RawTask::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/raw.rs:66
#22 tokio::runtime::task::Notified<S>::run (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/mod.rs:169
#23 tokio::runtime::thread_pool::worker::Context::run_task::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:349
#24 tokio::coop::with_budget::{{closure}} (cell=0x7f3f93876482) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:127
#25 std::thread::local::LocalKey<T>::try_with (self=<optimized out>, f=...) at /usr/src/rustc-1.45.0/src/libstd/thread/local.rs:263
#26 std::thread::local::LocalKey<T>::with (self=<optimized out>, f=...) at /usr/src/rustc-1.45.0/src/libstd/thread/local.rs:239
#27 0x000055672eb5eb52 in tokio::coop::with_budget (budget=..., f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:120
#28 tokio::coop::budget (f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:96
#29 tokio::runtime::thread_pool::worker::Context::run_task (self=<optimized out>, task=..., core=0x556730eaf700)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:348
#30 0x000055672eb5e51f in tokio::runtime::thread_pool::worker::Context::run (self=<optimized out>, core=0x556730eaf700)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:327
#31 0x000055672eb6d485 in tokio::runtime::thread_pool::worker::run::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:305
#32 tokio::macros::scoped_tls::ScopedKey<T>::set (self=<optimized out>, t=0x7f3f93875780, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/macros/scoped_tls.rs:63
#33 0x000055672eb5dd3b in tokio::runtime::thread_pool::worker::run (worker=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:302
#34 0x000055672e64af77 in tokio::runtime::thread_pool::worker::block_in_place::{{closure}}::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:256
#35 <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll (self=..., _cx=<optimized out>)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/task.rs:38
#36 tokio::runtime::task::core::Core<T,S>::poll::{{closure}} (ptr=0x7f3f702fc920) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:173
#37 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (self=0x7f3f702fc920, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/loom/std/unsafe_cell.rs:14
#38 0x000055672e8bdaaa in tokio::runtime::task::core::Core<T,S>::poll (self=<optimized out>, header=0x7f3f702fc8f0)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:158
#39 tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:107
#40 core::ops::function::FnOnce::call_once () at /usr/src/rustc-1.45.0/src/libcore/ops/function.rs:232
#41 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:318
#42 0x000055672e7da4c8 in std::panicking::try::do_call (data=<optimized out>) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:297
#43 std::panicking::try (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:274
#44 std::panic::catch_unwind (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:394
#45 tokio::runtime::task::harness::Harness<T,S>::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:89
#46 0x000055672eb5cc87 in tokio::runtime::task::raw::RawTask::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/raw.rs:66
#47 tokio::runtime::task::Notified<S>::run (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/mod.rs:169
#48 tokio::runtime::blocking::pool::Inner::run (self=0x556730eaf990) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:230
#49 0x000055672eb7ada8 in tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:210
#50 tokio::runtime::context::enter (new=..., f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/context.rs:72
#51 0x000055672eb6afdf in tokio::runtime::handle::Handle::enter (self=0x7f3f93875a20, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/handle.rs:76
#52 tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:209
#53 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /usr/src/rustc-1.45.0/src/libstd/sys_common/backtrace.rs:130
#54 0x000055672eb53fd2 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /usr/src/rustc-1.45.0/src/libstd/thread/mod.rs:475
#55 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:318
#56 std::panicking::try::do_call (data=<optimized out>) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:297
#57 std::panicking::try (f=...) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:274
#58 std::panic::catch_unwind (f=...) at /usr/src/rustc-1.45.0/src/libstd/panic.rs:394
#59 std::thread::Builder::spawn_unchecked::{{closure}} () at /usr/src/rustc-1.45.0/src/libstd/thread/mod.rs:474
#60 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /usr/src/rustc-1.45.0/src/libcore/ops/function.rs:232
#61 0x000055672ebaac6a in std::sys::unix::thread::Thread::new::thread_start ()
#62 0x00007f3f94253fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#63 0x00007f3f946f14cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

I'm attaching the coredump so you can get the full backtrace yourself (incredibly long output). There are no secrets in the data anyway.
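
If you want to open the dump yourself, this is roughly how I produced the backtrace above (a sketch; the binary path is illustrative, point gdb at wherever your proxmox-backup-proxy binary actually lives, with the dbgsym package installed):

Code:
gdb /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy /path/to/coredump
(gdb) thread apply all bt full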

Seeing that libcrypto.so is involved, here are the installed versions of the relevant libraries:

Code:
root@backup:~# dpkg -l | grep -E 'crypto|ssl'
ii  gpg-agent                      2.2.12-1+deb10u1             amd64        GNU privacy guard - cryptographic agent
ii  libgnutls-openssl27:amd64      3.6.7-4+deb10u5              amd64        GNU TLS library - OpenSSL wrapper
ii  libhogweed4:amd64              3.4.1-1                      amd64        low level cryptographic library (public-key cryptos)
ii  libk5crypto3:amd64             1.17-3                       amd64        MIT Kerberos runtime libraries - Crypto Library
ii  libnettle6:amd64               3.4.1-1                      amd64        low level cryptographic library (symmetric and one-way cryptos)
ii  libssl1.1:amd64                1.1.1d-0+deb10u3             amd64        Secure Sockets Layer toolkit - shared libraries
ii  libzstd1:amd64                 1.3.8+dfsg-3                 amd64        fast lossless compression algorithm
ii  openssl                        1.1.1d-0+deb10u3             amd64        Secure Sockets Layer toolkit - cryptographic utility
ii  ssl-cert                       1.0.39                       all          simple debconf wrapper for OpenSSL

A few observations:
  • I tried this on a 1.3 GB container, and it worked. But when I created a file with a few GB of /dev/urandom data on the same container, the segfault above appeared (see the sketch after this list).
  • That file is probably why the core dump wouldn't compress to less than 250 MB (800 MB original size).
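
The filler file was created with something like this (a sketch; file name and size are arbitrary, any few GB of incompressible data should do):

Code:
# run inside the container: add ~4 GB of incompressible data
dd if=/dev/urandom of=/root/filler.bin bs=1M count=4096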

Link to coredump: https://cloud.nilsson.link/s/e3ECFx69SzSQ9qj (expires in a week)
 
Hello, I have the same problem. I have 2 PVE installations:
--> one with 6.2.10 --> no problems
--> another with 6.2.12 --> backups fail with exit code 255
 
and you also have a segfault on the PBS side?
 
@Alexander Nilsson :
  • I assume the problem is reproducible, given that you write most backups fail since the upgrade?
  • does it also trigger on VM backups?
  • is it still reproducible if you run systemctl restart proxmox-backup-proxy? (note: will kill all running tasks on the PBS side!)
 
and just to make sure - this only happens on backups over a certain size? is this deterministic, or just 'bigger backups are more likely to trigger it'? how is the load on the PBS side? running close to any resource limits? can you also trigger it by attempting a restore?

I'll try to get a special build ready with more debugging output added if you are able/willing to install that
 
I do not know if it is deterministic, but 90% of my VMs and containers fail. Those that (sometimes) do not fail are either on the smaller side or turned off. I suspect it is determined by the volume of data (or the time from start to finish?) sent over the network, not by the actual disk size of the VM/container; I'm guessing that is why I cannot give you a straight answer on where the limit appears to be.

I'm willing to install and test a debug version.

BTW, every time the backup fails it results in a backup entry of size 1 B. If I do not remove it manually, I get a big bunch of follow-up errors like "TASK ERROR: connection error: Transport endpoint is not connected (os error 107)" when attempting to download "/mnt/datastore/primary/vm/125/2020-10-10T22:07:52Z/index.json.blob". I guess the file is missing due to the error above and the entire backup is in an inconsistent state.
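
(For reference, this is roughly how I remove such an entry from the CLI; a sketch using the repository from the logs above, so double-check the subcommand names with proxmox-backup-client help on your version:)

Code:
# list snapshots, then drop the broken one
proxmox-backup-client snapshots --repository root@pam@backup.nilsson.link:primary
proxmox-backup-client forget vm/125/2020-10-10T22:07:52Z --repository root@pam@backup.nilsson.link:primary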

Sadly though, the dashboard does not appear to recognise that any backups have failed. It still reports them all as successes, but I don't suppose this is very important.
 
thanks for the info. I'll get back to you tomorrow with some sort of debug build!
 
http://download.proxmox.com/temp/pbs-segfault-debug/

Code:
449c7c48aff919ad0ef8780c0c40cb4763be5f27ba105ba85c7f208361b00650  proxmox-backup-server_0.9.0-2_amd64.deb
4cd1bbdcda05381d58274c4d5883588421642b7649d0a6037f66962a49bf7358  proxmox-backup-server-dbgsym_0.9.0-2_amd64.deb
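
a minimal sketch for fetching, verifying and installing them (assuming the .debs sit directly under the directory linked above):

Code:
wget http://download.proxmox.com/temp/pbs-segfault-debug/proxmox-backup-server_0.9.0-2_amd64.deb
wget http://download.proxmox.com/temp/pbs-segfault-debug/proxmox-backup-server-dbgsym_0.9.0-2_amd64.deb
sha256sum *.deb    # compare against the sums above before installing
dpkg -i proxmox-backup-server_0.9.0-2_amd64.deb proxmox-backup-server-dbgsym_0.9.0-2_amd64.deb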

please disable all other clients (including pvestatd on PVE nodes configured to access this PBS instance) before attempting to reproduce using a single backup, and then post the resulting log (journalctl -u proxmox-backup-proxy). feel free to censor chunk digests if you don't feel comfortable including those in plain.

note: the log will be bigger than usual; after reproducing, I suggest apt install --reinstall proxmox-backup-server proxmox-backup-server-dbgsym to revert to the stock version and reduce logging again.
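
roughly this sequence (a sketch; stopping pvestatd on the PVE nodes is one way to silence the periodic status polling):

Code:
# on each PVE node: pause status polling against this PBS
systemctl stop pvestatd
# ... run a single backup to reproduce ...
# on the PBS host: collect the proxy log
journalctl -u proxmox-backup-proxy > proxy-debug.log
# revert to the stock packages and resume polling
apt install --reinstall proxmox-backup-server proxmox-backup-server-dbgsym
systemctl start pvestatd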

please include the coredump again as well if you are able to reproduce with the debug build!
 
do you have some sort of reverse proxy or regular proxy between your client and pbs server (or any sort of custom client modifications)? because your instance of the server attempts to remove headers that should not be there. not saying that that is the issue itself, but might be why you can trigger it and I can't ;)
 
No, they are on the same network, nothing in between.
EDIT: Also no custom modifications. Everything is a pretty standard PVE setup, configured entirely through the GUI.
 
okay, uploaded another round of debug packages with a bit less output, so hopefully you can still trigger the original issue while getting a bit of log pointing us in the right direction:

http://download.proxmox.com/temp/pbs-segfault-debug/

Code:
9ccdcfb413cd1232a4499e08449f15322e2d398cfd7b2143a593b792f4ddae75  proxmox-backup-server_0.9.1-1_amd64.deb
b52ed51414939a4cc1b9ee0cc6a3a5141da8c644eee3a50cb974dca8d2b6be9b  proxmox-backup-server-dbgsym_0.9.1-1_amd64.deb

they are based on current master but should be compatible with 0.9.0-2 clients.
 
corrected it (it's the same as last time, the previous packages are now in the 'v1' subdir). sorry!
 
Thanks, here are the logs:

Code:
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: register worker
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: FILE: "/var/log/proxmox-backup/tasks/25/UPID:backup:00006B37:05A69125:00000000:5F8E84A2:backup:primary_vm_100:root@pam:"
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: starting new backup on datastore 'primary': "vm/100/2020-10-20T06:33:05Z"
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: download 'index.json.blob' from previous backup.
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: header: "content-type" (24)
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: register chunks in 'drive-scsi0.img.fidx' from previous backup.
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: download 'drive-scsi0.img.fidx' from previous backup.
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: header: "content-type" (24)
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: created new fixed index 1 ("vm/100/2020-10-20T06:33:05Z/drive-scsi0.img.fidx")
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: add blob "/mnt/datastore/primary/vm/100/2020-10-20T06:33:05Z/qemu-server.conf.blob" (311 bytes, comp: 311)
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:06 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:07 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:07 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:09 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:09 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:09 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:09 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:10 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:11 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:15 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:16 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:16 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:16 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:16 backup proxmox-backup-proxy[27447]: header: "content-type" (30)
Oct 20 08:33:17 backup proxmox-backup-proxy[27447]: strip connection headers
Oct 20 08:33:17 backup proxmox-backup-proxy[27447]: header: "content-type" (30)

Small backtrace:
Code:
#0  <fnv::FnvHasher as core::hash::Hasher>::write (bytes=..., self=<optimized out>) at /usr/share/cargo/registry/fnv-1.0.6/lib.rs:106
#1  <http::header::name::Custom as core::hash::Hash>::hash (self=<optimized out>, hasher=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:2109
#2  <http::header::name::Repr<T> as core::hash::Hash>::hash (self=<optimized out>, state=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:45
#3  <http::header::name::HeaderName as core::hash::Hash>::hash (self=<optimized out>, state=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/name.rs:33
#4  http::header::map::hash_elem_using (danger=<optimized out>, k=<optimized out>) at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:3223
#5  0x000055a2b1f1b666 in http::header::map::HeaderMap<T>::find (self=0x7faa3c5c50f0, key=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:1287
#6  <&http::header::name::HeaderName as http::header::map::as_header_name::Sealed>::find (map=0x7faa3c5c50f0, self=<optimized out>)
    at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:3389
#7  http::header::map::HeaderMap<T>::remove (self=0x7faa3c5c50f0, key=0x100000001b3) at /usr/share/cargo/registry/http-0.2.1/src/header/map.rs:1356
#8  0x000055a2b1f2a6f1 in hyper::proto::h2::strip_connection_headers (headers=0x7faa3c5c50f0, is_request=false)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/mod.rs:48
#9  0x000055a2b1c5b6d9 in hyper::proto::h2::server::H2Stream<F,B>::poll2 (self=..., cx=<optimized out>)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/server.rs:398
#10 <hyper::proto::h2::server::H2Stream<F,B> as core::future::future::Future>::poll (self=..., cx=0x7faa3c5c53d0)
    at /usr/share/cargo/registry/hyper-0.13.7/src/proto/h2/server.rs:437
#11 0x000055a2b1cab232 in tokio::runtime::task::core::Core<T,S>::poll::{{closure}} (ptr=<optimized out>)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:173
#12 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (self=<optimized out>, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/loom/std/unsafe_cell.rs:14
#13 tokio::runtime::task::core::Core<T,S>::poll (self=0x7faa3005f7c0, header=0x7faa3005f790)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:158
#14 tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:107
#15 core::ops::function::FnOnce::call_once () at /usr/src/rustc-1.45.0/src/libcore/ops/function.rs:232
#16 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:318
#17 0x000055a2b1bceb5c in std::panicking::try::do_call (data=<optimized out>) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:297
#18 std::panicking::try (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:274
#19 std::panic::catch_unwind (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:394
#20 tokio::runtime::task::harness::Harness<T,S>::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:89
#21 0x000055a2b1f7b330 in tokio::runtime::task::raw::RawTask::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/raw.rs:66
#22 tokio::runtime::task::Notified<S>::run (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/mod.rs:169
#23 tokio::runtime::thread_pool::worker::Context::run_task::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:370
#24 tokio::coop::with_budget::{{closure}} (cell=0x7faa3c5c6482) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:127
#25 std::thread::local::LocalKey<T>::try_with (self=<optimized out>, f=...) at /usr/src/rustc-1.45.0/src/libstd/thread/local.rs:263
#26 std::thread::local::LocalKey<T>::with (self=<optimized out>, f=...) at /usr/src/rustc-1.45.0/src/libstd/thread/local.rs:239
#27 0x000055a2b1f6e7e2 in tokio::coop::with_budget (budget=..., f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:120
#28 tokio::coop::budget (f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/coop.rs:96
#29 tokio::runtime::thread_pool::worker::Context::run_task (self=<optimized out>, task=..., core=0x55a2b3745700)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:348
#30 0x000055a2b1f6dede in tokio::runtime::thread_pool::worker::Context::run (self=<optimized out>, core=0x55a2b3745700)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:320
#31 0x000055a2b1f7d115 in tokio::runtime::thread_pool::worker::run::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:305
#32 tokio::macros::scoped_tls::ScopedKey<T>::set (self=<optimized out>, t=0x7faa3c5c5780, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/macros/scoped_tls.rs:63
#33 0x000055a2b1f6d9cb in tokio::runtime::thread_pool::worker::run (worker=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:302
#34 0x000055a2b1a49797 in tokio::runtime::thread_pool::worker::block_in_place::{{closure}}::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/thread_pool/worker.rs:256
#35 <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll (self=..., _cx=<optimized out>)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/task.rs:38
#36 tokio::runtime::task::core::Core<T,S>::poll::{{closure}} (ptr=0x7faa38021f30) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:173
#37 tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (self=0x7faa38021f30, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/loom/std/unsafe_cell.rs:14
#38 0x000055a2b1cb108a in tokio::runtime::task::core::Core<T,S>::poll (self=<optimized out>, header=0x7faa38021f00)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/core.rs:158
#39 tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:107
#40 core::ops::function::FnOnce::call_once () at /usr/src/rustc-1.45.0/src/libcore/ops/function.rs:232
#41 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:318
#42 0x000055a2b1bcff98 in std::panicking::try::do_call (data=<optimized out>) at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:297
#43 std::panicking::try (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panicking.rs:274
#44 std::panic::catch_unwind (f=<error reading variable: access outside bounds of object referenced via synthetic pointer>)
    at /usr/src/rustc-1.45.0/src/libstd/panic.rs:394
#45 tokio::runtime::task::harness::Harness<T,S>::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/harness.rs:89
#46 0x000055a2b1f6c917 in tokio::runtime::task::raw::RawTask::poll (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/raw.rs:66
#47 tokio::runtime::task::Notified<S>::run (self=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/task/mod.rs:169
#48 tokio::runtime::blocking::pool::Inner::run (self=0x55a2b3745990) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:230
#49 0x000055a2b1f8aa38 in tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}::{{closure}} ()
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:210
#50 tokio::runtime::context::enter (new=..., f=...) at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/context.rs:72
#51 0x000055a2b1f7ac6f in tokio::runtime::handle::Handle::enter (self=0x7faa3c5c5a20, f=...)
    at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/handle.rs:76
#52 tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} () at /usr/share/cargo/registry/tokio-0.2.21/src/runtime/blocking/pool.rs:209
#53 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /usr/src/rustc-1.45.0/src/libstd/sys_common/backtrace.rs:130
#54 0x000055a2b1f63c62 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /usr/src/rustc-1.45.0/src/libstd/thread/mod.rs:475
#55 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)


Here is the core dump: https://cloud.nilsson.link/s/cNkdEeFz7dKJqnM (expires in a week)
 
