Recent content by fiveangle

  1.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I wrote earlier: …and I still have no clue how to do so. What "environment" do backup jobs that call vzdump inside an LXC run in ? vzdump runs inside the LXC, triggered by PVE, which sends the LXC that is scheduled for backup to the PBS datastore the commands to run within the context of the LXC itself...
  2.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I still have it on my todo list to restore the PBS to its previous failing state and run thru the PBS updates to determine whether a PBS fix resolved it, and I will report back for posterity
  3.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    You can see here from the snippet that I posted at the start of this thread that the job fires PBC within the context of the LXC, and after a day and a half of trying (and failing) I couldn't find how to enable extra-verbose logging for these scheduled jobs: INFO: run: lxc-usernsexec -m...
  4.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    The question isn't how to increase logging on proxmox-backup-client. Who uses that to back up PVE ? These backups are typical scheduled backups that are fired off within the context of the LXC thru a layer of obfuscation for which there seems to be no documentation. “PBS_LOG=debug” is also...
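For hands-on debugging of the client side itself, one hedged sketch is to run proxmox-backup-client manually with debug logging and compare against the scheduled run. The repository string, hostname, and datastore name below are placeholders I made up, not values from this thread:

```shell
# Placeholder repository: user@realm@host:datastore -- substitute your own.
export PBS_REPOSITORY='root@pam@pbs.example.lan:store1'
export PBS_PASSWORD='secret'   # or supply an API token instead

# PBS_LOG=debug turns on verbose client-side logging for this one invocation.
PBS_LOG=debug proxmox-backup-client backup root.pxar:/ --repository "$PBS_REPOSITORY"
```

This only covers a manual invocation; whether the scheduled vzdump-driven run honors the same environment variable is exactly the open question in this thread.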
  5.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I don't have deep enough testing to confirm it was scanopy, so it is certainly possible a PBS update was applied between the time it was confirmed to be still failing and when I removed scanopy and tested again (at a later date). I still wish someone had information on how to debug these types...
  6.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I believe this issue is related to some incompatibility with an update to the open-source scanopy network monitoring framework (thank goodness). We disabled scanopy and the network hiccups causing PBS jobs to disconnect ceased. Be forewarned, those of you running scanopy, but good news to...
  7.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I believe something indeed has occurred within the LXC or the host itself, as I have noticed during troubleshooting that the built-in console view of PBS, when running btop while a backup is running, just stops at some point, requiring a browser tab refresh to re-attach, but there isn't anything...
  8.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Is there a timeout lever somewhere in the client that could be fiddled with ? Perhaps things have fallen into a condition where the age-old value is getting bumped up against ?
  9.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    PBS is running as an LXC on the PVE host. I'm not sure how much "closer" it can be.
  10.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Since upgrading to PBS 4.1.4-1 (from 4.1.2-1) I've been getting a nightly failure on an LXC, where the backup just bombs out partway through, starting with an HTTP/2.0 connection failed entry. I tested a full manual backup of the LXC to local storage and it completed successfully. The nightly backup...
  11.

    [SOLVED] Wireguard and DNS within LXC Container

    BTW- all that business is outdated, I believe, since, what ? 8.something ? The PVE kernel has wg built-in, so there's no need to install stuff on the host (sounds like you've got that going, but if you did install wireguard-dkms on the PVE host, maybe that's messing with your config ?) To deploy for example...
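The built-in kernel module approach mentioned above can be sketched roughly as follows. The interface name, addresses, keys, and endpoint are all invented for illustration, and this assumes the wg module is available from the PVE host kernel:

```shell
# Inside the LXC: only the userspace tools are needed, no dkms build.
apt install wireguard-tools

# Minimal wg0.conf -- every value here is a placeholder.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <container-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 0.0.0.0/0
EOF

wg-quick up wg0
```

An unprivileged LXC may additionally need the host to permit creating the netdev inside the container; that detail is outside this sketch.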
  12.

    Docker on LXC slow at startup

    Man, I was all excited to finally "fix" this, but I guess there must be alternate reasons for this long pause/timeout on docker start, as my interfaces file is just the vanilla/correct contents for the LXC:
    root@portainer:/etc/network# cat interfaces
    auto lo
    iface lo inet loopback
    auto eth0
    iface...
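For comparison, a complete vanilla /etc/network/interfaces for a DHCP-configured LXC typically looks like the following. This is a generic example of the ifupdown format, not the poster's truncated file:

```shell
# /etc/network/interfaces -- generic LXC defaults, shown for comparison only
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```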
  13.

    Proxmox 8.3.0 and LSI RAID Cards

    The only thing that could have been useful here is the actual error logs :eyes: There's not even a mention of the LSI controller used either, but if it's old enough, perhaps this bug may not outright crash during initialization if the device is passed-thru to a VM with a software-defined KVM CPU featureset...
  14.

    Proxmox VE 8.4 released!

    I'd rather see resources spent on a uid/gid resource-mapping interface integrated into PVE… wouldn't that be glorious ? :dream:
  15.

    Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    Yeah, that's why I posted it for you above:
    [...]
    ######## the following command "resizes" the virtual disk size ##########
    root@richie:~# zfs set refquota=8G rpool/data/subvol-122-disk-1   <======= this command
    [...]
    ######## the following command instructs pvesm to realize the new size by updating the...
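The quoted steps boil down to two commands. Here is a sketch: container ID 122 and the pool path come from the quote, while `pct rescan` is my guess at the truncated follow-up, since the snippet cuts off mid-sentence:

```shell
# Step 1: "resize" the virtual disk by raising the ZFS refquota.
zfs set refquota=8G rpool/data/subvol-122-disk-1

# Step 2: have PVE re-read volume sizes and update the container config.
# (Guessed command -- the original snippet is truncated at this point.)
pct rescan --vmid 122
```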