Search results

  1.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I wrote earlier: …and I still have no clue how to do so. What "environment" do backup jobs that call vzdump inside an LXC run in ? vzdump runs inside the LXC, triggered by PVE sending, to the LXC scheduled for backup to the PBS datastore, the commands to run within the context of the LXC itself...
  2.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I still have on my todo list to restore the pbs to its previous failing state and run thru the pbs updates to determine whether a pbs fix resolved it, and will report back for posterity
  3.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    You can see here from the snippet that I posted at the start of this thread that the job fires PBC within the context of the LXC, and after a day and a half of trying (and failing) I couldn’t find how to enable extra-verbose logging for these scheduled jobs: INFO: run: lxc-usernsexec -m...
  4.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    The question isn’t how to increase logging on Proxmox-backup-client. Who uses that to back up PVE ? These backups are typical scheduled backups that are fired off within the context of the lxc thru a layer of obfuscation for which there seems to be no documentation. “PBS_LOG=debug” is also...
  5.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I didn't do deep enough testing to confirm it was scanopy, so it is certainly possible a pbs update was applied between the time I had confirmed it was still failing and when I removed scanopy and tested again (at a later date). I still wish someone had information on how to debug these types...
  6.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I believe this issue is related to some incompatibility with an update to the open-source scanopy network monitoring framework (thank goodness). We disabled scanopy and the network hiccups causing pbs jobs to disconnect ceased. Be forewarned for those of you running scanopy, but good news to...
  7.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I believe something indeed has occurred within the LXC or the host itself, as I have noticed during troubleshooting that the builtin console view of PBS, when running btop while a backup is running, just stops at some point, requiring a browser tab refresh to re-attach, but there isn't anything...
  8.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Is there a timeout lever somewhere in the client that could be fiddled with ? Perhaps things have fallen into a condition where the age-old value is getting bumped up against ?
  9.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    PBS is running as an LXC on the PVE host. I'm not sure how much "closer" it can be.
  10.

    tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Since upgrading to PBS 4.1.4-1 (from 4.1.2-1) I've been getting a nightly fail on an LXC, where it just bombs out partway through, starting with an HTTP/2.0 connection failed entry. I tested a full manual backup of the LXC to local storage and it completed successfully. The nightly backup...
  11.

    [SOLVED] Wireguard and DNS within LXC Container

    BTW- all that business is outdated I believe since, what ? 8.something ? The PVE kernel has wg built in, so no need to install stuff on the host (sounds like you've got that going, but if you did install wireguard-dkms on the PVE host, maybe that's messing with your config ?) To deploy for example...
  12.

    Docker on LXC slow at startup

    Man, I was all excited to finally "fix" this, but guess there must be alternate reasons for this long pause/timeout on docker start, as my interfaces file is just the vanilla/correct contents for the LXC:
    root@portainer:/etc/network# cat interfaces
    auto lo
    iface lo inet loopback
    auto eth0
    iface...
  13.

    Proxmox 8.3.0 and LSI RAID Cards

    Only thing that could have been useful here is the actual error logs :eyes: Not even a mention of the LSI controller used either, but if old enough, perhaps this bug may not outright crash during initialization if the device is passed-thru to a VM with a software-defined KVM CPU featureset...
  14.

    Proxmox VE 8.4 released!

    I'd rather see resources spent on a uid/gid resource mapping interface integrated into PVE… wouldn't that be glorious ? :dream:
  15.

    Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    Yeah, that's why I posted it for you above:
    [...]
    ######## the following command "resizes" the virtual disk size ##########
    root@richie:~# zfs set refquota=8G rpool/data/subvol-122-disk-1 <======= this command
    [...]
    ######## the following command instructs pvesm to realize the new size by updating the...
  16.

    Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    UPDATE: so in my case (and yours) the target vol indeed filled up before the move could complete. Turns out the size=nG spec for LXC rootfs/mpX virtual disks stored on a zfs-backed PVE storage pool references only the literal amount of physical blocks that the LXC's virtual disk consumes...
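    A quick way to see the nominal-size-vs-blocks-consumed distinction the snippet above describes is with a sparse file — an illustrative sketch only (works on any Linux filesystem that supports sparse files, not zfs-specific; the filename is made up):

    ```shell
    # A sparse file reports a large nominal size while consuming few blocks --
    # analogous to an LXC virtual disk whose size=nG spec exceeds the blocks
    # actually written (which is what refquota meters on a zfs subvol).
    truncate -s 1G sparse.img                       # nominal size: 1 GiB
    nominal=$(stat -c %s sparse.img)                # apparent size in bytes
    actual=$(( $(stat -c %b sparse.img) * 512 ))    # 512-byte blocks actually allocated
    echo "nominal=$nominal bytes, allocated=$actual bytes"
    rm -f sparse.img
    ```

    Same idea as the move failure above: a tool that copies by apparent size (or a target that enforces it as a hard quota) needs room for the nominal figure, not just the blocks currently consumed.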
  17.

    Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    I see this thread died, but I'm encountering a similar issue. The only difference is moving the LXC volume from a zfs datastore to an lvm-thin datastore, but otherwise identical results, with plenty of free space on /var/lib/lxc/ Please post back if you found a solution, but I assume you likely backed up...
  18.

    Resize LXC DISK on Proxmox

    Correct. And also for completeness, to shrink an LXC disk on zfs:
    zfs set quota=<new size> <disk dataset>
    zfs set refquota=<new size> <disk dataset>
    pct rescan
    Example:
    root@richie:~# grep subvol /etc/pve/lxc/999.conf
    rootfs: local-zfs:subvol-999-disk-0,mountoptions=noatime,size=4G...
  19.

    How to install Win11 in Proxmox | Quick guide | And fix problems of network search

    As of today (Win11 24H2), the Virtio-SCSI storage driver is:
    virtio-win/vioscsi/w11/amd64
    And the memory ballooning driver is:
    virtio-win/Balloon/w11/amd64