Recent content by fiveangle

  1. tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    I believe something has indeed occurred within the LXC or the host itself: I've noticed during troubleshooting that the built-in console view of PBS, when running btop while a backup is in progress, just stops at some point and requires a browser tab refresh to re-attach, but there isn't anything...
  2. tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Is there a timeout lever somewhere in the client that could be fiddled with? Perhaps things have fallen into a condition where the age-old value is getting bumped up against?
  3. tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    PBS is running as an LXC on the PVE host. I'm not sure how much "closer" it can be.
  4. tokio-runtime-worker' panicked…called `Result::unwrap()` on an `Err` value: SendError { .. }

    Since upgrading to PBS 4.1.4-1 (from 4.1.2-1) I've been getting a nightly failure on an LXC, where it just bombs out partway through starting, with an "HTTP/2.0 connection failed" entry. I tested a full manual backup of the LXC to local storage and it completed successfully. The nightly backup...
  5. [SOLVED] Wireguard and DNS within LXC Container

    BTW, all that business is outdated, I believe since, what, 8.something? The PVE kernel has wg built in, so there's no need to install anything on the host (sounds like you've got that going, but if you did install wireguard-dkms on the PVE host, maybe that's messing with your config?). To deploy, for example...
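A minimal sketch of the in-container deployment hinted at above, assuming the PVE host kernel already ships the wireguard module (the wg0 interface name and the exact package are assumptions, not from the quoted post):

```shell
# Inside the LXC: only the userspace tools are needed,
# since the host kernel provides the wireguard module
apt install wireguard-tools

# Place a peer config at /etc/wireguard/wg0.conf, then bring the tunnel up
wg-quick up wg0

# Confirm the interface exists and shows peer handshakes
wg show
```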
  6. Docker on LXC slow at startup

    Man, I was all excited to finally "fix" this, but I guess there must be other reasons for this long pause/timeout on docker start, as my interfaces file has just the vanilla/correct contents for the LXC:
    root@portainer:/etc/network# cat interfaces
    auto lo
    iface lo inet loopback
    auto eth0
    iface...
  7. Proxmox 8.3.0 and LSI RAID Cards

    The only thing that could have been useful here is the actual error logs :eyes: Not even a mention of which LSI controller is used either, but if it's old enough, perhaps this bug may not outright crash during initialization if the device is passed through to a VM with a software-defined KVM CPU featureset...
  8. Proxmox VE 8.4 released!

    I'd rather see resources spent on a uid/gid resource mapping interface integrated into PVE… wouldn't that be glorious? :dream:
  9. Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    Yeah, that's why I posted it for you above:
    [...]
    ########the following command "resizes" the virtual disk size##########
    root@richie:~# zfs set refquota=8G rpool/data/subvol-122-disk-1 <=======this command
    [...]
    ########the following command instructs pvesm to realize the new size by updating the...
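Laid out cleanly, the grow sequence quoted above (the dataset name and 8G size come from the quoted post; the follow-up `pct rescan` is taken from the related "Resize LXC DISK" post in this list — substitute your own pool/subvol):

```shell
# "Resize" the virtual disk by raising the refquota on the backing dataset
zfs set refquota=8G rpool/data/subvol-122-disk-1

# Instruct PVE to re-read volume sizes and update size= in the container config
pct rescan
```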
  10. Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    UPDATE: so in my case (and yours) the target vol indeed filled up before the move could complete. Turns out the size=nG spec for LXC rootfs/mpX virtual disks stored on a zfs-backed PVE storage pool references only the literal amount of physical blocks that the LXC's virtual disk consumes...
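A quick way to inspect the mismatch described above on a zfs-backed volume (the dataset name is illustrative):

```shell
# 'refquota' is what size= is derived from; 'used'/'logicalused' show what the
# subvol actually consumes, and compression can make the on-disk blocks smaller
# than the logical data that must fit on the move target
zfs get used,logicalused,refquota,compressratio rpool/data/subvol-122-disk-1
```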
  11. Unable to move LXC container disk(s) to another storage / rsync error:11 / quota exceeded

    I see this thread died, but I'm encountering a similar issue. The only difference is that I'm moving an LXC volume from a zfs datastore to an lvm-thin datastore, but otherwise identical results, with plenty of free space on /var/lib/lxc/. Please post back if you found a solution, but I assume you likely backed up...
  12. Resize LXC DISK on Proxmox

    Correct. And also for completeness, to shrink an LXC disk on zfs:
    zfs set quota=<new size> <disk dataset>
    zfs set refquota=<new size> <disk dataset>
    pct rescan
    Example:
    root@richie:~# grep subvol /etc/pve/lxc/999.conf
    rootfs: local-zfs:subvol-999-disk-0,mountoptions=noatime,size=4G...
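Filled out, the shrink procedure above for the example container 999 (assuming the local-zfs storage maps to rpool/data, as in the earlier refquota post; the 4G target follows the quoted config):

```shell
# Lower both quota and refquota on the container's dataset to the new size
zfs set quota=4G rpool/data/subvol-999-disk-0
zfs set refquota=4G rpool/data/subvol-999-disk-0

# Have PVE re-read the volume size and rewrite size= in /etc/pve/lxc/999.conf
pct rescan
```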
  13. How to install Win11 in Proxmox | Quick guide | And fix problems of network search

    As of today (Win11 24H2), the Virtio-SCSI storage driver is:
    virtio-win/vioscsi/w11/amd64
    And the memory ballooning driver is:
    virtio-win/Balloon/w11/amd64
  14. Slim down Promxmox? Disable corosync, pve-ha services?

    [Not a hijack, just wanted to thank @t.lamprecht for this valuable info] I found this post after thinking this fresh PVE install I did was broken. I <3 the idea of an efficient unified index of events rather than the age-old practice of storing massive amounts of free-form text to wade...