Search results

  1.

    [SOLVED] pvecm updatecert -f - not working

    I spent some hours debugging this issue, it was driving me crazy, and I solved it on my 3-node cluster this way: https://forum.proxmox.com/threads/cant-connect-to-destination-address-using-public-key-task-error-migration-aborted.42390/post-619486 Hope it doesn't trigger again.
  2.

    [SOLVED] Can't connect to destination address using public key TASK ERROR: migration aborted

    Thanks for your post. It worked except for a GUI problem (a connection error when managing other nodes), so I had to add a service restart. In case someone else has the same issue, I ran these commands on all 3 of my nodes: cd /root/.ssh mv id_rsa id_rsa.old mv id_rsa.pub id_rsa.pub.old mv config...
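A sketch of the kind of procedure these two threads describe, assuming a standard Proxmox VE cluster. The key file names, the certificate regeneration via `pvecm updatecerts`, and the `pveproxy` restart (the "service restart" mentioned above) are assumptions about the typical fix, not a verified transcript of the poster's commands; don't run this blindly on a production node.

```
# On each cluster node: move the old SSH keys aside
# (assumes the default root key names; "config" rename truncated in the original post)
cd /root/.ssh
mv id_rsa id_rsa.old
mv id_rsa.pub id_rsa.pub.old

# Regenerate the cluster SSH keys and certificates
pvecm updatecerts -f

# Restart the web proxy so the GUI stops showing connection errors
# when managing other nodes
systemctl restart pveproxy
```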
  3.

    e1000e:reset adapter unexpectedly

    No need to replace it, it's an env variable that contains the interface name for the scripts in /etc/network/if-*.d. Every time an interface comes up or goes down, the scripts in those dirs are executed and that variable is set to the interface name. So now I'm using this, and it works perfectly: auto...
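The mechanism described above can be sketched as an /etc/network/interfaces stanza. The interface name `eno1` and the exact set of offloads to disable are assumptions; `ethtool -K` is the real syntax for toggling NIC offloads, and ifupdown does export `$IFACE` to `post-up` commands:

```
auto eno1
iface eno1 inet dhcp
    # ifupdown runs post-up via the shell with IFACE set to this
    # interface's name ("eno1" here), so the same line works
    # unchanged on any interface
    post-up /usr/sbin/ethtool -K $IFACE tso off gso off gro off
```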
  4.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    You were right, it was the segment offload bug in the e1000 drivers. I used Thomas' recommendation to disable it completely: https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/post-384609 auto lo iface lo inet loopback auto eno1 iface eno1 inet static address...
  5.

    Backup always failing on one specific CT

    But I don't have any reset events for the NIC in the dmesg log. Anyway, the offload issue seems plausible; it would explain the apparent randomness of the issue. There's also this post from Thomas on disabling all offload...
  6.

    e1000e:reset adapter unexpectedly

    Thomas, how does the $IFACE variable get expanded? Is there a post-up script that processes that line and has it defined? Thanks.
  7.

    LXC ZFS + docker overlay2 driver

    That's one of the old workarounds; it's not what I wanted to achieve. You need to check why it worked with full ZFS on the other 2 Docker installations and didn't on this one.
  8.

    LXC ZFS + docker overlay2 driver

    You removed the whole folder with its subfolders?? I think you broke your Docker installation. :) You only had to remove the driver from the JSON file, not the entire folder. I don't know how Docker is starting without that folder.
  9.

    LXC ZFS + docker overlay2 driver

    Looks like you have a driver configured; remove it. Clean up all the customizations you've made: the Docker installation has to be as clean as possible. If you can't do it, reinstall Docker.
  10.

    LXC ZFS + docker overlay2 driver

    Don't specify the driver (remove the daemon.json setting) and remove fuse=1. If you use the latest kernels, Docker should finally choose the overlay2 driver instead of the obsolete and inefficient vfs driver. I use privileged containers; you seem to use both privileged and unprivileged, you need...
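For reference, the daemon.json setting being discussed is the storage-driver override at Docker's standard config path, /etc/docker/daemon.json. A minimal sketch of the key to delete (the `fuse-overlayfs` value here is an assumption based on the fuse-overlayfs workaround mentioned in the thread); with the key gone, Docker auto-selects the best available driver:

```json
{
  "storage-driver": "fuse-overlayfs"
}
```

Removing the whole key (or the file, if this is its only setting) and restarting the Docker daemon lets it fall back to overlay2 on a sufficiently new kernel.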
  11.

    LXC ZFS + docker overlay2 driver

    I simply unrolled what I did following this guide: https://c-goes.github.io/posts/proxmox-lxc-docker-fuse-overlayfs/
  12.

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Before 6.1, with the ZFS LXC, the default storage driver would be the terrible vfs, so the only way was changing Docker's config to use fuse-overlayfs. I installed 6.1, uninstalled fuse-overlayfs in the LXC, rebooted the PVE node, and then, to my surprise, in the LXC Docker container I see this: ❯...
  13.

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    I upgraded my 2 PVE nodes to 6.1.10 and noticed that my Docker v23 LXC on ZFS finally works with the overlay2 storage driver. No more fuse-overlayfs. Really nice. I wonder if someone could point me to what changed at the kernel level to finally allow support for this. Thanks in advance.
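A quick way to verify which storage driver Docker picked after a change like this, using the standard `docker info` command and its `--format` Go-template flag:

```
# Print only the active storage driver (e.g. "overlay2" or "vfs")
docker info --format '{{.Driver}}'
```

Run inside the LXC container after rebooting; if it still reports vfs or fuse-overlayfs, the daemon config or kernel is still forcing the fallback.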
  14.

    LXC ZFS + docker overlay2 driver

    I had Docker v23 with a 5.x kernel, and it never worked; it always reverted to vfs, so I used fuse-overlayfs. I upgraded the PVE nodes' kernel to 6.1.10 (it's opt-in for now), removed fuse-overlayfs, and after rebooting the nodes, the Docker LXC container on ZFS finally showed proper support for...
  15.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    @Matthias. It would be interesting to know which update addressed the issue. Where can I see the details of the updates to the Proxmox Backup client (and maybe also the server)? I can't see a dropdown near the title or the first post. Where is it? Maybe in the threads view?
  16.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    @Matthias. Just to let you know that I tried again today, with all PVE nodes updated and PBS too, and finally all backups completed without errors. :) I didn't change anything at the hw level, and I don't know which of last month's updates did the trick, but it worked. Here are the current versions of...
  17.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    This Intel NUC was previously a PVE node, but I converted it to PBS. It never had any hw issues. I switched cables and changed switches (I have 2 switches). There are no tx/rx errors on the switch ports. I tried a high-quality CAT6 cable. That's why I'm frustrated... I don't know what else I can try. The only...
  18.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    Hi Matthias. I regenerated the keys as instructed, removed the datastore and backup job, recreated the datastore and job, ran it manually, and got the same issue. :( Since backups to the Tuxis datastore are working fine, I thought the issue was the PBS installation, so I decided to reinstall PBS from scratch on the...
  19.

    Backups Failing on all containers/VMs: "HTTP/2.0 connection failed"

    One other strange thing: I have that scheduled backup, which gives errors, and another remote backup to Tuxis, and the remote one is perfect, never an error, even though it backs up everything just like the failing local one. Does that give you any more info about what the issue could be?