Search results

  1. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Solution: The same upgrade process applies here as well.
    - Backup
    - Test that the backup works
    - Adjust the amount of memory of the containers (I tried with 1024 MB)
    - Update the apt sources:
      deb http://ftp.debian.org/debian bullseye main contrib
      deb http://ftp.debian.org/debian bullseye-updates main...
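    As a sketch, the full set of updated sources might look like the following, assuming the ftp.debian.org mirror from the snippet above; note that the security suite was renamed in bullseye:

        # /etc/apt/sources.list -- minimal bullseye example
        deb http://ftp.debian.org/debian bullseye main contrib
        deb http://ftp.debian.org/debian bullseye-updates main contrib
        # the security suite changed from buster/updates to bullseye-security
        deb http://security.debian.org/debian-security bullseye-security main contrib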
  2. Proxmox VE 6.4 available

    Thanks, I'll just wait for it. I don't know if it's appropriate to ask here, but how can I safely remove the 5.11 meta package? If I try to remove it, it tries to remove all dependencies, including "proxmox-ve"...
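    Before removing anything, it may help to see why apt wants to pull proxmox-ve out with it; a sketch, assuming the meta package is named pve-kernel-5.11 as in the posts below:

        # list packages that depend on the meta package
        apt-cache rdepends pve-kernel-5.11
        # dry-run the removal to preview what apt would take with it
        apt remove --simulate pve-kernel-5.11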
  3. Proxmox VE 6.4 available

    I see, I thought the new 5.11 update you mentioned wasn't a beta, since I could install 5.11 without the pve-test repository... Is this the only way to update the 5.11 kernel, or will it in time come to the non-test repository?
  4. Proxmox VE 6.4 available

    What am I doing wrong? That update didn't show up when I ran apt update. Yesterday I updated my server with apt dist-upgrade -y; apt autoremove -y
  5. Proxmox VE 6.4 available

    There were updates for the kernel pve-kernel-5.4.114-1 yesterday, but not for pve-kernel-5.11. You said we could opt in to 5.11 in the changelogs. Why isn't the optional kernel updated?
  6. recommendations for zfs pool

    You don't want to use consumer SSDs in any backup solution. With ZFS they'll get eaten very fast. Ergo, you actually answered your own question.
  7. Proxmox VE 6.4 available

    How can I opt in to kernel 5.11 while upgrading from PVE 6.3?
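    On PVE 6.4 the opt-in kernel is shipped as a meta package; a minimal sketch, assuming the package name pve-kernel-5.11 mentioned in the posts above:

        apt update
        apt install pve-kernel-5.11   # pulls in the current 5.11 kernel
        reboot                        # boot into the newly installed kernel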
  8. [SOLVED] LXC container :: How to enable TCP BBR?

    sysctl net.ipv4.tcp_congestion_control=bbr or via /proc/sys/net/ipv4/tcp_congestion_control
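    To make the change survive a reboot, a sketch along these lines should work (the drop-in file name is an assumption):

        # load the BBR module and switch the congestion control at runtime
        modprobe tcp_bbr
        sysctl -w net.ipv4.tcp_congestion_control=bbr
        # persist the setting across reboots
        echo "net.ipv4.tcp_congestion_control = bbr" > /etc/sysctl.d/90-bbr.conf

    For a container, the module has to be loaded on the host, since LXC guests share the host kernel.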
  9. e1000 driver hang

    That it works (without auto eno2) on my hardware contradicts his assumption. If the command above does change the settings, you could build a systemd service around it and let it run at boot.
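    A one-shot unit along these lines could do that (the unit name is made up here; the interface and offload flags are taken from the neighbouring posts):

        # /etc/systemd/system/disable-offload.service
        [Unit]
        Description=Disable TSO/GSO to work around e1000 hangs
        After=network.target

        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/ethtool -K eno1 tso off gso off

        [Install]
        WantedBy=multi-user.target

    Enable it with systemctl enable --now disable-offload.service.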
  10. e1000 driver hang

    I mean, if you can't turn off the features with ethtool -K eno1 tso off gso off, changing interface settings won't do much. Edit: Maybe take a look at "dmesg" after you try the command above.
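    To verify the result, lower-case -k prints the current offload settings, and the kernel log shows whether the adapter still hangs; for example:

        # confirm the offloads are really off
        ethtool -k eno1 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
        # look for e1000e "Detected Hardware Unit Hang" messages
        dmesg | grep -i hang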
  11. e1000 driver hang

    On the vmbr0 post-up you used eno1; that can't work. $IFACE should do it. Maybe your hardware or firmware doesn't support it...
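    ifupdown exports the interface name to post-up hooks as $IFACE, so placing the hook on the physical interface's own stanza keeps it generic; a sketch, with the interface name and flags taken from the surrounding posts:

        auto eno1
        iface eno1 inet manual
            # $IFACE expands to eno1 here
            post-up /usr/sbin/ethtool -K $IFACE tso off gso off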
  12. e1000 driver hang

    Does it work if you set the post-up on both interfaces (the virtual one as well)?
  13. e1000 driver hang

    It works here:
  14. e1000 driver hang

    PVE tries to read / hold on to that file. Try "systemctl restart networking.service" instead of ifdown and ifup. Edit: P.S. You use &&, which means the second part won't run if the first one fails. That may be the issue here.
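    The difference matters because a failed ifdown would leave the interface down; for example:

        # with &&, ifup is skipped if ifdown fails, leaving eno1 down
        ifdown eno1 && ifup eno1
        # with ;, ifup runs regardless of the ifdown result
        ifdown eno1; ifup eno1
        # restarting the service sidesteps the problem entirely
        systemctl restart networking.service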
  15. e1000 driver hang

    No, it doesn't work like that. I just have more than 2 physical NICs. I only added that post-up to the one interface that I use for the Proxmox host.
  16. e1000 driver hang

    Here is the working config for me (just set on the physical interface "eno2", which serves the PVE host):
      # network interface settings; autogenerated
      # Please do NOT modify this file directly, unless you know what
      # you're doing.
      #
      # If you want to manage parts of the network configuration manually...
  17. e1000 driver hang

    You have to restart networking in the VMs or LXCs as well; better yet, restart them after changing /etc/network/interfaces
  18. e1000 driver hang

    I can replicate the issue easily if I pass one NIC through to a Windows VM and just let it boot. No heavy use, no large files. Just RDP into the VM and voilà: simply passing the NIC through would crash the kernel driver. After this workaround there hasn't been any crash.
  19. e1000 driver hang

    I can confirm that it resolves the issue, although it is just a workaround.