Search results

  1. Linux Kernel 5.13, ZFS 2.1 for Proxmox VE

    Also a warning, see https://forum.proxmox.com/threads/update-to-5-11-22-7-pve-causes-zfs-issues.99401/
  2. Update to 5.11.22-7-pve causes zfs issues

    Hi Thomas, yes, this is PVE 7.0-14 from the community repo with the following kernel and ZFS versions:
    root@Proxmox:~# pveversion
    pve-manager/7.0-14/a9dbe7e3 (running kernel: 5.11.22-7-pve)
    root@Proxmox:~# uname -r
    5.11.22-7-pve
    root@Proxmox:~# zfs -V
    zfs-2.1.1-pve1
    zfs-kmod-2.0.6-pve1
    root@Proxmox:~#...
  3. Update to 5.11.22-7-pve causes zfs issues

    Hello everyone, I just updated my server from 5.11.22-5-pve to 5.11.22-7-pve and have issues afterwards with the commands "arcstat" and "arc_summary". They now throw the following errors:
    root@Proxmox:~# arcstat
    time read miss miss% dmis dm% pmis pm% mmis mm% size c avail...
  4. vTPM support - do we have guide to add the vTPM support?

    When will the pve-manager (7.0-13) be released to the community repository?
  5. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Yes, that was it. The dpkg process gets killed by oom. I bumped up the amount of memory and updated successfully. I'll update the post above...
  6. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Sure, here is a simple container with a Grafana installation:
    arch: amd64
    cores: 1
    hostname: Grafana
    memory: 128
    net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=AE:D7:85:02:0C:0F,ip=dhcp,type=veth
    onboot: 1
    ostype: debian
    rootfs: Containers:subvol-107-disk-1,size=4G
    swap: 128
    unprivileged: 1
    Here...
  7. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Solution: the same upgrade process applies here as well.
    - Backup
    - Test that the backup works
    - Adjust the amount of memory of the containers (I tried with 1024MB)
    - Update the apt sources:
    deb http://ftp.debian.org/debian bullseye main contrib
    deb http://ftp.debian.org/debian bullseye-updates main...
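    For reference, a complete set of bullseye apt sources would look roughly like the sketch below. These are the standard Debian bullseye entries (note that the security suite was renamed from buster/updates to bullseye-security); adjust the mirror and components to taste:

    ```text
    # /etc/apt/sources.list — standard Debian bullseye entries (sketch)
    deb http://ftp.debian.org/debian bullseye main contrib
    deb http://ftp.debian.org/debian bullseye-updates main contrib
    deb http://security.debian.org/debian-security bullseye-security main contrib
    ```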
  8. Proxmox VE 6.4 available

    Thanks, I'll just wait for it. I don't know if it's appropriate to ask here, but how can I safely remove the 5.11 meta package? If I try to remove it, it tries to remove all dependencies, including "proxmox-ve"...
  9. Proxmox VE 6.4 available

    I see, I thought the new 5.11 update you mentioned wasn't a beta, since I could install 5.11 without the pve-test repository... Is that the only way to update the 5.11 kernel, or will it come to the non-test repository in time?
  10. Proxmox VE 6.4 available

    What am I doing wrong? That update didn't show up when I ran apt update. Yesterday I updated my server with apt dist-upgrade -y; apt autoremove -y
  11. Proxmox VE 6.4 available

    There were updates for the kernel pve-kernel-5.4.114-1 yesterday, but not for pve-kernel-5.11. You said we could opt in to 5.11 in the changelogs. Why isn't the optional kernel updated?
  12. recommendations for zfs pool

    You don't want to use consumer SSDs in any backup solution. With ZFS they'll get eaten very fast. Ergo, you actually answered your own question.
  13. Proxmox VE 6.4 available

    How can I opt-in for kernel 5.11 while upgrading from pve 6.3?
  14. [SOLVED] LXC container :: How to enable TCP BBR?

    sysctl net.ipv4.tcp_congestion_control=bbr or via /proc/sys/net/ipv4/tcp_congestion_control
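    To make a setting like this survive reboots on the host, one common approach (a sketch, not taken from the thread; the file name is illustrative) is a sysctl drop-in. BBR is usually paired with the fq queue discipline, and the tcp_bbr module must be available in the running kernel:

    ```text
    # /etc/sysctl.d/90-bbr.conf — hypothetical drop-in file
    net.core.default_qdisc = fq
    net.ipv4.tcp_congestion_control = bbr
    ```

    If tcp_bbr is built as a module, it can be loaded at boot via a file in /etc/modules-load.d/.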
  15. e1000 driver hang

    That it works (without auto eno2) on my hardware contradicts his assumption. If the command above seems to change the settings, you could build a systemd service around it and let it run at boot.
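    A minimal sketch of such a unit, using the ethtool command discussed in this thread; the file name and description are illustrative, and %i is replaced by the interface name given at enable time:

    ```ini
    # /etc/systemd/system/disable-offload@.service — hypothetical template unit
    [Unit]
    Description=Disable TSO/GSO offloading on %i
    After=network.target

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ethtool -K %i tso off gso off

    [Install]
    WantedBy=multi-user.target
    ```

    It would then be enabled per interface, e.g. systemctl enable disable-offload@eno1.service.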
  16. e1000 driver hang

    I mean, if you can't turn off the features with ethtool -K eno1 tso off gso off, changing interface settings won't do much. Edit: maybe take a look at "dmesg" after you try the command above.
  17. e1000 driver hang

    In the vmbr0 post-up you used eno1; that can't work. $IFACE should do it. Maybe your hardware or firmware doesn't support it...
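    For context, in ifupdown's /etc/network/interfaces the $IFACE environment variable expands to the name of the interface whose stanza is being brought up. A post-up under vmbr0 would then look roughly like the sketch below (addresses are placeholders; the ethtool flags are the ones from this thread):

    ```text
    # /etc/network/interfaces — sketch with placeholder addresses
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        post-up /usr/sbin/ethtool -K $IFACE tso off gso off
    ```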
  18. e1000 driver hang

    Does it work if you set the post-up on both interfaces (the virtual one as well)?