Recent content by BlueMatt

  1. Custom pve-kernel fails due to private git repo

    Trying to build a custom Proxmox kernel from https://github.com/proxmox/pve-kernel currently fails as it tries to fetch the ubuntu-kernel submodule from https://github.com/proxmox/mirror_ubuntu-kernels, which 404s (presumably it's a private repo). Is there a way to build a fresh pve-kernel with...
  2. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    kernelnewbies generally has a good list (i.e. https://kernelnewbies.org/Linux_6.17, https://kernelnewbies.org/Linux_6.16, and https://kernelnewbies.org/Linux_6.15); no need to use an LLM here.
  3. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Lol your LLM is pretty bad. "Introduces initial Copy-on-Write support for Ext4" is my favorite (there is new atomic write support, but not CoW).
  4. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Nice, EDAC memory error reporting also now works on Intel 12th-14th gen parts in W680 motherboards with ECC memory (a quick way to read the counters is sketched at the end of this page).
  5. Slow memory leak in 6.8.12-13-pve

    Apparently I spoke too soon on the -ice-fix builds. The buffer_head slab allocations do rise for the first few hours, but eventually seem to flatten out. I see something like 300MB of used buffer_head slab allocations (per slabtop) which of course seems pretty excessive, but total system memory...
  6. Slow memory leak in 6.8.12-13-pve

    As I posted last Thursday, I tested that as well with no change. That kernel does *not* contain the "ice: fix Rx page leak on multi-buffer frames" fix (which is what the -ice-fix builds add), though of course those also don't fix the issue.
  7. Slow memory leak in 6.8.12-13-pve

    Just to confirm what others are seeing, the 6.14.11-2-ice-fix-1-pve kernel shows a similar rate of buffer_head slab leaking as other 6.14 kernels (with 9k MTU ceph on an ice card).
  8. Slow memory leak in 6.8.12-13-pve

    After upgrading to 6.14.11-3-pve I still see `buffer_head` slab increasing slowly over time (a small tracking script is sketched at the end of this page). It's obvious, but it might be worth noting that the ice memory leak fix previously identified isn't in the above list.
  9. ice(?) memory leak in Proxmox 9

    After upgrading to Proxmox 9, my hosts that use the ice driver (i.e. Intel 25G NICs) for the Ceph network (with jumbo frames) show a pretty quick memory leak (something like 10GB/day). I assume it was fixed in upstream kernel commit 84bf1ac85af84d354c7a2fdbdc0d4efc8aaec34b; any chance we could get...
  10. LXC Containers with CephFS Mountpoints Fail to Start at Boot

    It seems the mount was just too late. The host came up at 00:34:34:
    Nov 12 00:34:34 rackbeast corosync[7218]: [QUORUM] This node is within the primary component and will provide service.
    Nov 12 00:34:34 rackbeast corosync[7218]: [QUORUM] Members[4]: 1 2 3 4
    Nov 12 00:34:34 rackbeast...
  11. LXC Containers with CephFS Mountpoints Fail to Start at Boot

    Yea, it's not really a big deal for me, easy to work around as you point out; more a bug report than a seeking-help post :)
  12. LXC Containers with CephFS Mountpoints Fail to Start at Boot

    Basically what the title says. I have a few LXC containers that have filesystem mount points pointing to CephFS filesystems to sync containers across hosts. Sadly, they always fail to start on boot (I assume because CephFS is just slower to start than they are; they have no problem being started...
  13. How to migrate VM from one PVE cluster to another

    Ironically, it works for live migration, just not for offline migration. In any case, I just managed to move a full cluster over a frustratingly slow WAN link; awesome feature! Took a week, but there were zero hiccups outside of having to comment out the checks that prevent migration if the VM was...
  14. How to migrate VM from one PVE cluster to another

    This is an awesome feature, thanks y'all! Is there a way to get it to work with ceph? Currently trying to migrate with Ceph disks hits "ERROR: no export formats for 'cephtwo:vm-1046-disk-0' - check storage plugin support!"
  15. Proxmox VE 8.0 released!

    It appears the 6.2 kernel in Proxmox 8 is much more conservative than the opt-in 6.1 kernel from Proxmox 7 about keeping threads on the same CPU core/thread rather than moving them around. This pretty easily causes individual CPU cores to hit thermal limiting. Is there some way to disable this...
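
For the EDAC note in item 4, here is a minimal Python sketch for reading the counters. It assumes the usual sysfs layout (/sys/devices/system/edac/mc/mc*/ with per-controller ce_count and ue_count attributes), which can vary by platform and kernel; treat it as a starting point rather than anything authoritative.

```python
#!/usr/bin/env python3
"""Minimal sketch: read EDAC corrected/uncorrected error counters.

Assumes the common sysfs layout /sys/devices/system/edac/mc/mc*/ with
ce_count and ue_count attributes; paths may differ by platform/kernel.
"""
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def edac_counts():
    """Yield (controller, corrected, uncorrected) for each memory controller."""
    for mc in sorted(EDAC_ROOT.glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())  # corrected (recoverable) errors
        ue = int((mc / "ue_count").read_text())  # uncorrected errors
        yield mc.name, ce, ue

if __name__ == "__main__":
    for name, ce, ue in edac_counts():
        print(f"{name}: corrected={ce} uncorrected={ue}")
```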
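
For the buffer_head growth discussed in items 5-8, here is a minimal sketch for tracking the cache size over time without keeping slabtop open. It assumes the slabinfo 2.x column layout (name, active_objs, num_objs, objsize, ...) and normally needs root to read /proc/slabinfo.

```python
#!/usr/bin/env python3
"""Minimal sketch: track buffer_head slab usage via /proc/slabinfo.

Assumes the slabinfo 2.x column layout (name, active_objs, num_objs,
objsize, ...); reading /proc/slabinfo normally requires root.
"""
import time

def slab_mib(cache: str = "buffer_head") -> float:
    """Return approximate memory (MiB) held by the named slab cache."""
    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == cache:
                num_objs = int(fields[2])  # total allocated objects
                objsize = int(fields[3])   # object size in bytes
                return num_objs * objsize / (1024 * 1024)
    return 0.0

if __name__ == "__main__":
    # Log once a minute; a slow leak shows up as a steady climb over hours.
    while True:
        print(time.strftime("%H:%M:%S"), f"buffer_head ~{slab_mib():.1f} MiB")
        time.sleep(60)
```

Plotting the once-a-minute output over a day or two makes it easy to tell a genuine slow leak from a cache that merely flattens out after a few hours, as observed in item 5.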