Search results

  1. J

    new ubuntu release

    It officially releases today, but I can’t be the only user who would have preferred to test the beta before full deployment. It’s surprising that LXC support isn’t implemented prior to an official LTS release.
  2. J

    Proxmox VE 8.2 released!

    LXC support is still missing for Ubuntu 24.04. When will this be added? (i.e. inclusion in /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm)
  3. J

    [SOLVED] Raidz expansion

    It was merged into master. It hasn't been cut into any release and won't be until the next major point release. RAIDZ expansion isn't available yet in any stable ZFS release build.
  4. J

    Caching for ceph writes ?

    For write endurance, 99th percentile latency, overall latency, and 4K random write performance.
  5. J

    Caching for ceph writes ?

    Yes, and I would suggest Optane or nothing. Ceph latency is always going to be slow, since a write isn't acknowledged until the last write is committed, which over a network is several latent hops. With the RBD persistent cache, the write is considered committed as soon as it hits the local, on node...
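
    To make the "several latent hops" point concrete, here is a toy latency model of a replicated Ceph write, where the acknowledgement has to wait for the slowest replica to commit. This is only an illustration, not Ceph code, and every number in it is an assumption:

    ```python
    # Toy model (not Ceph code): a replicated write is forwarded by the
    # primary OSD to the replica OSDs, and the client only gets an ack
    # once every copy has committed. All figures below are made up.
    net_client_primary_ms = 0.2                 # assumed client <-> primary round trip
    net_primary_replica_ms = 0.2                # assumed primary <-> replica round trip
    commit_ms = {"primary": 0.5, "replica1": 0.6, "replica2": 2.0}

    # The primary acks only after its own commit and the slowest replica's
    # forward + commit + reply path have both finished.
    replica_path_ms = max(
        net_primary_replica_ms + commit_ms["replica1"],
        net_primary_replica_ms + commit_ms["replica2"],
    )
    total_ms = net_client_primary_ms + max(commit_ms["primary"], replica_path_ms)
    print(f"write acknowledged after ~{total_ms:.1f} ms (bounded by the slowest replica)")
    ```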
  6. J

    Caching for ceph writes ?

    Have you tested RBD persistent writeback caching?
  7. J

    Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    I had similar issues. Eventually it boots past the EFI stub, but then scrolls endless mpt3sas errors, presumably related to the LSI SAS HBA. Rock solid on ZFS 2.1.13 and kernel 6.2. I rolled everything back and was fine.
  8. J

    [SOLVED] Raidz expansion

    The commit hasn’t been merged. You would need to pull and build your own version to test.
  9. J

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

    Two problems. (1) You are basing your capacity on the manufacturer-quoted capacity, which is in TB, while the OS reports the space in TiB. (2) You aren’t accounting for other ZFS overhead. You should only have about 10.2 TiB of usable capacity.
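
    As a rough sketch of the TB-to-TiB arithmetic in the reply above (the exact ZFS overhead varies by pool, so the final ~10.2 TiB figure is taken from the reply rather than computed here):

    ```python
    # Sketch: why a "12 TB" pool looks smaller in the OS. Vendors quote
    # decimal terabytes (10^12 bytes); tools report tebibytes (2^40 bytes).
    # The remaining gap down to ~10.2 TiB is the ZFS overhead mentioned above.
    advertised_tb = 12
    bytes_total = advertised_tb * 10**12

    tib = bytes_total / 2**40
    print(f"{advertised_tb} TB advertised = {tib:.2f} TiB")   # ~10.91 TiB
    ```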
  10. J

    Change WAL and DB location for running (slow) OSD's

    You don’t have to recreate the OSD to move the WAL/DB. No need to rebalance.
  11. J

    Low amount of ZFS ARC hits

    All datasets and zvols are set to “all”.
  12. J

    Low amount of ZFS ARC hits

    Running ZFS on Proxmox 8. Everything is updated to the newest packages. The ARC limit is 768 GB and arc_summary reports a 99% hit ratio. However, actual total ARC access is only 350 MB after 24 hours of uptime on a loaded system. Any reason for such low utilization of the ARC?
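
    If you want to sanity-check what arc_summary reports, the raw counters it reads are exposed in /proc/spl/kstat/zfs/arcstats on ZFS-on-Linux. A minimal sketch of computing the hit ratio from them, assuming the standard OpenZFS arcstats layout, run on the Proxmox host itself:

    ```python
    # Minimal sketch: read the ARC hit/miss counters straight from the
    # kstat file that arc_summary itself parses and compute the hit ratio.
    def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        stats = {}
        with open(path) as f:
            for line in f.readlines()[2:]:      # skip the two kstat header lines
                name, _kind, value = line.split()
                stats[name] = int(value)
        return stats

    stats = read_arcstats()
    hits, misses = stats["hits"], stats["misses"]
    total = hits + misses
    ratio = hits / total if total else 0.0
    print(f"ARC accesses: {total}, hit ratio: {ratio:.2%}")
    print(f"current ARC size: {stats['size'] / 2**30:.1f} GiB")
    ```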
  13. J

    [SOLVED] kvm_nx_huge_page_recovery_worker message in log...

    Seeing the same behavior on 6.2.16-3. Jul 02 14:34:07 maverick kernel: ------------[ cut here ]------------ Jul 02 14:34:07 maverick kernel: WARNING: CPU: 21 PID: 26736 at arch/x86/kvm/mmu/mmu.c:6949 kvm_nx_huge_page_recovery_worker+0x3c4/0x410 [kvm] Jul 02 14:34:07 maverick kernel: Modules linked...
  14. J

    RBD persistent cache support

    DAX is just for Optane DCPMM and NVDIMMs. The SSD mode works great with a few caveats.
  15. J

    Update Systemd?

    So far so good. 26 hours error-free. Thanks again!
  16. J

    Update Systemd?

    I will test the backport. Thank you!
  17. J

    Update Systemd?

    I have a problem with really long NVMe device names in ZFS pools causing systemd errors. The error is: Jun 05 11:44:09 maverick systemd[22172]: zd0p3: Failed to generate unit name from device path: File name too long A fix was committed to v250, but Proxmox is using v247. Current release of...
  18. J

    Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Was this ever resolved? I made the potential mistake of ordering a P4608 and now I see it may not work on kernel 6.2. Was a patch ever cherry-picked?
  19. J

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Any testing yet on the new Call Depth Tracking?
  20. J

    Where is my 1.2TB goes?

    First, drive capacity is marketed using powers of 10 but operating systems measure storage using powers of 2. A 1TB drive will never format to 1TB usable space. As you are aware, RAIDZ1 provides usable capacity for N-1 drives. You also have ZFS overhead. Hence, (4-1) * 0.93 = ~2.79. Proxmox...
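
    To spell out the arithmetic in the reply above (assuming four 1 TB drives, which is what the quoted "(4-1) * 0.93" implies; the 0.93 factor is the reply's own approximation for the TB-to-TiB conversion):

    ```python
    # Sketch of the RAIDZ1 capacity math from the reply above,
    # assuming four drives advertised at 1 TB (10^12 bytes) each.
    drives = 4
    tib_per_drive = 1 * 10**12 / 2**40          # ~0.91 TiB per advertised TB

    usable_tib = (drives - 1) * tib_per_drive   # RAIDZ1 keeps one drive's worth of parity
    print(f"~{usable_tib:.2f} TiB usable before ZFS overhead")   # ~2.73

    # The "missing" space the thread asks about is simply the 4 TB marketing
    # figure minus what the OS reports (units mixed on purpose, as the user sees it).
    print(f"~{drives - usable_tib:.2f} apparently missing out of {drives} TB")
    ```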
