Search results

  1. J

    Caching for ceph writes ?

    For write endurance, 99th percentile latency, overall latency, and 4K random write performance.
  2. J

    Caching for ceph writes ?

    Yes, and I would suggest Optane or nothing. Ceph latency is always going to be slow, since a write isn't acknowledged until the last copy is committed, which over a network means several latent hops. With the RBD persistent cache, the write is considered committed as soon as it hits the local, on-node...
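    For reference, a minimal sketch of enabling the RBD persistent write-back (pwl) cache in SSD mode via ceph.conf; the cache path and size are illustrative placeholders for a fast local NVMe/Optane mount:

      [client]
      rbd_plugins = pwl_cache
      rbd_persistent_cache_mode = ssd
      # local fast device where the per-image cache files live
      rbd_persistent_cache_path = /mnt/pwl-cache
      rbd_persistent_cache_size = 10G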
  3. J

    Caching for ceph writes ?

    Have you tested RBD persistent writeback caching?
  4. J

    Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    I had similar issues. Eventually it boots past the EFI stub, but then scrolls endless mpt3sas errors, presumably related to the LSI SAS HBA. Rock solid on ZFS 2.1.13 and kernel 6.2. I rolled everything back and was fine.
  5. J

    [SOLVED] Raidz expansion

    The commit hasn’t been merged. You would need to pull and build your own version to test.
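    A rough sketch of building OpenZFS from source to try an unmerged change (the branch name is a placeholder; build prerequisites and matching kernel headers are assumed to be installed already):

      git clone https://github.com/openzfs/zfs.git
      cd zfs
      git checkout <feature-branch>     # placeholder for the raidz expansion branch
      sh autogen.sh
      ./configure
      make -j"$(nproc)"
      sudo make install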
  6. J

    [SOLVED] Can only use 7TB from newly created 12TB ZFS Pool?

    Two problems. (1) You are basing your capacity on the manufacturer-quoted capacity, which is in TB, but the OS is reporting the space in TiB. (2) You aren't accounting for other ZFS overhead. You should only have about 10.2 TiB of usable capacity.
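    As a rough worked example of the unit conversion behind that figure (before subtracting ZFS slop space and metadata):

      12 TB = 12 × 10^12 bytes ≈ 12 × 10^12 / 2^40 TiB ≈ 10.9 TiB raw
      10.9 TiB − ZFS overhead (slop space, metadata) → on the order of the ~10.2 TiB quoted above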
  7. J

    Change WAL and DB location for running (slow) OSD's

    You don’t have to recreate the OSD to move the WAL/DB. No need to rebalance.
  8. J

    Low amount of ZFS ARC hits

    All datasets and zvols are set to “all”.
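    Assuming “all” here refers to the ZFS primarycache property (an inference from the thread topic), it can be checked recursively with something like the following; “rpool” is the usual Proxmox pool name and is a placeholder:

      zfs get -r primarycache,secondarycache rpool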
  9. J

    Low amount of ZFS ARC hits

    Running ZFS on Proxmox 8. Everything is updated to the newest packages. The ARC limit is 768 GB and arc_summary reports a 99% hit ratio. However, actual total ARC access is only 350 MB after 24 hours of uptime on a loaded system. Any reason for such low utilization of the ARC?
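    One way to cross-check the configured ARC ceiling against the live counters on an OpenZFS-on-Linux host (paths below are the standard module and kstat locations):

      # configured maximum ARC size, in bytes
      cat /sys/module/zfs/parameters/zfs_arc_max
      # live hit/miss/size counters
      grep -E '^(hits|misses|size) ' /proc/spl/kstat/zfs/arcstats
      arc_summary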
  10. J

    [SOLVED] kvm_nx_huge_page_recovery_worker message in log...

    Seeing same behavior on 6.2.16-3.
      Jul 02 14:34:07 maverick kernel: ------------[ cut here ]------------
      Jul 02 14:34:07 maverick kernel: WARNING: CPU: 21 PID: 26736 at arch/x86/kvm/mmu/mmu.c:6949 kvm_nx_huge_page_recovery_worker+0x3c4/0x410 [kvm]
      Jul 02 14:34:07 maverick kernel: Modules linked...
  11. J

    RBD persistent cache support

    DAX is just for Optane DCPMM and NVDIMMs. The SSD mode works great with a few caveats.
  12. J

    Update Systemd?

    So far so good. 26 hours error free. Thanks again!
  13. J

    Update Systemd?

    I will test the backport. Thank you!
  14. J

    Update Systemd?

    I have a problem with really long NVMe device names in ZFS pools causing systemd errors. The error is:
      Jun 05 11:44:09 maverick systemd[22172]: zd0p3: Failed to generate unit name from device path: File name too long
    A fix was committed in v250, but Proxmox is using v247. Current release of...
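    To confirm what is actually installed on the node before testing a backport (Proxmox VE 7 is based on Debian Bullseye, which ships systemd 247.x):

      systemctl --version | head -n 1
      dpkg -l systemd | tail -n 1
      journalctl -b | grep 'Failed to generate unit name'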
  15. J

    Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Was this ever resolved? I made the potential mistake of ordering a P4608, and now I see it may not work on kernel 6.2. Was a patch ever cherry-picked?
  16. J

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Any testing yet on the new Call Depth Tracking?
  17. J

    Where is my 1.2TB goes?

    First, drive capacity is marketed using powers of 10, but operating systems measure storage using powers of 2. A 1 TB drive will never format to 1 TB of usable space. As you are aware, RAIDZ1 provides usable capacity for N-1 drives. You also have ZFS overhead. Hence, (4-1) * 0.93 = ~2.79. Proxmox...
  18. J

    QEMU 7.2 available on pvetest as of now

    Good question. If you enable the persistent cache at the ceph.conf level, it will by default apply to all disks. I have subsequently disabled it on the TPM and EFI disks. It is now enabled only on VM data disks.
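    RBD options can also be overridden per image instead of globally; a hedged sketch for turning the persistent cache off on a specific EFI disk image (pool and image names are placeholders):

      rbd config image set <pool>/vm-100-disk-efi rbd_persistent_cache_mode disabled
      rbd config image get <pool>/vm-100-disk-efi rbd_persistent_cache_mode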
  19. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    You don't pass through to containers; you pass through to a VM. For LXC containers, you need drivers on the host and in the guest LXC containers. The container is given access to the resource via cgroups v2.
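    For illustration, exposing a device node to an LXC container on the Proxmox host typically ends up as entries like these in /etc/pve/lxc/<CTID>.conf (the NVIDIA device major 195 and paths are just an example; the matching driver still has to exist on both host and guest):

      lxc.cgroup2.devices.allow: c 195:* rwm
      lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file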
  20. J

    QEMU 7.2 available on pvetest as of now

    Live migration works fine. I haven't had an unsafe shutdown to test, but the writeback cache is safe, and there are commands to flush or invalidate the cache in the event of such a crash.
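    The commands referred to are presumably the rbd persistent-cache subcommands (image spec is a placeholder):

      rbd persistent-cache flush <pool>/<image>
      rbd persistent-cache invalidate <pool>/<image>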