Search results

  1. J

    Update Systemd?

    So far so good. 26 hours error free. Thanks again!
  2. J

    Update Systemd?

    I will test the backport. Thank you!
  3. J

    Update Systemd?

    I have a problem with really long NVMe device names in ZFS pools causing systemd errors. The error is: "Jun 05 11:44:09 maverick systemd[22172]: zd0p3: Failed to generate unit name from device path: File name too long" (a quick length check is sketched after these results). A fix was committed to v250, but Proxmox is using v247. Current release of...
  4. J

    Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Was this ever resolved? I made the potential mistake of ordering a P4608, and now I see it may not work on kernel 6.2. Was a patch ever cherry-picked?
  5. J

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Any testing yet on the new Call Depth Tracking?
  6. J

    Where is my 1.2TB goes?

    First, drive capacity is marketed using powers of 10, but operating systems measure storage using powers of 2, so a 1TB drive will never format to 1TB of usable space. As you are aware, RAIDZ1 provides the usable capacity of N-1 drives, and there is ZFS overhead on top. Hence, (4-1) * 0.93 = ~2.79 (worked through in the sketch after these results). Proxmox...
  7. J

    QEMU 7.2 available on pvetest as of now

    Good question. If you enable the persistent cache at the ceph.conf level, it is applied to all disks by default. I have subsequently disabled it on the TPM and EFI disks, so it is now enabled only on the VM data disks (a per-image example is sketched after these results).
  8. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    You don't pass through to containers; you pass through to a VM. For LXC containers, you need drivers on the host and in the guest LXC containers, and the container is given access to the resource via cgroups v2 (see the config sketch after these results).
  9. J

    QEMU 7.2 available on pvetest as of now

    Live migration works fine. I haven't had an unsafe shutdown to test, but the write-back cache is safe, and there are commands to flush or invalidate the cache in the event of such a crash (sketched after these results).
  10. J

    QEMU 7.2 available on pvetest as of now

    Works when disabling the persistent cache on the RBD images. Curiously, it only affects virtio-blk, not virtio-scsi.
  11. J

    QEMU 7.2 available on pvetest as of now

    Ceph is healthy. CephRBD is Replica 3 on 3x nodes with 7x Samsung sm863 OSD per node (21 total). WAL/DB is Optane 900p. RBD persistent write-back cache is also on Optane. This only happens on virtio block. It does not happen on virtio-scsi. I am using librbd because krbd does not support...
  12. J

    QEMU 7.2 available on pvetest as of now

    I receive the following error when starting a VM:
    task started by HA resource agent
    kvm: rbd request failed: cmd 0 offset 0 bytes 540672 flags 0 task.ret -2 (No such file or directory)
    kvm: can't read block backend: No such file or directory
    TASK ERROR: start failed: QEMU exited with code 1
  13. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    I have a problem that just started with the updates pushed to the no-subscription repo overnight. QEMU won't start:
    task started by HA resource agent
    terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer'
    what(): End of buffer
    TASK ERROR: start failed: QEMU...
  14. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Gen 2 isn't affected anyway, so I am of no help. Ignore me and carry on...
  15. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Does your Gen10 have Scalable Gen 1 or Gen 2 chips? Retbleed was patched in 5.19.
  16. J

    Use librbd instead of krbd for LXC?

    Through extensive testing with Optane cache drives, I have been able to increase Queue 1, IO-Depth 1, 4K writes by over 4x using the RBD Persistent Write Log Cache (PWL). However, krbd does not support PWL, only librbd (a client-side config sketch follows these results). In addition, librbd allows tuning such as "rbd_read_from_replica_policy = localize"...
  17. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    6.2 is RC1. Wayyyyyy too early to be asking for it.
  18. J

    CEPH extrem usage of RAM

    Without knowing whether this node is also a Ceph MDS or Manager, and how many OSDs it has, it is impossible to say how much memory Ceph should be consuming. However, Ceph, like all SDS, takes heavy advantage of memory caching (a rough estimate is sketched after these results).
  19. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Testing on three nodes. Nothing has blown up (yet).
  20. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Any plans to enable MG-LRU in 6.1 builds?
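
On result 3: systemd derives a .device unit name by escaping the device path, and that name runs into the usual file-name length limit (roughly 255 characters) when zvol/NVMe paths get very long; the fix landed in systemd v250. A minimal way to check a particular path, assuming systemd-escape is available and using a made-up zvol path as a stand-in:

    # Substitute your own long device path; this one is only a placeholder.
    dev="/dev/zvol/rpool/data/vm-100-disk-0-part3"
    unit="$(systemd-escape --path "$dev").device"
    echo "${#unit} chars: $unit"   # v247 fails once this exceeds the length limit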
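
On result 6: two effects stack, the decimal-to-binary unit conversion (1 TB = 10^12 bytes, roughly 0.909 TiB) and the RAIDZ1 parity cost of one drive, with ZFS overhead on top. A back-of-the-envelope version of the post's arithmetic, with the ~0.93 usable-per-drive factor taken from the post rather than derived:

    # Rough RAIDZ1 usable capacity for N like-sized drives.
    drives=4               # number of drives in the vdev
    per_drive_usable=0.93  # approx. usable TiB per marketed TB, incl. overhead
    echo "scale=2; ($drives - 1) * $per_drive_usable" | bc   # prints 2.79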
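
On result 7: the persistent write-back cache is a librbd client option, so enabling it in ceph.conf affects every image the client opens; it can then be switched off per image. A sketch of the per-image part, with placeholder pool/image names (the actual disk names in that setup are not in the post):

    # Disable the persistent write-back cache on images that don't need it.
    rbd config image set rbd/vm-100-disk-1-efi rbd_persistent_cache_mode disabled
    rbd config image set rbd/vm-100-disk-2-tpm rbd_persistent_cache_mode disabled
    # Check the effective value for one image:
    rbd config image get rbd/vm-100-disk-1-efi rbd_persistent_cache_mode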
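
On result 8: with LXC the kernel driver stays on the host, and the container is merely allowed to open the resulting device node via a cgroup v2 device rule plus a bind mount. A sketch of the relevant container-config lines, using an Intel render node as the example device (the 226:128 major:minor is typical for /dev/dri/renderD128; verify with ls -l /dev/dri on the host):

    # /etc/pve/lxc/<CTID>.conf (illustrative lines, not from the post)
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file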
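
On result 9: the commands alluded to are most likely the image-level persistent-cache operations in the rbd CLI. A sketch with placeholder pool/image names:

    # Flush the persistent write-back cache of an image back to the cluster,
    # or throw it away after an unclean shutdown.
    rbd persistent-cache flush rbd/vm-100-disk-0
    rbd persistent-cache invalidate rbd/vm-100-disk-0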
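
On result 16: both options mentioned are client-side librbd settings, so they belong in the [client] section of ceph.conf on the hypervisor nodes. A sketch with illustrative path and size values (the post does not give the exact ones used):

    # ceph.conf on the client/hypervisor (values are placeholders)
    [client]
        rbd_plugins = pwl_cache
        rbd_persistent_cache_mode = ssd
        rbd_persistent_cache_path = /mnt/optane/pwl
        rbd_persistent_cache_size = 10G
        rbd_read_from_replica_policy = localize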
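
On result 18: the largest, and most tunable, consumer is usually the OSDs themselves, so a first-order estimate for a node is its OSD count times osd_memory_target, plus whatever MON/MGR/MDS daemons run alongside. A quick way to read the configured target (osd.0 is a placeholder id):

    # Per-OSD memory target (defaults to 4 GiB); multiply by the node's OSD
    # count for a rough expectation, then add MON/MGR/MDS overhead if present.
    ceph config get osd.0 osd_memory_target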
