Search results

  1. J

    QEMU 7.2 available on pvetest as of now

    Works when disabling the persistent cache on the RBD images. Curiously, it only affects virtio-blk, not virtio-scsi.
  2. J

    QEMU 7.2 available on pvetest as of now

    Ceph is healthy. Ceph RBD is replica 3 on 3x nodes with 7x Samsung SM863 OSDs per node (21 total). WAL/DB is on Optane 900p. The RBD persistent write-back cache is also on Optane. This only happens on virtio-blk; it does not happen on virtio-scsi. I am using librbd because krbd does not support...
  3. J

    QEMU 7.2 available on pvetest as of now

    I receive the following error when starting a VM:
    task started by HA resource agent
    kvm: rbd request failed: cmd 0 offset 0 bytes 540672 flags 0 task.ret -2 (No such file or directory)
    kvm: can't read block backend: No such file or directory
    TASK ERROR: start failed: QEMU exited with code 1
  4. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    I have a problem that just started with the updates pushed to the no-subscription repo overnight. QEMU won't start:
    task started by HA resource agent
    terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer'
    what():  End of buffer
    TASK ERROR: start failed: QEMU...
  5. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Gen 2 isn't affected anyway, so I am of no help. Ignore me and carry on...
  6. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Does your Gen10 have Scalable Gen 1 or Gen 2 chips? Retbleed was patched in 5.19.
  7. J

    Use librbd instead of krbd for LXC?

    Through extensive testing with Optane cache drives, I have been able to increase Queue 1, IO-Depth 1, 4K writes by over 4X using the RBD Persistent Write Log Cache. However, krbd does not support PWL, only librbd. In addition, librbd allows tuning such as "rbd_read_from_replica_policy = localize"... A rough config sketch for this setup follows after the list.
  8. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    6.2 is RC1. Wayyyyyy too early to be asking for it.
  9. J

    CEPH extrem usage of RAM

    Without knowing whether this node is also a Ceph MDS or Manager, and how many OSDs it has, it is impossible to say how much memory Ceph should be consuming. However, Ceph, like all SDS, makes heavy use of memory caching.
  10. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Testing on three nodes. Nothing has blown up (yet).
  11. J

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Any plans to enable MG-LRU in 6.1 builds?
  12. J

    PVE ARM support for Ampere altra max

    They still can’t execute the x86_64 instruction set, so what do you plan to run at scale on an ARM server?
  13. J

    Does Proxmox 7.x vTPM take advantage of pTPM?

    Correct. vTPM won't utilize a physical TPM. The only purpose for a "real" TPM would be secure boot on the host, which I do not believe PVE (yet) supports.
  14. J

    Ceph 17.2 Quincy Available as Stable Release

    Proxmox does not (to my knowledge) bundle and deploy cephadm.
  15. J

    Ceph uses false osd_mclock_max_capacity_iops_ssd value

    Roughly the average. I discounted major outlier values and adjusted to the most consistent results across the drive type. It has performed well using this method. I have all types (NVMe, SSD, HDD).
  16. J

    Ceph uses false osd_mclock_max_capacity_iops_ssd value

    I benchmark all of my drives multiple times and then set a consistent value for all of the same type across the cluster. The variance between them is easily explained by differences at the time of benchmarking (which occurs automatically when you upgrade or install Ceph). A sample benchmark command follows after the list.
  17. J

    Ceph uses false osd_mclock_max_capacity_iops_ssd value

    Here is the Ceph Manual. To set a custom mclock IOPS value, use the following command: ceph config set osd.N osd_mclock_max_capacity_iops_[hdd,ssd] <value> (a worked example follows after the list). What type of drives are these?
  18. J

    Simple routine apt update && upgrade / migration mandatory?

    When the QEMU and LXC packages change, you have to either migrate or restart the VM / container to have it running on the newest package (example commands follow after the list).
  19. J

    [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Do vGPU drivers allow host access to the GPU, i.e. retain the ability to use the GPU in LXC containers? I have GPUs that are used in multiple LXC containers, and it would be nice to also leverage vGPU in VMs, but I’m not sure if the drivers permit both.
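
Regarding result 7: the post names the RBD persistent write-log (PWL) cache and the rbd_read_from_replica_policy tuning, but not the exact client settings. A minimal sketch of a client-side ceph.conf, assuming an Optane partition mounted at /mnt/optane (path and cache size are placeholders, not the poster's values):

    [client]
    # load the persistent write-log cache plugin (librbd only; krbd ignores these options)
    rbd_plugins = pwl_cache
    # "ssd" mode caches on a regular block device; "rwl" mode requires PMEM/DAX
    rbd_persistent_cache_mode = ssd
    rbd_persistent_cache_path = /mnt/optane/rbd-pwl
    rbd_persistent_cache_size = 1G
    # the read-locality tuning mentioned in the post
    rbd_read_from_replica_policy = localize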
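
Regarding result 16: the post does not say which benchmark tool was used; fio is assumed here purely to illustrate a queue-depth-1, 4K random-write run of the kind discussed in the thread (destructive: point --filename only at an empty, spare drive):

    fio --name=qd1-4k-randwrite --filename=/dev/sdX --direct=1 \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --group_reporting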
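
Regarding result 17: a usage example of the command quoted there; osd.12 and 21500 are hypothetical values, not figures from the thread:

    # pin the mclock IOPS capacity for one SSD-backed OSD
    ceph config set osd.12 osd_mclock_max_capacity_iops_ssd 21500
    # confirm the value the OSD will use
    ceph config get osd.12 osd_mclock_max_capacity_iops_ssd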
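
Regarding result 18: a sketch of the two options mentioned there, using hypothetical guest IDs and node names:

    # live-migrate VM 101 so it continues on the target node's updated QEMU binary
    qm migrate 101 pve-node2 --online
    # or restart container 201 in place so it picks up the updated LXC packages
    pct reboot 201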