Recent content by dominiaz

  1. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    Have you tried using this kernel: https://github.com/KrzysztofHajdamowicz/pve-kernel/releases ?
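    A rough install sketch for a kernel .deb from that releases page, assuming the system boots via proxmox-boot-tool (the package filename below is a placeholder):

        # install the downloaded kernel package
        dpkg -i proxmox-kernel-<version>-pve_amd64.deb
        # refresh boot entries (or run update-grub on legacy GRUB setups), then reboot
        proxmox-boot-tool refresh
        reboot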
  2. LVM shared + ISCSI Lun + lvmlockd + sanlock

    Can it be used with NVMe-oF over RDMA (with multipath) to maximize IOPS performance? So I would just connect the same NVMe-oF target on 2 or 3 Proxmox nodes and set up a normal LVM?
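    Roughly what I mean, as a sketch (target address, NQN, device and VG names are placeholders):

        # on every node: connect the same NVMe-oF target over RDMA
        nvme connect -t rdma -a 192.168.100.10 -s 4420 -n nqn.2024-01.example:target1
        # on one node only: create the shared VG on the resulting NVMe device
        vgcreate --shared vg_nvmeof /dev/nvme1n1
        # on every node: start the lvmlockd lockspace for that VG
        vgchange --lock-start vg_nvmeof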
  3. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    The new kernel from the Proxmox repo is working fine: 6.14.8-2-pve
  4. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    Just use this kernel: https://github.com/KrzysztofHajdamowicz/pve-kernel/releases
  5. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    The kernel is broken with a Mellanox 100G ConnectX-5 VF on Proxmox 9.0 Beta. The card works fine only on the host without VFs, so I think vfio-pci is broken in that release.

    kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from...
  6. Proxmox VE 9.0 BETA released!

    The kernel is broken with a Mellanox 100G ConnectX-5 VF.

    kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from group 89: Permission denied Verify all devices in group 89 are bound to vfio-<bus> or pci-stub and not already in...
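    A quick way to check what that message is asking for, using the group number from the error above:

        # list every device that shares IOMMU group 89 with the VF
        ls /sys/kernel/iommu_groups/89/devices/
        # check which kernel driver the VF is currently bound to (it should be vfio-pci)
        lspci -nnk -s 81:00.1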
  7. Improve virtio-blk device performance using iothread-vq-mapping

    The patch has been updated for PVE 9.0 Beta 1: https://github.com/dominiaz/iothread-vq-mapping
  8. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    Yes, it should work with ZFS. You can test it.
  9. Improve virtio-blk device performance using iothread-vq-mapping

    I made a patch that enables advanced iothread-vq-mapping for virtio-blk devices in Proxmox VE 8.4: https://github.com/dominiaz/iothread-vq-mapping
    Just add e.g. iothread_vq_mapping=8 to your virtio drive configuration.
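    For illustration, a drive line in the VM config could then look like this (VM ID, storage and disk names are placeholders; iothread_vq_mapping=8 is the option the patch adds):

        # /etc/pve/qemu-server/100.conf
        virtio0: local-lvm:vm-100-disk-0,iothread_vq_mapping=8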
  10. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    I made a patch for Proxmox 8.4: https://github.com/dominiaz/iothread-vq-mapping
    Just add e.g. iothread_vq_mapping=8 to your virtio drive configuration.
  11. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    That is a bad point of view. I have fast storage with 8 million IOPS, so the usual 150-200k IOPS per VM (with the current single iothread) is very, very bad. If you want to sell 200k IOPS or 1 million IOPS, then sell it at a fair price. You can always set 1-64 iothreads per VM, but we need to have a choice.
  12. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    https://blogs.oracle.com/linux/post/virtioblk-using-iothread-vq-mapping
    @bund69 Proxmox tests:
    args: -object iothread,id=iothread0 -object iothread,id=iothread1 -object iothread,id=iothread2 -object iothread,id=iothread3 -object iothread,id=iothread4 -object iothread,id=iothread5 -object...
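    For reference, the QEMU syntax described in the Oracle post boils down to something like this (a sketch with two iothreads and a placeholder drive id, not the exact test command):

        # create the iothreads, then map them onto the virtio-blk device's virtqueues
        -object iothread,id=iothread0 \
        -object iothread,id=iothread1 \
        -device '{"driver":"virtio-blk-pci","drive":"drive0","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"}]}'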
  13. ZFS rpool incredible slow on Mirrored NVMEs

    Drop ZFS and use Xiraid Opus for the best IOPS performance.
  14. nvme IOPS 4k performance - same disk with diffrent Host = diffrent results.

    Host1: EPYC 7702P with 512 GB RAM, Micron 7400 Pro 1.92 TB
    Host2: Core Ultra 7 265K with 128 GB RAM, Micron 7400 Pro 1.92 TB
    The Micron 7400 Pro 1.92 TB is mounted as a directory (DIR) with an XFS filesystem on the host. I used fstrim -av before running the tests. VM config: virtio0...
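    For context, a typical 4k random-read test of the kind used for such comparisons might look like this inside the VM (device path, job count and runtime are assumptions):

        # 4k random reads, direct I/O, 4 jobs at queue depth 64, aggregate IOPS reporting
        fio --name=randread4k --filename=/dev/vdb --direct=1 --ioengine=libaio \
            --rw=randread --bs=4k --iodepth=64 --numjobs=4 --runtime=60 --time_based --group_reporting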