Search results

  1. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    Have you tried using this kernel: https://github.com/KrzysztofHajdamowicz/pve-kernel/releases ?
  2. LVM shared + ISCSI Lun + lvmlockd + sanlock

    Can it be used with NVMe-oF over RDMA (with multipath) to maximize IOPS performance? That is, can I just connect the same NVMe-oF target on 2 or 3 Proxmox nodes and set up normal LVM?
  3. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    The new kernel from the Proxmox repo is working fine: 6.14.8-2-pve
  4. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    Just use this kernel: https://github.com/KrzysztofHajdamowicz/pve-kernel/releases
  5. Proxmox 9.0 Beta - kernel issues with vfio-pci on Mellanox 100G.

    The kernel is broken with Mellanox 100G ConnectX-5 VFs on Proxmox 9.0 Beta. The card works fine only on the host without VFs, so I think vfio-pci is broken in that release. kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from...
  6. Proxmox VE 9.0 BETA released!

    The kernel is broken with Mellanox 100G ConnectX-5 VFs. kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from group 89: Permission denied Verify all devices in group 89 are bound to vfio-<bus> or pci-stub and not already in...
  7. Improve virtio-blk device performance using iothread-vq-mapping

    The patch has been updated for PVE 9.0 Beta 1: https://github.com/dominiaz/iothread-vq-mapping
  8. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    Yes, it should work with ZFS. You can test it.
  9. Improve virtio-blk device performance using iothread-vq-mapping

    I made a patch that enables advanced iothread-vq-mapping for virtio-blk devices in Proxmox VE 8.4: https://github.com/dominiaz/iothread-vq-mapping Just add, e.g., iothread_vq_mapping=8 to your virtio drive configuration.
  10. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    I made a patch for Proxmox 8.4: https://github.com/dominiaz/iothread-vq-mapping Just add, e.g., iothread_vq_mapping=8 to your virtio drive configuration.
  11. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    That's the wrong point of view. I have fast storage with 8 million IOPS, so the usual 150-200k IOPS per VM (with the current single iothread) is very, very bad. If you want to sell 200k IOPS or 1 million IOPS, then sell it at a fair price. You could always set 1-64 iothreads per VM, but we need to have the choice.
  12. [Feature Request] Proxmox 9.0 - iothread-vq-mapping

    https://blogs.oracle.com/linux/post/virtioblk-using-iothread-vq-mapping @bund69 proxmox tests: args: -object iothread,id=iothread0 -object iothread,id=iothread1 -object iothread,id=iothread2 -object iothread,id=iothread3 -object iothread,id=iothread4 -object iothread,id=iothread5 -object...
  13. ZFS rpool incredible slow on Mirrored NVMEs

    Drop ZFS and use Xiraid Opus for the best IOPS performance.
  14. nvme IOPS 4k performance - same disk with different Host = different results.

    Host1: EPYC 7702P with 512GB RAM, Micron 7400 Pro 1.92 TB. Host2: Core Ultra 7 265K with 128GB RAM, Micron 7400 Pro 1.92 TB. The Micron 7400 Pro 1.92TB is mounted as a DIR with an XFS filesystem on the host. I ran fstrim -av before making the tests. VM config: virtio0...
  15. Graid SupremeRAID™ now supports Proxmox.

    Hey, unfortunately SupremeRAID SE (Linux Driver 1.7.0 Beta) is not compatible with Proxmox VE 8.2 (kernel 6.8). I made a RAID5 with 4x Micron 7400 Pro 1.92 TB and a RAID0 with a Seagate FireCuda 530. I've tried Graid with an RTX A2000 and an RTX 3060 GPU. Everything works smoothly - read/write/fio on...
  16. ZFS 2.3.0 has been released, how long until its available?

    I will be happy to start testing in a new topic. My hardware is 4x Micron 7400 1.92TB, 2x Micron 7400 3.84 TB, and 2x DL380 G10 connected by Mellanox cards with 100G RDMA.
  17. ZFS 2.3.0 has been released, how long until its available?

    I want to compare IOPS on 2 local NVMe drives vs a raidz1 ZFS 2.3 direct I/O mirror on those same NVMe drives.
  18. ZFS 2.3.0 has been released, how long until its available?

    Very interesting. How many IOPS did you get in the guest on node1 and node3?
  19. Mark spam and send it to mail server with notification on PMG

    I've set the mail filter to Quarantine/Mark Spam (Level 3): - Active Objects - Modify Spam Level - Modify Spam Subject - Spam (Level 3). With that configuration, when I go to the Tracking Center I see the status accepted/delivered for spammed emails. I need some extra notification in the Tracking Center for...
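Results 9 and 10 describe enabling the iothread-vq-mapping patch by adding a parameter to a virtio drive line. A minimal sketch of what such a VM config entry might look like, assuming the patched qemu-server from the linked repo is installed; the VM ID, storage name, and disk name are hypothetical examples, only the iothread_vq_mapping=8 parameter comes from the posts:

```
# /etc/pve/qemu-server/100.conf  (VM ID, storage, and disk names are examples)
# iothread_vq_mapping=8 maps the virtio-blk queues across 8 iothreads
virtio0: local-nvme:vm-100-disk-0,iothread_vq_mapping=8
```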
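Several results above (14, 17, 18) compare 4k IOPS across hosts and storage layouts. A sketch of a fio job file that could drive such a comparison; the device path, queue depth, job count, and runtime are assumptions, not values taken from the posts:

```
; 4k random-read IOPS test (fio job file)
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=32
numjobs=4
runtime=60
time_based=1
group_reporting=1

[nvme-4k-randread]
; assumption: point this at the NVMe device or test file under test
filename=/dev/nvme0n1
```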