Search results

  1. VM I/O Performance with Ceph Storage

    @ITT We have ordered 3 enterprise NVMes, each 1TB in size, and are now hoping for the best... And yes, we set the MTU to 9000 (jumbo-frame check sketch below the results). But - sorry for coming back to these thoughts again - I'm really not sure if something else is going completely wrong in the cluster and this "consumer NVMes are bad" thing of...
  2. VM I/O Performance with Ceph Storage

    7f:13.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) 7f:13.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02)...
  3. VM I/O Performance with Ceph Storage

    More than 16384 characters, so the output has to be split (per-NVMe link-status sketch below the results): # lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" 00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02) LnkSta: Speed unknown (downgraded), Width x0 (downgraded) 00:01.0 PCI bridge...
  4. VM I/O Performance with Ceph Storage

    So here are the missing infos: Adapter cards of this kind: https://www.amazon.de/Adapter-K%C3%BChlk%C3%B6rper-6amLifestyle-Adapterkarte-Support/dp/B07RZZ3TJG/ref=sr_1_3?keywords=pcie+nvme+adapter&qid=1673986009&sprefix=pcie%2Caps%2C196&sr=8-3 PCIe slots they are running in:
  5. VM I/O Performance with Ceph Storage

    Last but not least, to complete the whole picture: fio write test of a VM disk using the Ceph pool with the SAS HDDs as storage: --> IOPS=7, BW=31.9KiB/s, lat=125.31ms IMHO bad values even for SAS HDDs, aren't they? Which again leads us to the assumption that there is something more wrong...
  6. VM I/O Performance with Ceph Storage

    When setting the VM disk cache from "No cache" to "Write Back", things get even worse: --> IOPS=74, BW=297KiB/s, lat=13.5ms. We thought "Write Back" would increase write performance... everything is very confusing...
  7. VM I/O Performance with Ceph Storage

    In the meantime we benchmarked a VM disk backed by the NVMe Ceph pool inside our Debian testing VM with fio, and are a little bit surprised (full command sketch below the results): # fio --ioengine=psync --filename=/dev/sdb --size=9G --time_based --name=fio --group_reporting --runtime=600 --direct=1 --sync=1 --rw=write --bs=4K...
  8. VM I/O Performance with Ceph Storage

    Infos aren't missing, they are given here (https://forum.proxmox.com/threads/vm-i-o-performance-with-ceph-storage.120929/post-526311) -> The disks are these: Crucial P2 CT1000P2SSD8 (1TB), Crucial P2 CT2000P2SSD8 (2TB), connected via PCIe adapter card to PCIe x4 slots
  9. VM I/O Performance with Ceph Storage

    Here are the first benchmarking results: 1.) fio on the NVMes (all have similar values, no matter if 1TB or 2TB; a higher-queue-depth variant is sketched below the results): # fio --ioengine=libaio --filename=/dev/nvme2n1 --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio fio: (g=0): rw=write, bs=(R) 4096B-4096B...
  10. VM I/O Performance with Ceph Storage

    @stepei: you anticipated our current questions, which are going in exactly the same direction :D Actually we are going through the links / PDF from @shanreich's last post and wondering if it is really necessary for Ceph to have enterprise SSDs/NVMes which cost >1K€ per piece (the PDF is dated...
  11. VM I/O Performance with Ceph Storage

    @Neobin Thx for your hints... in the meantime we also came across the fact that it is not the best idea to use consumer NVMes for Ceph when it comes to performance, so far, so good... BUT: is it really realistic that the VM I/O performance (and notice: just when writing small files) when...
  12. VM I/O Performance with Ceph Storage

    OK, then we will give it a try overnight... BTW: just stumbled across this: https://github.com/rook/rook/issues/6964 There the option "bdev_async_discard" is mentioned alongside "bdev_enable_discard" and is also set to "true" (sketch below the results). When being set to false in combination with "bdev_enable_discard" enabled...
  13. VM I/O Performance with Ceph Storage

    @dragon2611 Do you have any idea how long it should take until things calm down again? Until now the latencies have horrible values: VMs with disks on the NVMes are close to being unusable
  14. VM I/O Performance with Ceph Storage

    OK, then we'll just wait a while... fingers crossed ;)
  15. VM I/O Performance with Ceph Storage

    We just used a command like this to enable this option on the OSDs with NVMe disks (sketch, including how to revert it, below the results): ceph config set osd.4 bdev_enable_discard true BUT now things get even worse... immediately the latencies on these disks increase:
  16. VM I/O Performance with Ceph Storage

    -> The disks are these: Crucial P2 CT1000P2SSD8 (1TB), Crucial P2 CT2000P2SSD8 (2TB), connected via PCIe adapter card to PCIe x4 slots -> iperf gave this: -> Setting the VM disk cache to "WriteBack" doesn't really change anything. BUT: setting this to "WriteBack (unsafe)" massively increases...
  17. VM I/O Performance with Ceph Storage

    @shanreich Thx for the hint, but this was the original setup we used until a few days ago. (Meaning: we HAD the public network on the 40Gb NICs as well.) During the trial-and-error investigations of the last weeks, this was the last thing we changed: set the public network from 172.20.81.0/24 to...
  18. VM I/O Performance with Ceph Storage

    (Continued:) Test VM config: installing Gimp on this VM takes 3-4 minutes; we just made a screencast of it and uploaded it to Dropbox: https://www.dropbox.com/s/ws7hmxzdhpgtuaa/InstallGimp.webm?dl=0 Notice the "Extraction" or "Config" phase... this is original speed, not slow motion ;) BTW: For...
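
Sketch for result 1 (MTU 9000): a minimal way to verify that jumbo frames actually pass end-to-end on the Ceph network; the peer address and interface name below are placeholders, not values from the thread.

    # 8972 = 9000 MTU - 20 (IPv4 header) - 8 (ICMP header); -M do sets the
    # Don't-Fragment bit, so the ping fails loudly if any hop only handles a smaller MTU.
    # 10.10.10.2 is a placeholder for another node's Ceph-network address.
    ping -M do -s 8972 -c 4 10.10.10.2

    # Check the MTU actually configured on the Ceph-facing interface (name assumed):
    ip link show bond0 | grep -o 'mtu [0-9]*'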
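
Sketch for result 3 (PCIe link status): instead of grepping the full lspci dump, this loops over just the NVMe controllers and compares the negotiated link (LnkSta) with what the slot advertises (LnkCap); run as root so the capability blocks are readable. The loop is an illustration, not a command from the thread.

    # List NVMe controllers with full PCI addresses, then show capability vs. status.
    for dev in $(lspci -D | grep -i 'non-volatile memory controller' | awk '{print $1}'); do
        echo "== $dev =="
        lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'
    done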
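
Sketch for result 7 (fio inside the VM): the command from the snippet, completed with --numjobs=1 and --iodepth=1, which is an assumption carried over from the raw-NVMe run in result 9; the rest is verbatim. /dev/sdb is the test disk inside the Debian VM and gets overwritten.

    # Sync 4K sequential writes against the VM disk on the NVMe Ceph pool.
    # WARNING: destroys whatever is on /dev/sdb.
    fio --ioengine=psync --filename=/dev/sdb --size=9G --time_based \
        --runtime=600 --direct=1 --sync=1 --rw=write --bs=4K \
        --numjobs=1 --iodepth=1 --group_reporting --name=fio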
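
Sketch for result 9 (fio on the raw NVMes): the queue-depth-1 sync test in the snippet measures per-write commit latency, which is where consumer NVMes without power-loss protection tend to fall apart. The variant below with a deeper queue (parameters are illustrative, not from the thread) helps separate raw throughput from sync latency.

    # Higher parallelism on the same device; WARNING: destroys data on /dev/nvme2n1.
    fio --ioengine=libaio --filename=/dev/nvme2n1 --direct=1 --rw=write --bs=4K \
        --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting \
        --name=fio-qd32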
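
Sketch for result 12 (bdev_async_discard): in the linked rook issue the option is set to true together with bdev_enable_discard, so discards are queued in the background instead of being issued inline. Whether the option exists and what it defaults to depends on the Ceph release, so check the built-in help first; osd.4 is the example OSD from result 15.

    # Confirm the option exists in the running release and read its description.
    ceph config help bdev_async_discard

    # Set it for one NVMe-backed OSD and read it back.
    ceph config set osd.4 bdev_async_discard true
    ceph config get osd.4 bdev_async_discard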
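
Sketch for result 15 (bdev_enable_discard): the command from the snippet, plus how to read the value back and how to revert it once the latency spike shows up. Whether the change needs an OSD restart to fully apply is an assumption here and worth verifying.

    # Enable discard/TRIM handling for one NVMe-backed OSD (command from the thread).
    ceph config set osd.4 bdev_enable_discard true
    ceph config get osd.4 bdev_enable_discard

    # Revert: remove the override so the OSD falls back to the default (false).
    ceph config rm osd.4 bdev_enable_discard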